Dissertations / Theses on the topic 'Finance Australia Econometric models'




Consult the top 50 dissertations / theses for your research on the topic 'Finance Australia Econometric models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Eadie, Edward Norman. "Small resource stock share price behaviour and prediction." Title page, contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09CM/09cme11.pdf.

2

Limkriangkrai, Manapon. "An empirical investigation of asset-pricing models in Australia." University of Western Australia. Faculty of Business, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0197.

Abstract:
[Truncated abstract] This thesis examines competing asset-pricing models in Australia with the goal of establishing the model which best explains cross-sectional stock returns. The research employs Australian equity data over the period 1980-2001, with the major analyses covering the more recent period 1990-2001. The study first documents that existing asset-pricing models, namely the capital asset pricing model (CAPM) and the domestic Fama-French three-factor model, fail to meet Merton's widely applied zero-intercept criterion for a well-specified pricing model. This study instead documents that the US three-factor model provides the best description of Australian stock returns. The three US Fama-French factors are statistically significant for the majority of portfolios consisting of large stocks. However, no significant coefficients are found for portfolios in the smallest size quintile. This result initially suggests that the largest firms in the Australian market are globally integrated with the US market while the smallest firms are not. Therefore, the evidence at this point implies domestic segmentation in the Australian market. This is an unsatisfying outcome, considering that the goal of this research is to establish the pricing model that best describes portfolio returns. Given pervasive evidence that liquidity is strongly related to stock returns, the second part of the major analyses derives this potentially priced factor and incorporates it into the specified pricing models ... This study also introduces a methodology for individual security analysis, which complements the portfolio analysis in this part of the study. The technique makes use of the visual impressions conveyed by histogram plots of the coefficients' p-values. A statistically significant coefficient will have its p-values concentrated below a 5% level of significance; a histogram of p-values will not have a uniform distribution ... 
The final stage of this study employs daily return data to examine which model is indeed the best pricing model and to provide a robustness check on the monthly return results. The daily results indicate that all three US Fama-French factors, namely the US market, size and book-to-market factors, as well as LIQT, are statistically significant, while the Australian three-factor model exhibits only one significant market factor. This study finds that it is in fact the US three-factor model with LIQT, and not the domestic model, that qualifies under the criterion of a well-specified asset-pricing model and best describes Australian stock returns.
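The histogram-of-p-values diagnostic described in the abstract can be illustrated with a short simulation. This is my own sketch, not the thesis's code; the 5% cutoff follows the abstract, while the two sampling distributions are chosen purely for illustration.

```python
import numpy as np

def pvalue_concentration(pvals, alpha=0.05):
    """Fraction of p-values below alpha. Under a true null, p-values are
    uniform on [0, 1], so this fraction should be close to alpha; for a
    genuinely significant coefficient the histogram piles up near zero."""
    return float(np.mean(np.asarray(pvals) < alpha))

rng = np.random.default_rng(0)

# Null case: p-values from tests of a non-existent effect are ~Uniform(0, 1).
null_p = rng.uniform(0.0, 1.0, size=1000)

# Significant case: p-values concentrated near zero (simulated here with a
# Beta(0.1, 1) draw purely for illustration).
effect_p = rng.beta(0.1, 1.0, size=1000)

print(pvalue_concentration(null_p))    # near 0.05
print(pvalue_concentration(effect_p))  # well above 0.05
```

A uniform-looking histogram therefore signals an insignificant coefficient, while strong mass below 0.05 signals significance, which is the visual criterion the study applies across securities.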
3

Shen, Gensheng. "The determinants of capital structure in Chinese listed companies." University of Ballarat, 2008. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/12728.

Abstract:
Traditional financial theories see capital structure as a result of mainly financial, tax and growth factors (Modigliani & Miller, 1958). But corporate governance theories (Jensen & Meckling, 1976) and business strategy theories (Barton & Gordon, 1988) suggest that ownership structure and ownership concentration, product diversification and asset specificity may also influence capital structure. Focusing on the determinants of capital structure in Chinese listed companies, this research goes beyond financial factors to consider business strategy and corporate governance approaches, and their impact on capital structure, in a transitioning Chinese context where institutions, expertise and regulatory processes are different to, but converging on, Western approaches. A panel data set of 1,098 Chinese listed companies for the period 1991 to 2000 was collected from published sources, and conventional and innovative econometric methodologies were used to model a range of relationships between capital structure and its financial and non-financial determinants. The statistical approaches used in this study included the Ordinary Least Squares model and the Linear Mixed Model, a powerful tool for examining panel data where independence of the explanatory variables is not assumed. The analysis also involved Hox's model-building procedures to measure model fit. The capital structure of listed companies in both the Shenzhen Stock Exchange and the Shanghai Securities Exchange is positively related to a firm's tax rate, growth and capital intensity and negatively related to a firm's profit and size. Other financial factors such as tangibility, risk and duration are non-significant. The capital structure of listed companies, particularly in the Shenzhen Stock Exchange, is positively related to product diversification and negatively related to asset specificity. 
The capital structure of listed companies in the Shanghai Securities Exchange is positively related to government ownership and ownership concentration of the largest shareholder and negatively related to legal person ownership and ownership concentration of the ten largest shareholders. The data and modelling support financial and non-financial determinants of capital structure. In particular, information asymmetry, business diversity and asset specificity have a significant impact on capital structure. In addition, the empirical work in the study supports agency cost explanations of debt and equity. Finally, the research demonstrates that the two main financial markets in China, Shenzhen and Shanghai, have operated differently but are converging towards a common norm. The research contributes to the general field of capital structure and provides valuable insights into the nature of the Chinese firm and the evolution of the Chinese financial system.
Doctor of Philosophy
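As a rough illustration of the kind of panel regression this study builds on, here is a pooled OLS sketch on simulated firm-year data. All variable names and coefficient values are invented for illustration; this is not the thesis's dataset, and the Linear Mixed Model step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated firm-year panel: leverage regressed on profit, size and growth.
# The data-generating coefficients below are arbitrary illustrations.
n_firms, n_years = 200, 10
n = n_firms * n_years
profit = rng.normal(0.1, 0.05, size=n)
size = rng.normal(10.0, 1.0, size=n)
growth = rng.normal(0.05, 0.1, size=n)
leverage = (0.4 - 1.0 * profit - 0.02 * size + 0.3 * growth
            + rng.normal(0.0, 0.05, size=n))

# Pooled OLS via least squares: leverage ~ const + profit + size + growth.
X = np.column_stack([np.ones(n), profit, size, growth])
beta, *_ = np.linalg.lstsq(X, leverage, rcond=None)
print(beta)  # roughly [0.4, -1.0, -0.02, 0.3]
```

The negative profit coefficient mirrors the thesis's finding that leverage is negatively related to profit; a mixed model would additionally allow firm-level random effects rather than assuming independent observations.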
4

Shen, Gensheng. "The determinants of capital structure in Chinese listed companies." University of Ballarat, 2008. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/15395.

Doctor of Philosophy
5

Klongkratoke, Pittaya. "Econometric models in foreign exchange market." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7333/.

Abstract:
Given the significance of econometric models in the foreign exchange market, the purpose of this research is to examine more closely some important issues in this area. The research covers exchange rate pass-through into import prices, liquidity risk and expected returns in the currency market, and the common risk factors in currency markets. First, given the importance of exchange rate pass-through in financial economics, the first empirical chapter studies the degree of exchange rate pass-through into import prices in emerging economies and developed countries, using panel evidence for comparison over the period 1970-2009. The pooled mean group estimation (PMGE) is used to investigate the short-run coefficients and error variance. In general, the results show that import prices are affected positively, though incompletely, by the exchange rate. Second, the following study addresses the question of whether there is a relationship between cross-sectional differences in foreign exchange returns and the sensitivities of those returns to fluctuations in liquidity, known as liquidity betas, using a unique dataset of weekly order flow. Finally, the last study follows Lustig, Roussanov and Verdelhan (2011), who show that the large co-movement among exchange rates of different currencies supports a risk-based view of exchange rate determination. The study takes up the identification of a slope factor in exchange rate changes. It first constructs monthly portfolios of currencies, sorted on the basis of their forward discounts: the lowest interest rate currencies are contained in the first portfolio and the highest interest rate currencies in the last. The results show that, comparing the first portfolio with the last, portfolios with higher forward discounts tend to contain currencies with higher real interest rates overall, though fluctuations occur.
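The forward-discount portfolio construction in the final study can be sketched as follows. The discounts and returns are simulated, and the portfolio count and all numbers are illustrative, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cross-section: forward discounts (forward minus spot, in
# logs) for 30 currencies, and toy excess returns correlated with them.
discounts = rng.normal(0.0, 0.02, size=30)
returns = 0.5 * discounts + rng.normal(0.0, 0.01, size=30)

# Sort currencies on the forward discount and split into 5 portfolios:
# the lowest interest rate currencies land in portfolio 1 and the
# highest in portfolio 5, as in the construction described above.
order = np.argsort(discounts)
portfolios = np.array_split(order, 5)
mean_disc = [float(discounts[p].mean()) for p in portfolios]
mean_ret = [float(returns[p].mean()) for p in portfolios]

print([round(d, 4) for d in mean_disc])  # increasing across portfolios
print([round(r, 4) for r in mean_ret])
```

The spread between the last and first portfolios is the raw material for the slope factor: by construction the mean forward discount rises across portfolios, and in the toy data the mean return rises with it.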
6

Wongwachara, Warapong. "Essays on econometric errors in quantitative financial economics." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609240.

7

Marshall, Peter John. "Rational versus anchored traders: exchange rate behaviour in macro models." Monash University, Dept. of Economics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9048.

8

Enzinger, Sharn Emma. "The economic impact of greenhouse policy upon the Australian electricity industry: an applied general equilibrium analysis." Monash University, Centre of Policy Studies, 2001. http://arrow.monash.edu.au/hdl/1959.1/8383.

9

Emiris, Marina. "Essays on macroeconomics and finance." Doctoral thesis, Université Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210764.

10

Venditti, Fabrizio. "Essays on models with time-varying parameters for forecasting and policy analysis." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/24868.

Abstract:
The aim of this thesis is the development and application of econometric models with time-varying parameters in a policy environment. The popularity of these methods has run in parallel with advances in computing power, which have made feasible estimation methods that until the late 1990s would have been infeasible. Bayesian methods, in particular, benefitted from these technological advances, as sampling from complicated posterior distributions of the model parameters became less and less time-consuming. Building on the seminal work of Carter and Kohn (1994) and Jacquier, Polson, and Rossi (1994), Bayesian algorithms for estimating Vector Autoregressions (VARs) with drifting coefficients and volatility were independently derived by Cogley and Sargent (2005) and Primiceri (2005). Despite their increased popularity, Bayesian methods still suffer from some limitations, from both a theoretical and a practical viewpoint. First, they typically assume that parameters evolve as independent driftless random walks. It is therefore unclear whether the output that one obtains from these estimators is accurate when the model parameters are generated by a different stochastic process. Second, some computational limitations remain, as only a limited number of time series can be jointly modeled in this environment. These shortcomings have prompted a new line of research that uses non-parametric methods to estimate random time-varying coefficient models. Giraitis, Kapetanios, and Yates (2014) develop kernel estimators for autoregressive models with random time-varying coefficients and derive the conditions under which such estimators consistently recover the true path of the model coefficients. The method has been suitably adapted by Giraitis, Kapetanios, and Yates (2012) to a multivariate context. 
In this thesis I make use of both Bayesian and non-parametric methods, adapting them (and in some cases extending them) to answer some of the research questions that, as a central bank economist, I have been tackling in the past five years. The variety of empirical exercises proposed throughout the work testifies to the wide range of applicability of these models, be it in the area of macroeconomic forecasting (at both short and long horizons) or in the investigation of structural change in the relationships among macroeconomic variables. The first chapter develops a mixed-frequency dynamic factor model in which the disturbances of both the latent common factor and the idiosyncratic components have time-varying stochastic volatility. The model is used to investigate business cycle dynamics in the euro area, and to perform point and density forecasts. The main result is that introducing stochastic volatility in the model improves both point and density forecast accuracy. Chapter 2 introduces a nonparametric estimation method for a large Vector Autoregression (VAR) with time-varying parameters. The estimators and their asymptotic distributions are available in closed form. This makes the method computationally efficient and capable of handling information sets as large as those typically handled by factor models and Factor Augmented VARs (FAVARs). When applied to the problem of forecasting key macroeconomic variables, the method outperforms constant-parameter benchmarks and large Bayesian VARs with time-varying parameters. The tool is also used for structural analysis to study the time-varying effects of oil price innovations on sectoral U.S. industrial output. Chapter 3 uses a Bayesian VAR to provide novel evidence on changes in the relationship between the real price of oil and real exports in the euro area. 
By combining robust predictions on the sign of the impulse responses obtained from a theoretical model with restrictions on the slope of the oil demand and oil supply curves, oil supply and foreign productivity shocks are identified. The main finding is that from the 1980s onwards the relationship between oil prices and euro area exports has become less negative conditional on oil supply shortfalls and more positive conditional on foreign productivity shocks. A general equilibrium model is used to shed some light on the plausible reasons for these changes. Chapter 4 investigates the failure of conventional constant-parameter models to anticipate the sharp fall in inflation in the euro area in 2013-2014. This forecasting failure can be partly attributed to a break in the elasticity of inflation with respect to the output gap. Using structural break tests and non-parametric time-varying parameter models, this study shows that this elasticity has indeed increased substantially since 2013. Two structural interpretations of this finding are offered. The first is that the increase in the cyclicality of inflation has stemmed from lower nominal rigidities or weaker strategic complementarity in price setting. A second possibility is that real-time output gap estimates are understating the amount of spare capacity in the economy. I estimate that, in order to reconcile the observed fall in inflation with the historical correlation between consumer prices and the business cycle, the output gap would have to be wider by around one third.
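A minimal sketch of a kernel-weighted time-varying parameter estimator in the spirit of the Giraitis-Kapetanios-Yates approach discussed above. The Gaussian kernel, the bandwidth `H`, and the simulated drifting slope are all my illustrative choices, not the thesis's specification.

```python
import numpy as np

def tvp_kernel_ols(y, X, H):
    """Kernel-weighted least squares path of coefficients: beta_t solves
    a locally weighted normal equation with Gaussian weights K((t-s)/H),
    so nearby observations dominate the estimate at each date t."""
    T, k = X.shape
    betas = np.empty((T, k))
    for t in range(T):
        w = np.exp(-0.5 * ((np.arange(T) - t) / H) ** 2)
        Xw = X * w[:, None]
        betas[t] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return betas

rng = np.random.default_rng(3)
T = 400
x = rng.normal(size=T)
true_beta = np.linspace(0.0, 2.0, T)        # slowly drifting slope
y = true_beta * x + rng.normal(0.0, 0.1, size=T)

betas = tvp_kernel_ols(y, x[:, None], H=40.0)
print(betas[0, 0], betas[-1, 0])  # low at the start, near 2 at the end
```

Because everything is in closed form (one weighted least squares solve per date), the estimator scales to far larger systems than simulation-based Bayesian TVP-VARs, which is the computational point the chapter exploits.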
11

Lee, Chui-yan, and 李翠恩. "Inflation in Hong Kong: a structuralist interpretation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B4389382X.

12

Ng, Fo-chun, and 伍科俊. "Some topics in correlation stress testing and multivariate volatility modeling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206653.

Abstract:
This thesis considers two important problems in finance, namely correlation stress testing and multivariate volatility modeling. Correlation stress testing refers to adjusting a correlation matrix to evaluate the potential impact of changes in correlations under financial crises. Very often, some correlations are explicitly adjusted (core correlations), with the remainder left unspecified (peripheral correlations), although it would be more natural for both core and peripheral correlations to vary. However, most existing methods ignore the potential change in peripheral correlations. Building on this idea, two methods are proposed in which the stress impact on the core correlations is transmitted to the peripheral correlations through the dependence structure of the empirical correlations. The first method is based on a Bayesian framework in which a prior for the population correlation matrix is proposed that gives flexibility in specifying the dependence structure of correlations. In order to increase the rate of convergence, the posterior simulation algorithm is extended so that two correlations can be updated in one Gibbs sampler step. To achieve this, an algorithm is developed to find the region of two correlations that keeps the correlation matrix positive definite given that all other correlations are held fixed. The second method is a Black-Litterman approach applied to correlation matrices. A new correlation matrix is constructed by maximizing the posterior density. The proposed method can be viewed as a two-step procedure: first constructing a target matrix in a data-driven manner, and then regularizing the target matrix by minimizing a matrix norm that reasonably reflects the dependence structure of the empirical correlations. Multivariate volatility modeling is important in finance since the variances and covariances of asset returns move together over time. 
Recently, much interest has been aroused by an approach that uses the realized covariance (RCOV) matrix constructed from high-frequency returns as the ex-post realization of the covariance matrix of low-frequency returns. For the analysis of the dynamics of RCOV matrices, the generalized conditional autoregressive Wishart model is proposed. Both the noncentrality matrix and the scale matrix of the Wishart distribution are driven by the lagged values of RCOV matrices, and represent two different sources of dynamics. The proposed model is a generalization of existing models, and accounts for the symmetry and positive definiteness of RCOV matrices without imposing any parametric restriction. Some important properties such as conditional moments, unconditional moments and stationarity are discussed. The forecasting performance of the proposed model is compared with that of existing models. Outliers exist in realized volatility series, which are often decomposed into continuous and jump components. The vector multiplicative error model is a natural choice for jointly modeling these two non-negative components of realized volatility, and is also a popular multivariate time series model for other non-negative volatility measures. Diagnostic checking of such models is considered by deriving the asymptotic distribution of the residual autocorrelations. A multivariate portmanteau test is then devised. Simulation experiments are carried out to investigate the performance of the asymptotic result in finite samples.
Statistics and Actuarial Science
Doctoral
Doctor of Philosophy
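The positive-definiteness region search described in the abstract can be illustrated in a simplified one-correlation form: scan candidate values of a single off-diagonal entry and keep those for which the matrix stays positive definite. This is a numerical sketch of my own, not the thesis's two-correlation algorithm.

```python
import numpy as np

def feasible_range(R, i, j, grid=2001):
    """Numerically scan the values of R[i, j] that keep the correlation
    matrix positive definite with all other entries held fixed (a
    one-correlation analogue of the two-correlation region search)."""
    lo, hi = None, None
    for r in np.linspace(-1.0, 1.0, grid):
        M = R.copy()
        M[i, j] = M[j, i] = r
        if np.all(np.linalg.eigvalsh(M) > 1e-10):
            if lo is None:
                lo = r
            hi = r
    return lo, hi

# 3x3 example: with R[0,1] = R[0,2] = 0.8, the entry R[1,2] cannot be
# too small; analytically the feasible interval is (0.28, 1.0).
R = np.array([[1.0, 0.8, 0.8],
              [0.8, 1.0, 0.0],
              [0.8, 0.0, 1.0]])
lo, hi = feasible_range(R, 1, 2)
print(lo, hi)  # roughly 0.28 and 1.0
```

The example makes the core point of the stress-testing problem concrete: once some (core) correlations are pinned at stressed values, the remaining (peripheral) correlations are only free to move inside a restricted region.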
13

Forrester, David Edward. "Market probability density functions and investor risk aversion for the Australia-US dollar exchange rate." Awarded by: University of New South Wales, School of Economics, 2006. http://handle.unsw.edu.au/1959.4/27199.

Abstract:
This thesis models the Australian-US Dollar (AUD/USD) exchange rate with particular attention being paid to investor risk aversion. Accounting for investor risk aversion in AUD/USD exchange rate modelling is novel, so too is the method used to measure risk aversion in this thesis. Investor risk aversion is measured using a technique developed in Bliss and Panigirtzoglou (2004), which makes use of Probability Density Functions (PDFs) extracted from option markets. More conventional approaches use forward-market pricing or Uncovered Interest Parity. Several methods of estimating PDFs from option and spot markets are examined, with the estimations from currency spot-markets representing an original application of an arbitrage technique developed in Stutzer (1996) to the AUD/USD exchange rate. The option and spot-market PDFs are compared using their first four moments and if estimated judiciously, the spot-market PDFs are found to have similar shapes to the option-market PDFs. So in the absence of an AUD/USD exchange rate options market, spot-market PDFs can act as a reasonable substitute for option-market PDFs for the purpose of examining market sentiment. The Relative Risk Aversion (RRA) attached to the AUD/USD, the US Dollar-Japanese Yen, the US Dollar-Swiss Franc and the US-Canadian Dollar exchange rates is measured using the Bliss and Panigirtzoglou (2004) technique. Amongst these exchange rates, only the AUD/USD exchange rate demonstrates a significant level of investor RRA and only over a weekly forecast horizon. The Bliss and Panigirtzoglou (2004) technique is also used to approximate a time-varying risk premium for the AUD/USD exchange rate. This risk premium is added to the cointegrating vectors of fixed-price and asset monetary models of the AUD/USD exchange rate. An index of Australia's export commodity prices is also added. 
The out-of-sample forecasting ability of these cointegrating vectors is tested relative to a random walk using an error-correction framework. While adding the time-varying risk premium improves this forecasting ability, adding export commodity prices does so by more. Further, including both the time-varying risk premium and export commodity prices in the cointegrating vectors reduces their forecasting ability. So the time-varying risk premium is important for AUD/USD exchange rate modelling, but not as important as export commodity prices.
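A minimal sketch of the four-moment comparison used above to judge whether spot-market PDFs resemble option-market PDFs. The two samples below are toy stand-ins drawn from normal distributions, not estimated AUD/USD densities.

```python
import numpy as np

def first_four_moments(sample):
    """Mean, variance, skewness and excess kurtosis: the four summary
    moments used to compare two estimated density shapes."""
    m = sample.mean()
    c = sample - m
    var = np.mean(c ** 2)
    skew = np.mean(c ** 3) / var ** 1.5
    kurt = np.mean(c ** 4) / var ** 2 - 3.0
    return m, var, skew, kurt

rng = np.random.default_rng(6)

# Toy stand-ins for draws from two density estimates of weekly returns:
# one "option-implied", one "spot-market", with slightly different scales.
option_pdf_draws = rng.normal(0.0, 0.010, size=100_000)
spot_pdf_draws = rng.normal(0.0, 0.011, size=100_000)

print(first_four_moments(option_pdf_draws))
print(first_four_moments(spot_pdf_draws))
```

If the two sets of moments line up, the spot-market PDF is a reasonable substitute for the option-implied one, which is the practical conclusion the thesis draws for the AUD/USD market.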
14

Li, Fuchun. "Testing for and dating structural change in econometric models and nonparametric methods in finance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0017/NQ58145.pdf.

15

Hadjiantoni, Stella. "Numerical methods for the recursive estimation of large-scale linear econometric models." Thesis, Queen Mary, University of London, 2015. http://qmro.qmul.ac.uk/xmlui/handle/123456789/27003.

Abstract:
Recursive estimation is an essential procedure in econometrics which appears in many applications when the underlying dataset or model is modified. Data arrive consecutively, and thus already-estimated models will have to be updated with new available information. Moreover, in many cases, data will have to be deleted from a model in order to remove their effect, either because they are old (obsolete) or because they have been detected to be outliers or extreme values and further investigation is required. The aim of this thesis is to develop numerically stable and computationally efficient methods for the recursive estimation of large-scale linear econometric models. Estimation of multivariate linear models is a computationally costly procedure even for moderate-sized models. In particular, when the model needs to be estimated recursively, its estimation will be even more computationally demanding. Moreover, conventional methods often yield misleading results. The aim is to derive new methods which effectively utilise previous computations, in order to reduce the high computational cost, and which provide accurate results as well. Novel numerical methods for the recursive estimation of the general linear, the seemingly unrelated regressions, the simultaneous equations, and the univariate and multivariate time-varying parameter models are developed. The proposed methods are based on numerically stable strategies which provide accurate and precise results. Moreover, the new methods estimate the unknown parameters of the modified model even when the variance-covariance matrix is singular.
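A textbook version of the updating problem the thesis addresses: recursive least squares adds or deletes one observation via a rank-one (Sherman-Morrison) update. This sketch shows the conventional recursion; the thesis's contribution is numerically stabler large-scale variants, which are not reproduced here.

```python
import numpy as np

def rls_update(beta, P, x, y, remove=False):
    """One recursive-least-squares step: rank-one update of the OLS
    coefficients beta and of P = inv(X'X) when a single observation
    (x, y) is added to, or deleted from, the sample."""
    s = -1.0 if remove else 1.0
    denom = 1.0 + s * (x @ P @ x)
    K = s * (P @ x) / denom             # gain vector
    beta_new = beta + K * (y - x @ beta)
    P_new = P - np.outer(K, x @ P)
    return beta_new, P_new

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, size=100)

# Fit on the first 99 rows, then fold in the last row recursively.
P = np.linalg.inv(X[:99].T @ X[:99])
beta = P @ X[:99].T @ y[:99]
beta_up, P_up = rls_update(beta, P, X[99], y[99])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Deleting the observation again recovers the 99-row estimates.
beta_down, _ = rls_update(beta_up, P_up, X[99], y[99], remove=True)

print(np.allclose(beta_up, beta_full), np.allclose(beta_down, beta))
```

The deletion step (`remove=True`) is exactly the downdating operation the abstract motivates for obsolete data and outliers; in ill-conditioned problems this naive form of it can lose accuracy, which is why the thesis pursues stabler strategies.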
16

Yin, Jiang Ling. "Financial time series analysis." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492929.

17

He, Ting. "Three essays in corporate finance and corporate governance." HKBU Institutional Repository, 2011. http://repository.hkbu.edu.hk/etd_ra/1230.

18

Ji, Inyeob. "Essays on testing some predictions of RBC models and the stationarity of real interest rates." Publisher: University of New South Wales, Economics, 2008. http://handle.unsw.edu.au/1959.4/41441.

Abstract:
This dissertation contains a series of essays that provide empirical evidence for Australia on some fundamental predictions of real business cycle models and on the convergence and persistence of real interest rates. Chapter 1 provides a brief introduction to the issues examined in each chapter and an overview of the methodologies that are used. Tests of various basic predictions of standard real business cycle models for Australia are presented in Chapters 2, 3 and 4. Chapter 2 considers the question of great ratios for Australia. These are ratios of macroeconomic variables that are predicted by standard models to be stationary in the steady state. Using time series econometric techniques (unit root tests and cointegration tests), Australian great ratios are examined. In Chapter 3 a more restrictive implication of real business cycle models than the existence of great ratios is considered. Following the methodology proposed by Canova, Finn and Pagan (1994), the equilibrium decision rules for some standard real business cycle models are tested on Australian data. The final essay on this topic is presented in Chapter 4. In this chapter a large-country, small-country model is used to try to understand the reason for the sharp rise in Australia's share of world output that began around 1990. Chapter 5 discusses real interest rate linkages in the Pacific Basin region. Vector autoregressive models and bootstrap methods are adopted to study financial linkages between East Asian markets, Japan and the US. Given the apparent non-stationarity of real interest rates, a related issue is examined in Chapter 6, viz. the persistence of international real interest rates and estimation of their half-life. The half-life is selected as a means of measuring the persistence of real rates. Bootstrap methods are employed to overcome small-sample issues in the estimation, and a non-standard statistical inference methodology (Highest Density Regions) is adopted. 
Chapter 7 reapplies the Highest Density Regions methodology and bootstrap half-life estimation to the data used in Chapters 2 and 5. This provides a robustness check on the results of the standard unit root tests that were applied to the data in those chapters. The main findings of the thesis are as follows. The long-run implications of real business cycle models are largely rejected by the Australian data. This finding holds both for the existence of great ratios and when the explicit decision rules are employed. When the small open economy features of the Australian economy are incorporated in a two-country RBC model, a country-specific productivity boom seems to provide a possible explanation for the rise in Australia's share of world output. The essays that examine real interest rates suggest the following results. Following the East Asian financial crisis of 1997-98 there appears to have been a decline in the importance of Japan in influencing developments in the Pacific Basin region. In addition, there is evidence that following the crisis Korea's financial market became less insular and more integrated with the US. Finally, results obtained from the half-life estimators suggest that despite the usual findings from unit root tests, real interest rates may in fact exhibit mean-reversion.
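The half-life machinery of Chapters 6 and 7 can be sketched as follows, assuming a simple AR(1) fit and a crude subsampling scheme in place of the chapter's bootstrap. All numbers are illustrative, not estimates from the dissertation.

```python
import numpy as np

def half_life(series):
    """Half-life of deviations implied by an AR(1) fit: regress y_t on
    y_{t-1} and return log(0.5) / log(rho_hat)."""
    y0, y1 = series[:-1], series[1:]
    c0, c1 = y0 - y0.mean(), y1 - y1.mean()
    rho = (c0 @ c1) / (c0 @ c0)
    return np.log(0.5) / np.log(rho)

rng = np.random.default_rng(5)

# Simulate a mean-reverting "real interest rate" with rho = 0.9, whose
# true half-life is log(0.5) / log(0.9), about 6.6 periods.
T, rho = 2000, 0.9
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.normal(0.0, 0.5)

# Half-life estimates on random 500-period windows give a rough sampling
# distribution, echoing the chapter's use of resampling for inference.
est = half_life(y)
boot = [half_life(y[s:s + 500]) for s in rng.integers(0, T - 500, size=200)]
lo, hi = np.percentile(boot, [5, 95])
print(round(est, 1), round(lo, 1), round(hi, 1))
```

Because the half-life is a strongly nonlinear function of the autoregressive coefficient, its sampling distribution is skewed in small samples, which is the motivation for bootstrap intervals and Highest Density Regions rather than standard symmetric confidence bands.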
19

Kummerow, Max F. "A paradigm of inquiry for applied real estate research : integrating econometric and simulation methods in time and space specific forecasting models : Australian office market case study." Curtin University of Technology, School of Economics and Finance, 1997. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=11274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Office space oversupply cost Australia billions of dollars during the 1990-92 recession. Australia, the United States, Japan, the U.K., South Africa, China, Thailand, and many other countries have suffered office oversupply cycles. Illiquid untenanted office buildings impair investors' capital and cash flows, with adverse effects on macroeconomies, financial institutions, and individuals. This study aims to develop improved methods for medium-term forecasting of office market adjustments to inform individual project development decisions and thereby to mitigate office oversupply cycles. Methods combine qualitative research, econometric estimation, system dynamics simulation, and institutional economics. This research operationalises a problem-solving research paradigm concept advocated by Ken Lusht. The research is also indebted to the late James Graaskamp, who was successful in linking industry and academic research through time and space specific feasibility studies to inform individual property development decisions. Qualitative research and literature provided a list of contributing causes of office oversupply, including random shocks, faulty forecasting methods, fee-driven deals, the prisoner's dilemma game, system dynamics (lags and adjustment times), land use regulation, and capital market issues. Rather than choosing among these, they are all considered to be causal to varying degrees. Moreover, there is synergy between combinations of these market imperfections. Office markets are complex evolving human-designed systems (not time invariant), so each cycle has unique historical features. Data on Australian office markets were used to estimate office rent adjustment equations. Simulation models in spreadsheet and system dynamics software then integrate additional information with the statistical results to produce demand, supply, and rent forecasts. Results include models for rent forecasting and models for analysis related to policy and system redesign. The dissertation ends with two chapters on institutional reforms whereby better information might find application to improve market efficiency.
Keywords: office rents, rent adjustment, office market modelling, forecasting, system dynamics.
20

方柏榮 and Pak-wing Fong. "Topics in financial time series analysis: theory and applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31241669.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Casas, Villalba Isabel. "Statistical inference in continuous-time models with short-range and/or long-range dependence." University of Western Australia. School of Mathematics and Statistics, 2006. http://theses.library.uwa.edu.au/adt-WU2006.0133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this thesis is to estimate the volatility function of continuous-time stochastic models. The estimation of the volatility of the following well-known international stock market indexes is presented as an application: Dow Jones Industrial Average, Standard and Poor's 500, NIKKEI 225, CAC 40, DAX 30, FTSE 100 and IBEX 35. This estimation is studied from two different perspectives: (a) assuming that the volatility of the stock market indexes displays short-range dependence (SRD), and (b) extending the previous model to processes with long-range dependence (LRD), intermediate-range dependence (IRD) or SRD. Under the efficient market hypothesis (EMH), the compatibility of the Vasicek, the CIR, the Anh and Gao, and the CKLS models with the stock market indexes is tested. Nonparametric techniques are presented to test the affinity of these parametric volatility functions with the volatility observed from the data. Under the assumption of possible statistical patterns in the volatility process, a new estimation procedure based on Whittle estimation is proposed. This procedure is justified theoretically and empirically. In addition, its application to the stock market indexes provides interesting results.
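Whittle-type estimation works in the frequency domain, matching the periodogram of a series to a parametric spectral density. A minimal, generic illustration of the idea (not the thesis's actual procedure) is a grid-search local Whittle estimate of the memory parameter d, where d = 0 corresponds to SRD and 0 < d < 0.5 to LRD; the bandwidth rule `n ** 0.65` is an arbitrary choice for the sketch.

```python
import numpy as np

def local_whittle_d(x, m=None):
    """Local Whittle estimate of the memory parameter d from the first m
    Fourier frequencies, by grid search over the concentrated objective."""
    n = len(x)
    m = m or int(n ** 0.65)          # bandwidth: how many low frequencies to use
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    # periodogram ordinates at those frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)

    def R(d):
        # concentrated local Whittle objective for spectral density ~ lam^(-2d)
        return np.log(np.mean(I * lam ** (2 * d))) - 2 * d * np.mean(np.log(lam))

    grid = np.linspace(-0.49, 0.49, 981)
    return grid[np.argmin([R(d) for d in grid])]

# White noise has no memory, so the estimate should land near d = 0.
rng = np.random.default_rng(0)
print(local_whittle_d(rng.normal(size=4096)))
```

A full Whittle procedure would use a complete parametric spectral density over all frequencies; the local version above only exploits the behaviour near frequency zero, which is what distinguishes LRD from SRD.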
22

Armstrong, Mark. "Pricing in multiproduct firms." Thesis, University of Oxford, 1993. http://ora.ox.ac.uk/objects/uuid:3af11153-479b-48b6-a8ea-3aa2318effb6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is a theoretical analysis of optimal pricing by firms when consumer demands are uncertain. The purpose is to extend the familiar literature on single-product nonlinear pricing in two directions: to cases where the firm is regulated and to the case where the firm produces several products. Chapter 1 embeds these problems into the general setting of models of asymmetric information and, as well as covering existing work on the pricing decisions of firms facing adverse selection, discusses other areas including repeated contracts, auctions, signalling and the uses of what is known as the 'first-order approach'. Chapter 2 analyzes nonlinear pricing by a regulated single-product firm. As an alternative to requiring the firm to offer a given linear tariff two different types of regulation which allow nonlinear pricing are considered, namely, average revenue regulation and optional tariff regulation. Chapter 3 introduces the topic of multiproduct pricing when consumers have differing tastes for the various goods. The important simplifying assumption is that consumers wish to buy either one unit of a good or none at all. There are three main results: if consumers' taste parameters are continuously distributed then the firm will not offer all goods to all consumers; in the symmetric two-good case it is shown that (subject to a kind of 'hazard rate' condition) the firm will offer the bundle of two goods at a discount compared with the charge for the two goods separately; and the pricing strategy when the number of goods becomes large is solved approximately. Chapter 4 relaxes the assumption of unit demands and uses differential methods to analyze the multiproduct nonlinear pricing problem. In the symmetric case when taste parameters are continuously distributed the firm will choose to exclude some low-demand consumers from the market. 
It is shown that when parameters are independently distributed the firm will wish to introduce a degree of cross-dependence into its tariff. Sufficient conditions for a tariff to be optimal are derived, and any tariff which satisfies these conditions necessarily induces 'pure bundling', so that once a consumer decides to participate in the market at all she will choose to buy all goods. A class of cases is solved explicitly using these sufficient conditions. Since other cases may be hard to solve analytically, a procedure for numerically generating solutions in the two-good case is described and two further solutions are presented.
23

Corres, Stelios. "Essays on the dynamics of qualitative aspects of firms' behavior." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

King, Daniel Jonathan. "Modelling stock return volatility dynamics in selected African markets." Thesis, Rhodes University, 2013. http://hdl.handle.net/10962/d1006452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Stock return volatility has been shown to occasionally exhibit discrete structural shifts. These shifts are particularly evident in the transition from ‘normal’ to crisis periods, and tend to be more pronounced in developing markets. This study aims to establish whether accounting for structural changes in the conditional variance process, through the use of Markov-switching models, improves estimates and forecasts of stock return volatility over those of the more conventional single-state (G)ARCH models, within and across selected African markets for the period 2002-2012. In the univariate portion of the study, the performances of various Markov-switching models are tested against a single-state benchmark model through the use of in-sample goodness-of-fit and predictive ability measures. In the multivariate context, the single-state and Markov-switching models are comparatively assessed according to their usefulness in constructing optimal stock portfolios. It is found that, even after accounting for structural breaks in the conditional variance process, conventional GARCH effects remain important to capturing the heteroscedasticity evident in the data. However, those univariate models which include a GARCH term are shown to perform comparatively poorly when used for forecasting purposes. Additionally, in the multivariate study, the use of Markov-switching variance-covariance estimates improves risk-adjusted portfolio returns when compared to portfolios that are constructed using the more conventional single-state models. While there is evidence that the use of some Markov-switching models can result in better forecasts and higher risk-adjusted returns than those models which include GARCH effects, the inability of the simpler Markov-switching models to fully capture the heteroscedasticity in the data remains problematic.
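The regime-switching idea in this abstract can be sketched with a Hamilton filter for a two-state switching-variance model: given the two regime volatilities and the transition matrix, the filter recursively computes the probability of being in each regime at each date. This is a minimal illustration with fixed, invented parameter values (the study's actual models are estimated, and may include GARCH terms within regimes).

```python
import numpy as np

def hamilton_filter(r, sigmas, P):
    """Filtered regime probabilities for a 2-state switching-variance model.
    r: demeaned returns; sigmas: (sigma_0, sigma_1); P[i, j] = Pr(s_t=j | s_{t-1}=i)."""
    # initialize at the stationary distribution of the transition matrix
    evals, evecs = np.linalg.eig(P.T)
    pi0 = np.real(evecs[:, np.argmax(np.real(evals))])
    pi0 /= pi0.sum()
    probs = np.empty((len(r), 2))
    loglik, pred = 0.0, pi0
    for t, rt in enumerate(r):
        # Gaussian density of the return under each regime
        dens = np.exp(-0.5 * (rt / sigmas) ** 2) / (np.sqrt(2 * np.pi) * sigmas)
        joint = pred * dens
        lik = joint.sum()
        loglik += np.log(lik)
        probs[t] = joint / lik        # filtered Pr(s_t | r_1..t)
        pred = probs[t] @ P           # predicted Pr(s_{t+1} | r_1..t)
    return probs, loglik

# Illustrative calm regime (sigma = 1) and crisis regime (sigma = 3) with a
# persistent transition matrix; all numbers are invented for the sketch.
rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05], [0.10, 0.90]])
sigmas = np.array([1.0, 3.0])
states = [0]
for _ in range(499):
    states.append(rng.choice(2, p=P[states[-1]]))
states = np.array(states)
r = rng.normal(scale=sigmas[states])
probs, ll = hamilton_filter(r, sigmas, P)
# filtered Pr(crisis) should be higher, on average, in the true crisis periods
print(probs[states == 1, 1].mean(), probs[states == 0, 1].mean())
```

In estimation, the log-likelihood returned by the filter would be maximized over the volatilities and transition probabilities; the filtered probabilities are what distinguish the structural-shift view of volatility from a single-state GARCH fit.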
25

Weiss, Maurício Andrade 1983. "Dinâmica dos fluxos financeiros para os países em desenvolvimento no contexto da globalização financeira." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/286432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Orientador: Daniela Magalhães Prates
Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Economia
Made available in DSpace on 2018-08-25T03:18:03Z (GMT). No. of bitstreams: 1 Weiss_MauricioAndrade_D.pdf: 3099937 bytes, checksum: f12ba1741353723f160b9105d03c2349 (MD5) Previous issue date: 2014
Abstract: One of the key features of the dynamics of international finance in the context of financial globalization is the volatility of capital flows. This volatility is due to the dominance of the financial over the productive logic in contemporary capitalism and to the current characteristics of the international monetary system (IMS). In periods of high risk appetite, capital flows tend to increase their share in developing countries. In periods of high liquidity preference, however, these flows migrate to developed countries, mainly to the United States. This thesis aims to contribute to the empirical literature on the determinants of capital flows to developing countries by means of an econometric panel data model estimated with different methods: ordinary least squares, fixed effects, random effects, first differences and the generalized method of moments. The results corroborate earlier studies that showed a predominance of external factors over internal ones in determining capital flows. Also noteworthy is the CBOE VIX volatility indicator, which proved significant, with the expected sign, in all fifteen equations tested.
Doutorado
Teoria Economica
Doutor em Ciências Econômicas
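The contrast among the panel estimators listed in the abstract can be sketched on synthetic data: when a country fixed effect is correlated with the regressor, pooled OLS is biased while the within (fixed-effects) estimator is not. All dimensions and parameter values below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, beta = 50, 15, 0.8   # countries, periods, true slope (made-up values)

# Country fixed effect correlated with the regressor: this is exactly the
# situation where pooled OLS is inconsistent but the within estimator is not.
alpha = rng.normal(size=N)
x = alpha[:, None] + rng.normal(size=(N, T))
y = alpha[:, None] + beta * x + rng.normal(scale=0.5, size=(N, T))

def pooled_ols(y, x):
    """Pooled OLS slope, ignoring the panel structure."""
    X = np.column_stack([np.ones(x.size), x.ravel()])
    return np.linalg.lstsq(X, y.ravel(), rcond=None)[0][1]

def within(y, x):
    """Fixed-effects (within) slope: demeaning by unit wipes out alpha_i."""
    xd = x - x.mean(axis=1, keepdims=True)
    yd = y - y.mean(axis=1, keepdims=True)
    return (xd.ravel() @ yd.ravel()) / (xd.ravel() @ xd.ravel())

print(f"pooled OLS: {pooled_ols(y, x):.3f}  within: {within(y, x):.3f}  true: {beta}")
```

First differencing removes the fixed effect in the same way as demeaning, while GMM (e.g. Arellano-Bond) is needed once a lagged dependent variable enters the equation; the sketch covers only the simplest static case.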
26

Kotak, Akshay. "Essays on financial intermediation, stability, and regulation." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:112b32a7-fa60-4baa-a325-15e014798cea.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Modern banking theories provide a host of explanations for the existence of intermediaries, highlight their important influence on economic growth, delineate the risks inherent in the services they provide, and illustrate the market failures and real costs of bank failures that precipitate the need for regulation and oversight of the sector. This thesis is a collection of three essays that looks at three of these key aspects of financial intermediaries - the development of financial intermediaries, the function of the lender of last resort that has emerged as an important part of the safety net afforded to financial intermediaries, and the occurrence of financial crises. The first chapter of this thesis provides an introduction to the academic literature on financial intermediation covering different theories put forward to explain their emergence, and highlighting the risks inherent in their operation. It emphasizes the crucial functions they perform in the economy and makes a case for regulation and oversight of the sector to reduce the incidence and alleviate the effects of financial crises. The second chapter seeks to determine the policy and institutional factors that influence the development of financial institutions as measured across three dimensions - depth, efficiency, and stability. Applying the concept of the financial possibility frontier, developed by Beck and Feyen (2013) and formalized by Barajas et al. (2013b), we determine key policy variables affecting the gap between actual levels of development and benchmarks predicted by structural variables. Our dynamic panel estimation shows that inflation, trade openness, institutional quality, and banking crises significantly affect financial development. We also assess the impact of the policy variables across the different dimensions of development thereby identifying complementarities and potential trade-offs for policy makers. 
The third chapter models the role of the lender of last resort (LoLR) in a general equilibrium framework. We allow for heterogeneous agents and a risk-averse banking sector, and incorporate the frictions of endogenous default, liquidity, and money. Adverse supply shocks in monetary endowments trigger default, leading to deterioration in the value of bank assets, and subsequent bank illiquidity in some states of the world. LoLR intervention is then assessed with regards to its economy-wide effect on welfare, bank profitability, and the level of default. The results provide a justification for constructive ambiguity. The fourth chapter aims to provide an explanation for the incidence of financial crises by combining insights from agency theory and Minsky's financial instability hypothesis (Minsky, 1992) in a model with endogenous default. Our theoretical model shows that the probability of a financial crisis increases as the quality of shareholder information decreases. We then develop a measure for the quality of shareholder information following Simon (1989) and show that the market-wide quality of shareholder information: i) is poor (with no trend) in the Pre-SEC period (1840 to 1934); ii) improves substantially following the SEC reforms; and iii) gradually declines starting in the 1960s/70s until it is now back to pre-SEC levels. This matches up with the standard list of US financial crises (as in Reinhart and Rogoff 2009; Reinhart 2010) and supports our hypothesis that the likelihood of a financial crisis increases with deterioration in the quality of shareholder information.
27

Okumu, Ibrahim Mike. "Essays on governance, public finance, and economic development." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/5282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is composed of three distinct but related essays. The first essay studies the role of the size of the economy in mitigating the impact of public sector corruption on economic development. The analysis is based on a dynamic general equilibrium model in which growth occurs endogenously through the invention and manufacture of new intermediate goods that are used in the production of output. Potential innovators decide to enter the market considering the fraction of future profits that may be lost to corruption. We find that depending on the number of times bribes are demanded, the size of the economy may be an important factor in determining the effects of corruption on innovation and economic growth. The second essay presents an occupational choice model in which a household can choose formal entrepreneurship, informal entrepreneurship, or a subsistence livelihood. Credit market constraints and initial wealth conditions (bequest) determine an agent's occupational choice. Corruption arises when bureaucrats exchange investment permits for bribes. Corruption worsens credit market constraints. Equilibrium with corruption is characterised by an increase (decrease) in informal (formal) entrepreneurship and a decrease in formal entrepreneurship wealth. Since corruption-induced credit-constrained households choose informal entrepreneurship over subsistence livelihood income in the formal sector, the informal economy is shown to mitigate the extent of income inequality. The third essay explains the role of bureaucratic corruption in undermining public service delivery, public finance, and economic development through incentivising tax evasion. The analysis is based on a dynamic general equilibrium model in which a taxable household observes the quality of public services and decides whether or not to fulfil its tax obligation.
Bureaucratic corruption compromises the quality of public services such that a taxable household develops incentives to evade tax payment. We show that corruption-induced tax evasion increases the likelihood of a budget deficit, renders increases in tax payable counter-productive, and aggravates the negative effect of bureaucratic corruption on economic development.
28

Geissler, Johannes. "Lower inflation : ways and incentives for central banks." Thesis, University of St Andrews, 2011. http://hdl.handle.net/10023/1719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is a technical inquiry into remedies for high inflation. At its centre is the usual trade-off between inflation aversion on the one hand and some benefit from inflation via Phillips curve effects on the other. The most remarkable and pioneering work for our purposes is the famous Barro-Gordon model (see Barro & Gordon 1983a and Barro & Gordon 1983b). Parts of this model form the basis of our work here. Though it is well known that the discretionary equilibrium is suboptimal, the question arises of how to overcome this. We will introduce four different models, each of them giving a different perspective and way of thinking. Each model shows a (sometimes slightly) different way in which a central banker might deliver lower inflation than the one-shot Barro-Gordon game would at first glance suggest. To cut a long story short, we provide a number of reasons for believing that the purely discretionary equilibrium may be rarely observed in real life. The thesis also provides new insights for derivative pricing theories. In particular, the potential role of financial markets and instruments is a major focus. We investigate how such instruments can be used for monetary policy; conversely, these financial securities have a strong influence on the behavior of the central bank. Taking this into account, in chapters 3 and 4 we develop a new method of pricing inflation-linked derivatives. The latter, to the best of our knowledge, has never been done before: (Persson, Persson & Svenson 2006), one of very few economic works taking financial markets into account, is purely focused on the social planner's problem. A purely game-theoretic approach is taken in chapter 2 to modify the original Barro-Gordon model. Here we deviate from purely rational, single-period thinking.
Finally, in chapter 5 we model an asymmetric information situation where the central banker faces a trade-off between his current objective on the one hand and the benefit arising from imperfectly informed agents on the other. In that sense the central bank is also concerned about its reputation.
29

Wong, Siu-kei, and 黃紹基. "The performance of property companies in Hong Kong: a style analysis approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B26720401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Malherbe, Frédéric. "Essays on the macroeconomic implications of information asymmetries." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Along this dissertation I propose to walk the reader through several macroeconomic implications of information asymmetries, with a special focus on financial issues. This exercise is mainly theoretical: I develop stylized models that aim at capturing macroeconomic phenomena such as self-fulfilling liquidity dry-ups, the rise and the fall of securitization markets, and the creation of systemic risk. The dissertation consists of three chapters. The first one proposes an explanation of self-fulfilling liquidity dry-ups. The second chapter proposes a formalization of the concept of market discipline and an application to securitization markets as risk-sharing mechanisms. The third one offers a complementary analysis to the second, as the rise of securitization is presented as the banker's optimal response to strict capital constraints.

Two concepts that do not have unique acceptations in economics play a central role in these models: liquidity and market discipline. The liquidity of an asset refers to the ability of its owner to transform it into current consumption goods. Secondary markets for long-term assets thus play an important role in that respect. However, such markets might be illiquid due to adverse selection.

In the first chapter, I show that: (1) when agents expect a liquidity dry-up on such markets, they optimally choose to self-insure through the hoarding of non-productive but liquid assets; (2) this hoarding behavior worsens adverse selection and dries up market liquidity; (3) such liquidity dry-ups are Pareto-inefficient equilibria; (4) the government can rule them out. Additionally, I show that idiosyncratic liquidity shocks à la Diamond and Dybvig have stabilizing effects, which is at odds with the banking literature. The main contribution of the chapter is to show that market breakdowns due to adverse selection are highly endogenous to past balance-sheet decisions.

I consider that agents are under market discipline when their current behavior is influenced by future market outcomes. A key ingredient for market discipline to be at play is that the market outcome depends on information that is observable but not verifiable (that is, information that cannot be proved in court and, consequently, upon which enforceable contracts cannot be based). In the second chapter, after introducing this novel formalization of market discipline, I ask whether securitization really contributes to better risk-sharing: I compare it with other mechanisms that differ in the timing of risk-transfer. I find that for securitization to be an efficient risk-sharing mechanism, it requires market discipline to be strong and adverse selection not to be severe. This seems to seriously restrict the set of assets that should be securitized for risk-sharing motives. Additionally, I show how ex-ante leverage may mitigate interim adverse selection in securitization markets and therefore enhance ex-post risk-sharing. This is interesting because high leverage is usually associated with "excessive" risk-taking.

In the third chapter, I consider risk-neutral bankers facing strict capital constraints; their capital is indeed required to cover the worst-case-scenario losses. In such a set-up, I find that: (1) the banker's optimal autarky response is to diversify lower-tail risk and maximize leverage; (2) securitization helps to free up capital and to increase leverage, but distorts incentives to screen loan applicants properly; (3) market discipline mitigates this problem, but if it is overestimated by the supervisor, it leads to excess leverage, which creates systemic risk. Finally, I consider opaque securitization and I show that the supervisor: (4) faces uncertainty about the trade-off between the size of the economy and the probability and severity of a systemic crisis; and (5) generally cannot set capital constraints at the socially efficient level.
Doctorat en Sciences économiques et de gestion

31

Lenza, Michèle. "Essays on monetary policy, saving and investment." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis addresses three relevant macroeconomic issues: (i) why

Central Banks behave so cautiously compared to optimal theoretical

benchmarks, (ii) do monetary variables add information about

future Euro Area inflation to a large amount of non monetary

variables and (iii) why national saving and investment are so

correlated in OECD countries in spite of the high degree of

integration of international financial markets.

The process of innovation in the elaboration of economic theory

and statistical analysis of the data witnessed in the last thirty

years has greatly enriched the toolbox available to

macroeconomists. Two aspects of such a process are particularly

noteworthy for addressing the issues in this thesis: the

development of macroeconomic dynamic stochastic general

equilibrium models (see Woodford, 1999b for an historical

perspective) and of techniques that enable to handle large data

sets in a parsimonious and flexible manner (see Reichlin, 2002 for

an historical perspective).

Dynamic stochastic general equilibrium models (DSGE) provide the

appropriate tools to evaluate the macroeconomic consequences of

policy changes. These models, by exploiting modern intertemporal

general equilibrium theory, aggregate the optimal responses of

individual as consumers and firms in order to identify the

aggregate shocks and their propagation mechanisms by the

restrictions imposed by optimizing individual behavior. Such a

modelling strategy, uncovering economic relationships invariant to

a change in policy regimes, provides a framework to analyze the

effects of economic policy that is robust to the Lucas'critique

(see Lucas, 1976). The early attempts of explaining business

cycles by starting from microeconomic behavior suggested that

economic policy should play no role since business cycles

reflected the efficient response of economic agents to exogenous

sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982}

and, more recently, King and Rebelo, 1999). This view was challenged by

several empirical studies showing that the adjustment mechanisms

of variables at the heart of macroeconomic propagation mechanisms

like prices and wages are not well represented by efficient

responses of individual agents in frictionless economies (see, for

example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate

some sources of nominal and real rigidities in the DSGE framework

and allow the study of the optimal policy reactions to inefficient

fluctuations stemming from frictions in macroeconomic propagation

mechanisms.

Against this background, the first chapter of this thesis sets up

a DSGE model in order to analyze optimal monetary policy in an

economy with sectorial heterogeneity in the frequency of price

adjustments. Price setters are divided in two groups: those

subject to Calvo type nominal rigidities and those able to change

their prices at each period. Sectorial heterogeneity in price

setting behavior is a relevant feature in real economies (see, for

example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro

Area). Hence, neglecting it would lead to an understatement of the

heterogeneity in the transmission mechanisms of economy wide

shocks. In this framework, Aoki (2001) shows that a Central

Bank maximizing social welfare should stabilize only inflation in

the sector where prices are sticky (hereafter, core inflation).

Since complete stabilization is the only true objective of the

policymaker in Aoki (2001) and, hence, is not only desirable

but also implementable, the equilibrium real interest rate in the

economy is equal to the natural interest rate irrespective of the

degree of heterogeneity that is assumed. This would lead to

conclude that stabilizing core inflation rather than overall

inflation does not imply any observable difference in the

aggressiveness of the policy behavior. While maintaining the

assumption of sectorial heterogeneity in the frequency of price

adjustments, this chapter adds non negligible transaction

frictions to the model economy in Aoki (2001). As a

consequence, the social welfare maximizing monetary policymaker

faces a trade-off among the stabilization of core inflation,

economy wide output gap and the nominal interest rate. This

feature reflects the trade-offs between conflicting objectives

faced by actual policymakers. The chapter shows that the existence

of this trade-off makes the aggressiveness of the monetary policy

reaction dependent on the degree of sectorial heterogeneity in the

economy. In particular, in presence of sectorial heterogeneity in

price adjustments, Central Banks are much more likely to behave

less aggressively than in an economy where all firms face nominal

rigidities. Hence, the chapter concludes that the excessive

caution in the conduct of monetary policy shown by actual Central

Banks (see, for example, Rudebusch and Svennsson, 1999 and Sack, 2000) might not

represent a sub-optimal behavior but, on the contrary, might be

the optimal monetary policy response in presence of a relevant

sectorial dispersion in the frequency of price adjustments.

DSGE models are also proving useful in empirical applications, and efforts have recently been made to incorporate large amounts of information into their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, specifying a plausible set of theoretical restrictions that identify aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large number of variables. Among others, two examples related to the second and third chapters of this thesis can help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005). Moreover, the process of integration of goods and financial markets implies that the sources of aggregate shocks are increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003). A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions typically embodied in open-economy macroeconomic models to keep them tractable are rejected by the data. Summing up, in order to deal with such issues, we need modelling frameworks able to treat a large number of variables in a flexible manner, i.e. without pre-committing to too many a priori restrictions that are likely to be rejected by the data. The large extent of comovement among wide cross sections of economic variables suggests the existence of a few common sources of fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, and global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models mainly rely on the identifying assumption that the dynamics of each variable can be decomposed into two orthogonal components, common and idiosyncratic, and provide a parsimonious tool for analyzing aggregate shocks and their propagation mechanisms in a large cross section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of the cross-sectional correlation and are driven by a few shocks that affect, through variable-specific factor loadings, all items in a panel of economic time series. Focusing on the latter components yields useful insights into the identity and propagation mechanisms of the aggregate shocks underlying a large number of variables. The second and third chapters of this thesis exploit this idea.
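The common/idiosyncratic decomposition described above can be illustrated with a small principal-components exercise in the spirit of the estimators cited (Forni et al. 2000; Stock and Watson, 2002); every dimension, loading and noise level below is an illustrative assumption, not a quantity from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 50, 2                        # periods, variables, common factors

F = rng.standard_normal((T, r))             # latent common factors
Lam = rng.standard_normal((N, r))           # variable-specific loadings
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))   # common + idiosyncratic

# Principal components on the standardized panel recover the common space.
Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(Z.T @ Z / T)
top = np.argsort(eigval)[::-1][:r]
F_hat = Z @ eigvec[:, top]                  # estimated factors (up to rotation)
explained = eigval[top].sum() / eigval.sum()
```

With only two common shocks driving fifty series, the two largest eigenvalues capture most of the panel's variance, which is the sense in which "a few aggregates of the data" summarize a large cross section.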

The second chapter deals with the question of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium-to-long-term evolution of prices, while a wide range of alternative non-monetary variables and models are employed to form a view on the short term and to cross-check the inference based on monetary information. However, both the academic literature and the practice of leading central banks other than the ECB do not assign such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate over whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses whether money provides useful information about future inflation beyond what is contained in a large set of non-monetary variables. It shows that a few aggregates of the data explain a large share of the fluctuations in a large cross section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross section. The database is split into two large blocks of variables: non-monetary (baseline) and monetary variables. Results show that baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks over the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide a sizeable improvement on the performance of baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains obtained relative to univariate benchmarks of non-forecastability with baseline and monetary variables are realized in the first part of the prediction sample, up to the end of 2002, which casts doubt on the current forecastability of inflation in the Euro Area.

The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation between national saving and investment closer to zero than to one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances in global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, failing to properly isolate the components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that makes it possible to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, when our methodology is applied, the association between domestic saving and investment decreases considerably over time, consistent with the observed increase in international capital mobility. In particular, in the last 25 years the correlation between saving and investment disappears.


Doctorat en sciences économiques, Orientation économie
info:eu-repo/semantics/nonPublished

32

Duong, Lien Thi Hong. "Australian takeover waves : a re-examination of patterns, causes and consequences." UWA Business School, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis provides a more precise characterisation of the patterns, causes and consequences of takeover activity in Australia over three decades, from 1972 to 2004. The first contribution of the thesis is to characterise the time series behaviour of takeover activity. It is found that linear models do not adequately capture the structure of merger activity; a non-linear two-state Markov switching model works better. A key contribution of the thesis is, therefore, to propose an approach that combines a State-Space model with a Markov regime-switching model to describe takeover activity. Experimental results based on our approach show an improvement over existing approaches. We find four waves, one in the 1980s, two in the 1990s, and one in the 2000s, with an expected duration of each wave state of approximately two years. The second contribution is an investigation of the extent to which financial and macro-economic factors predict takeover activity after controlling for the probability of takeover waves. A main finding is that while stock market boom periods are empirically associated with takeover waves, the underlying driver is the interest rate level. A low interest rate environment is associated with higher aggregate takeover activity. This relationship is consistent with Shleifer and Vishny's (1992) liquidity argument that takeover waves are symptoms of a lower cost of capital. Replicating the analysis for the biggest takeover market in the world, the US, reveals a remarkable consistency of results. In short, the Australian findings are not idiosyncratic. Finally, the implications for target and bidder firm shareholders are explored via an investigation of takeover bid premiums and long-term abnormal returns, separately for the wave and non-wave periods. This represents the third contribution to the literature on takeover waves.
Findings reveal that target shareholders earn abnormally positive returns in takeover bids and that bid premiums are slightly lower in the wave periods. Analysis of the returns to bidding firm shareholders suggests that the lower premiums earned by target shareholders in the wave periods may simply reflect lower total economic gains, at the margin, to takeovers made in the wave periods. It is found that bidding firms earn normal post-takeover returns (relative to a portfolio of firms matched on size and survival) if their bids are made in the non-wave periods. However, bidders who announce their takeover bids during the wave periods exhibit significant under-performance. For mergers that took place within waves, there is no difference in bid premiums, nor is there a difference in the long-run returns of bidders involved in the first and second halves of the waves. We find that none of the theories of merger waves (managerial, mis-valuation and neoclassical) can fully account for the Australian takeover waves and their effects; instead, our results suggest that a combination of these theories may provide a better explanation. Given that normal returns are observed for acquiring firms taken as a whole, we are more inclined to uphold the neoclassical argument for merger activity. However, the evidence is not entirely consistent with neoclassical rational models: the under-performance effect during the wave states is consistent with herding behaviour by firms.
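The two-state Markov switching idea in this abstract can be sketched with a textbook Hamilton filter. The transition matrix, state means and simulated "activity" series below are illustrative assumptions; the thesis's actual State-Space specification and estimated parameters are not reproduced here, and the filter is run with known parameters rather than estimated ones:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered state probabilities for a two-state Gaussian switching mean.
    P[i, j] = Prob(state j at t | state i at t-1)."""
    T = len(y)
    xi = np.full(2, 0.5)                    # flat initial state distribution
    filtered = np.empty((T, 2))
    for t in range(T):
        pred = xi @ P                       # one-step-ahead state probabilities
        lik = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        xi = pred * lik                     # Bayes update with state densities
        xi /= xi.sum()
        filtered[t] = xi
    return filtered

rng = np.random.default_rng(1)
P = np.array([[0.95, 0.05],                 # state 0: quiet periods
              [0.10, 0.90]])                # state 1: wave periods
mu = np.array([0.0, 3.0])                   # state-dependent mean activity
sigma = np.array([1.0, 1.0])

# Simulate a state path and the observed activity series.
states = [0]
for _ in range(299):
    states.append(rng.choice(2, p=P[states[-1]]))
states = np.array(states)
y = mu[states] + sigma[states] * rng.standard_normal(300)

probs = hamilton_filter(y, mu, sigma, P)
accuracy = ((probs[:, 1] > 0.5) == (states == 1)).mean()
```

A useful by-product of this setup: the expected duration of state i is 1 / (1 - P[i, i]), which is the kind of calculation behind the abstract's "expected duration of each wave state of approximately two years".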
33

Mnjama, Gladys Susan. "Exchange rate pass-through to domestic prices in Kenya." Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1002709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In 1993, Kenya liberalised its trade policy and allowed the Kenyan shilling to float freely. This openness has left Kenya's domestic prices vulnerable to the effects of exchange rate fluctuations. One of the objectives of the Central Bank of Kenya is to maintain inflation at sustainable levels; it has thus become necessary to determine the influence that exchange rate changes have on domestic prices, given that exchange rate movements are one of the major determinants of inflation. For this reason, this thesis examines the magnitude and speed of exchange rate pass-through (ERPT) to domestic prices in Kenya. In addition, it takes into account the direction and size of changes in the exchange rate to determine whether the responses to exchange rate fluctuations are symmetric or asymmetric. The thesis uses quarterly data from 1993:Q1 to 2008:Q4, covering the period when the process of liberalisation occurred. The empirical estimation was done in two stages. The first stage used the Johansen (1991, 1995) cointegration techniques and a vector error correction model (VECM). The second stage entailed estimating impulse response and variance decomposition functions, as well as conducting block exogeneity Wald tests. For the asymmetry analysis, the study followed the frameworks of Pollard and Coughlin (2004) and Webber (2000), analysing asymmetry with respect to appreciations and depreciations, and with respect to large and small changes in the exchange rate, for import prices. The results showed that ERPT in Kenya is incomplete and relatively low, at about 36 percent in the long run. In terms of asymmetry, ERPT is found to be higher in periods of appreciation than of depreciation, supporting the market-share and binding-quantity-constraints theories. In relation to size, the results show that the size of exchange rate changes has no significant impact on ERPT in Kenya.
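As a rough illustration of the pass-through mechanics, the sketch below uses a simpler Engle-Granger-style two-step regression rather than the Johansen/VECM machinery this thesis actually employs. The simulated series and the 0.36 long-run coefficient (chosen only to echo the abstract's estimate) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 400
e = np.cumsum(rng.standard_normal(T))       # log exchange rate: a random walk
p = 0.36 * e + rng.standard_normal(T)       # log price level, cointegrated with e

# Step 1: long-run pass-through from the levels (cointegrating) regression.
slope, intercept = np.polyfit(e, p, 1)
ect = p - slope * e - intercept             # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term.
dp, de = np.diff(p), np.diff(e)
X = np.column_stack([np.ones(T - 1), de, ect[:-1]])
coef, *_ = np.linalg.lstsq(X, dp, rcond=None)
adjustment = coef[2]                        # speed of adjustment, expected < 0
```

The negative coefficient on the lagged error-correction term is what "speed of pass-through" means in this setting: deviations of prices from their long-run relationship with the exchange rate are gradually corrected.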
34

Savanhu, Tatenda. "Financial liberalization, financial development and economic growth: the case for South Africa." Thesis, Rhodes University, 2012. http://hdl.handle.net/10962/d1006197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Financial liberalization in South Africa was a process that took the form of various legal reforms over a long period of time. This study uses quarterly financial data from the first quarter of 1969 to the fourth quarter of 2009 to analyse this process. The data used are pertinent to the financial liberalization theorem of McKinnon (1973) and Shaw (1973). The examination of the relationships between the various macroeconomic variables has important implications for effective policy formulation. The empirical analysis is carried out in four phases: a preliminary analysis, principal component analysis (PCA), cointegration analysis and pairwise Granger causality tests. The preliminary analysis examines trends over the sample period and reports on the correlations between the selected variables. PCA was used to create indexes of financial liberalization, taking into account the phase-wise nature of the legal reforms. The generated index is representative of the process of financial liberalization from 1969 to 2009. A financial development index was also created from the various traditional measures of financial development, again through PCA, which investigates the interrelationships among variables according to their common sources of movement. Cointegration analysis is carried out using the Johansen procedure, which investigates whether there is long-run comovement between South African economic growth and the selected macroeconomic variables. Where cointegration is found, Vector Error-Correction Models (VECMs) are estimated in order to examine the short-run adjustments. For robustness, several control variables were added to the model. The results showed positive long-run relationships between economic growth and both financial liberalization and financial development, and a negative relationship with interest rates. The Granger causality results suggested that the McKinnon-Shaw hypothesis does not manifest accurately in the South African data.
The implications of the results are that financial liberalization has had positive effects on economic growth, and thus any impediments to full financial liberalization should be removed, albeit with consideration for employment and local productivity. Financial development also has a positive long-run relationship with economic growth, although results differed depending on the financial development proxy used. Thus, financial development should be improved, primarily through liberalizing the banking sector and spurring savings.
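The PCA-based index construction described in this abstract amounts to taking the first principal component of several standardized proxies as a single composite index. The three synthetic "reform proxy" series below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 164                                     # e.g. quarterly observations, 1969-2009
trend = np.linspace(0.0, 1.0, T)            # stylized gradual liberalization
proxies = np.column_stack(
    [trend + 0.1 * rng.standard_normal(T) for _ in range(3)]
)                                           # three noisy policy-reform proxies

Z = (proxies - proxies.mean(0)) / proxies.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
index = Z @ Vt[0]                           # composite index: first principal component
share = S[0] ** 2 / (S ** 2).sum()          # variance captured by the index
```

Because the proxies share one underlying source of movement, the first component absorbs most of their joint variance and tracks the common liberalization path (note that the sign of a principal component is arbitrary).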
35

Dominicy, Yves. "Quantile-based inference and estimation of heavy-tailed distributions." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is divided into four chapters. The first two chapters introduce a parametric quantile-based estimation method for univariate heavy-tailed distributions and for elliptical distributions, respectively. For those interested in estimating the tail index without imposing a parametric form on the entire distribution function, but only on the tail behaviour, we propose a multivariate Hill estimator for elliptical distributions in chapter three. In the first three chapters we assume an independent and identically distributed setting; as a first step towards a dependent setting, using quantiles, we prove in the last chapter the asymptotic normality of marginal sample quantiles for stationary processes under the S-mixing condition.

The first chapter introduces a quantile- and simulation-based estimation method, which we call the Method of Simulated Quantiles, or simply MSQ. Since it is based on quantiles, it is a moment-free approach. And since it is based on simulations, we do not need closed-form expressions of any function that represents the probability law of the process. Thus, it is useful when the probability density function has no closed form and/or moments do not exist. It is based on a vector of functions of quantiles. The principle consists in matching functions of theoretical quantiles, which depend on the parameters of the assumed probability law, with those of empirical quantiles, which depend on the data. Since the theoretical functions of quantiles may not have a closed-form expression, we rely on simulations.
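A minimal single-parameter sketch of the quantile-matching principle described above (not the thesis's implementation): estimate the scale of a heavy-tailed law by matching the empirical interquartile range with its simulated counterpart over a parameter grid. The Student-t target, the grid and the sample sizes are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
true_scale = 2.0
data = true_scale * rng.standard_t(df=3, size=5000)    # heavy-tailed sample

def iqr(x):
    q25, q75 = np.quantile(x, [0.25, 0.75])
    return q75 - q25

target = iqr(data)                          # empirical function of quantiles

# Match against the same function of simulated quantiles over a scale grid,
# reusing one set of draws (common random numbers) across candidates.
sims = rng.standard_t(df=3, size=100_000)
grid = np.linspace(0.5, 4.0, 71)
losses = [(iqr(s * sims) - target) ** 2 for s in grid]
scale_hat = grid[int(np.argmin(losses))]
```

Nothing here requires moments to exist: the IQR is well defined even for laws with infinite variance, which is the point of a moment-free approach.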

The second chapter deals with the estimation of the parameters of elliptical distributions by means of a multivariate extension of MSQ. In this chapter we propose inference for vast-dimensional elliptical distributions. Estimation is based on quantiles, which always exist regardless of the thickness of the tails, and testing is based on the geometry of the elliptical family. The multivariate extension of MSQ faces the difficulty of constructing a function of quantiles that is informative about the covariation parameters. We show that the interquartile range of a projection of pairwise random variables onto the 45-degree line is very informative about the covariation.

The third chapter constructs a multivariate tail index estimator. In the univariate case, the most popular estimator of the tail exponent is the Hill estimator, introduced by Bruce Hill in 1975. The aim of this chapter is to propose an estimator of the tail index in a multivariate context; more precisely, in the case of regularly varying elliptical distributions. Since, for univariate random variables, our estimator boils down to the Hill estimator, we name it after Bruce Hill. Our estimator is based on the distance between an elliptical probability contour and the exceedance observations.
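The univariate Hill (1975) estimator that this chapter generalizes can be written in a few lines: average the log-exceedances of the k largest observations over the (k+1)-th and invert. The Pareto test sample and the choice k = 2000 are illustrative assumptions:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill (1975) tail index estimate from the k largest observations."""
    order = np.sort(np.abs(x))[::-1]        # descending order statistics
    log_excess = np.log(order[:k]) - np.log(order[k])
    return 1.0 / log_excess.mean()

rng = np.random.default_rng(3)
alpha_true = 2.0
x = rng.pareto(alpha_true, size=100_000) + 1.0   # exact Pareto tail, index 2
alpha_hat = hill_estimator(x, k=2000)
```

In practice the estimate is sensitive to k (the bias-variance trade-off in choosing how far into the tail to look); the exact-Pareto sample used here hides that difficulty.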

Finally, the fourth chapter investigates the asymptotic behaviour of the marginal sample quantiles of p-dimensional stationary processes, and we obtain the asymptotic normality of the empirical quantile vector. We assume that the processes are S-mixing, a recently introduced and widely applicable notion of dependence. A remarkable property of S-mixing is that it does not require any higher-order moment assumptions to be verified. Since we are interested in quantiles and in processes that are possibly heavy-tailed, this is of particular interest.


Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished

36

Nyasha, Sheilla. "Financial development and economic growth : new evidence from six countries." Thesis, 2014. http://hdl.handle.net/10500/18576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Using annual data for 1980-2012, the study empirically investigates the dynamic relationship between financial development and economic growth in three developing countries (South Africa, Brazil and Kenya) and three developed countries (the United States of America, the United Kingdom and Australia). The study was motivated by the current debate regarding the role of financial development in the economic growth process, and the causal relationship between the two. The debate centres on whether financial development impacts positively or negatively on economic growth, and whether it Granger-causes economic growth or vice versa. To this end, two models are used. In Model 1 the impact of bank- and market-based financial development on economic growth is examined, while in Model 2 the causality between the two is explored. Using the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and an error-correction-based causality test, the results were found to differ from country to country and over time. The results were also found to be sensitive to the financial development proxy used. Based on Model 1, the study found that the impact of bank-based financial development on economic growth is positive in South Africa and the USA, negative in the UK, and neither positive nor negative in Kenya. Elsewhere the results were inconclusive. Market-based financial development was found to have a positive impact in Kenya, the USA and the UK, but not in the remaining countries. Based on Model 2, the study found that bank-based financial development Granger-causes economic growth in the UK, while in Brazil the two Granger-cause each other. However, in South Africa, Kenya and the USA no causal relationship was found; in Australia the results were inconclusive. The study also found that in the short run, market-based financial development Granger-causes economic growth in the USA, whereas in South Africa and Brazil the reverse applies.
On the other hand, bidirectional causality was found to prevail in Kenya over the same period.
Economics
DCOM (Economics)
37

"Essays in monetary theory and finance." 2004. http://library.cuhk.edu.hk/record=b5891997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cheung Ho Sang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 185-187).
Abstracts in English and Chinese.
Curriculum Vitae --- p.ii
Acknowledgments --- p.iii
Abstract --- p.v
Table of Contents --- p.viii
Chapter Chapter 1. --- Introduction --- p.1
Chapter Chapter 2. --- The behavior of income velocity of money --- p.3
Chapter 2.1 --- Introduction --- p.3
Chapter 2.2 --- Literature Review --- p.4
Chapter 2.3 --- Data Description --- p.9
Chapter 2.4 --- Methodology --- p.9
Chapter 2.5 --- Empirical Result --- p.16
Chapter 2.6 --- Conclusion --- p.26
Chapter Chapter 3. --- The behavior of equity premium --- p.106
Chapter 3.1 --- Introduction --- p.106
Chapter 3.2 --- Literature Review --- p.106
Chapter 3.3 --- Data Description --- p.112
Chapter 3.4 --- Methodology --- p.112
Chapter 3.5 --- Empirical Result --- p.120
Chapter 3.6 --- Conclusion --- p.130
Data Appendices --- p.182
Bibliography --- p.185
38

Han, Heejoon. "Econometric analysis of ARCH models with persistent covariates." Thesis, 2006. http://hdl.handle.net/1911/18912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We consider a volatility model, named the ARCH-NNH model, which is an ARCH process with a nonlinear function of a persistent (integrated or nearly integrated) explanatory variable. We first establish asymptotic theory showing that the time series properties of our model successfully describe stylized facts about volatility in financial time series. Due to the persistent covariate, the model generates time series exhibiting the long memory property in volatility and the leptokurtosis commonly observed in speculative return series. Next, we derive the asymptotic distribution theory of the maximum likelihood estimator in our model. In particular, we establish the consistency and asymptotic mixed normality of the maximum likelihood estimator, which ensure that the standard inference procedure is valid in our model. Additionally, we conduct a misspecification analysis and provide an explanation of the commonly observed IGARCH behavior of financial time series. Our theory shows that IGARCH behavior could be spurious, the result of ignoring persistent covariates in ARCH-type models. Finally, we present two empirical applications and a forecast evaluation of our model. Both empirical applications show that our model fits the data very well, and the estimation results confirm the findings of other studies: the default premium (the yield spread between Baa and Aaa corporate bonds) predicts stock return volatility, and the interest rate differential between two countries accounts for exchange rate return volatility. The forecast evaluation shows that our model generally performs better than other standard volatility models at relatively low frequencies.
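A simulation sketch of an ARCH process driven by a persistent covariate, in the spirit of the ARCH-NNH model described above; the functional form f(x) = 0.2|x| and all parameter values are assumptions for illustration, not the model or estimates from the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 5000
x = np.cumsum(0.05 * rng.standard_normal(T))    # persistent (integrated) covariate

alpha = 0.3
u = np.zeros(T)                                  # simulated returns
h = np.zeros(T)                                  # conditional variances
for t in range(1, T):
    h[t] = 0.2 * abs(x[t - 1]) + alpha * u[t - 1] ** 2   # assumed f(x) = 0.2|x|
    u[t] = np.sqrt(h[t]) * rng.standard_normal()

# The slowly varying covariate produces volatility clustering at long lags
# and fat tails in the unconditional distribution of returns.
kurt = ((u - u.mean()) ** 4).mean() / u.var() ** 2
```

Because the covariate wanders like a random walk, the variance level shifts slowly over long stretches, which is the mechanism behind the long-memory-like autocorrelation in squared returns that the abstract mentions.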
39

"Nonparametric analysis of hedge ratio: the case of Nikkei Stock Average." 1998. http://library.cuhk.edu.hk/record=b5889511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
by Lee Chi Kau.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 115-119).
Abstract also in Chinese.
ACKNOWLEDGMENTS --- p.iii
LIST OF TABLES --- p.iv
LIST OF ILLUSTRATIONS --- p.vi
CHAPTER
Chapter ONE --- INTRODUCTION --- p.1
Chapter TWO --- THE LITERATURE REVIEW --- p.6
Parametric Models
Nonparametric Estimation Techniques
Chapter THREE --- ANALYTICAL FRAMEWORKS --- p.21
Parametric Models
Nonparametric Models
Chapter FOUR --- EMPIRICAL FINDINGS --- p.36
Data
Estimation Results
Evaluation of Model Performance
Out-of-Sample Forecast and Evaluation
Chapter FIVE --- CONCLUSION --- p.54
TABLES --- p.58
ILLUSTRATIONS --- p.76
BIBLIOGRAPHY --- p.115
40

Padungrat, Teardchart. "Capacity utilization and inflation : international evidence." Thesis, 1995. http://hdl.handle.net/1957/35192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The relevance of domestic and foreign capacity utilization rates for forecasting the future inflation rate has been investigated empirically using data for five industrialized countries for which comparable data are available. It is found that capacity utilization rates, both domestic and foreign, have a stable long-run relationship with the domestic inflation rate, and that a positive shock to the capacity utilization rate results in a significant, albeit slightly delayed, acceleration of the domestic inflation rate. Various econometric techniques were used and led to consistent empirical findings. The results of the present study therefore dispute the claim that an increase in the capacity utilization rate may not necessarily lead to accelerating inflation down the road.
Graduation date: 1995
41

"Inflation and relative price variability in China: theory and evidence." 2009. http://library.cuhk.edu.hk/record=b5894031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Yuan, Jiang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 48-51).
Abstract also in Chinese.
Chapter Chapter 1 --- Introduction --- p.1
Chapter Chapter 2 --- Literature Review --- p.5
Chapter 2.1 --- Theoretical Literature --- p.5
Chapter 2.1.1 --- Menu Cost Model --- p.5
Chapter 2.1.2 --- Signal Extraction Model --- p.6
Chapter 2.1.3 --- Monetary Search Model --- p.7
Chapter 2.2 --- Empirical Literature --- p.8
Chapter Chapter 3 --- Inflation and Relative Price Variability in a Transitional Economy Evidence from China --- p.10
Chapter 3.1 --- Data and Variables --- p.10
Chapter 3.2 --- Inflation Decomposition --- p.19
Chapter 3.3 --- Empirical Evidence --- p.14
Chapter 3.3.1 --- Inflation and RPV --- p.14
Chapter 3.3.2 --- Robustness --- p.18
Chapter 3.4 --- Conclusion --- p.20
Chapter Chapter 4 --- Inflationary Regimes and Relative Prices --- p.22
Chapter 4.1 --- Introduction --- p.22
Chapter 4.2 --- The Change of Inflationary Regimes in China: 1978-2008 --- p.23
Chapter 4.3 --- Inflationary Regimes and Relative Price --- p.25
Chapter 4.3.1 --- Preliminary Evidence --- p.25
Chapter 4.3.2 --- Empirical Evidence --- p.27
Chapter 4.4 --- Structure Change --- p.32
Chapter 4.4.1 --- Endogenous Breakpoint Test --- p.32
Chapter 4.4.2 --- Test Results on the Changing Role of Expected Inflation --- p.33
Chapter 4.5 --- Conclusion --- p.35
Chapter Chapter 5 --- Institutional Cost and Relative Price Variability --- p.36
Chapter 5.1 --- Introduction --- p.36
Chapter 5.2 --- "Institutional Cost, Price Adjustment, and Relative Price Variability in China" --- p.38
Chapter 5.2.1 --- Background --- p.38
Chapter 5.2.2 --- Institutional cost --- p.39
Chapter 5.2.3 --- The relationship between institutional cost and relative price variability --- p.42
Chapter 5.3 --- The Empirical Evidence --- p.43
Chapter 5.4 --- Conclusion --- p.45
Chapter Chapter 6 --- Conclusion and Implications --- p.46
Reference --- p.48
42

Brewbaker, Paul H. "Dynamic models of Hawaiʻi hotel investment." Thesis, 2004. http://proquest.umi.com/pqdweb?index=0&did=765924051&SrchMode=1&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1233272674&clientId=23440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

"An empirical analysis of hedge ratio: the case of Nikkei 225 options." 2001. http://library.cuhk.edu.hk/record=b5890814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Lam Suet-man.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 111-117).
Abstracts in English and Chinese.
ACKNOWLEDGMENTS --- p.iii
LIST OF TABLES --- p.iv
LIST OF ILLUSTRATIONS --- p.vi
CHAPTER
Chapter ONE --- INTRODUCTION --- p.1
Chapter TWO --- REVIEW OF THE LITERATURE --- p.6
Parametric Models
Nonparametric Estimation Techniques
Chapter THREE --- METHODOLOGY --- p.21
Parametric Models
Nonparametric Models
Chapter FOUR --- DATA DESCRIPTION --- p.33
Chapter FIVE --- EMPIRICAL FINDINGS --- p.39
Estimation Results
Evaluation of Model Performance
Out-of-sample Forecast Evaluation
Chapter SIX --- CONCLUSION --- p.58
TABLES --- p.62
ILLUSTRATIONS --- p.97
APPENDIX --- p.107
BIBLIOGRAPHY --- p.111
44

"The impact of default barriers on corporate assets." 2004. http://library.cuhk.edu.hk/record=b5892210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Choi Tsz Wang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 43-45).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Review of Structural Models --- p.5
Chapter 2.1 --- The Merton model --- p.5
Chapter 2.2 --- The default barrier model of Black and Cox --- p.7
Chapter 3 --- Estimating the Merton model --- p.10
Chapter 3.1 --- The Variance Restriction (VR) method --- p.10
Chapter 3.2 --- The Maximum Likelihood estimation (ML) method --- p.12
Chapter 3.3 --- Comparison between VR and ML methods --- p.13
Chapter 4 --- Implications of Using the Proxy in Default Barrier Estimation --- p.15
Chapter 4.1 --- Rejection of SC framework --- p.16
Chapter 4.2 --- Positive barrier implication --- p.17
Chapter 4.3 --- Barrier over debt implication --- p.17
Chapter 4.4 --- Numerical illustration --- p.19
Chapter 5 --- The Proposed Framework --- p.22
Chapter 5.1 --- Maximum likelihood estimation --- p.23
Chapter 5.2 --- Barrier-to-debt ratio specification --- p.25
Chapter 5.3 --- Simulation checks --- p.26
Chapter 5.4 --- Comments on the performance of α --- p.29
Chapter 6 --- Estimation with Empirical Data --- p.33
Chapter 6.1 --- Description of data --- p.33
Chapter 6.2 --- Empirical results --- p.35
Chapter 7 --- Conclusion --- p.41
References --- p.43
45

Rodriguez, Arnulfo. "Essays on inflation forecast based rules, robust policies and sovereign debt." Thesis, 2004. http://hdl.handle.net/2152/2174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Evans, Richard William 1975. "Three essays on openness, international pricing, and optimal monetary policy." Thesis, 2008. http://hdl.handle.net/2152/3962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Yuan, Kai. "Essays on Liquidity Risk and Modern Market Microstructure." Thesis, 2017. https://doi.org/10.7916/D8FR07W6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Liquidity, often defined as the ability of markets to absorb large transactions without much effect on prices, plays a central role in the functioning of financial markets. This dissertation aims to investigate the implications of liquidity from several different perspectives, and can help to close the gap between theoretical modeling and practice. In the first part of the thesis, we study the implication of liquidity costs for systemic risks in markets cleared by multiple central counterparties (CCPs). Recent regulatory changes are transforming the multi-trillion dollar swaps market from a network of bilateral contracts to one in which swaps are cleared through central counterparties (CCPs). The stability of the new framework depends on the resilience of CCPs. Margin requirements are a CCP’s first line of defense against the default of a counterparty. To capture liquidity costs at default, margin requirements need to increase superlinearly in position size. However, convex margin requirements create an incentive for a swaps dealer to split its positions across multiple CCPs, effectively “hiding” potential liquidation costs from each CCP. To compensate, each CCP needs to set higher margin requirements than it would in isolation. In a model with two CCPs, we define an equilibrium as a pair of margin schedules through which both CCPs collect sufficient margin under a dealer’s optimal allocation of trades. In the case of linear price impact, we show that a necessary and sufficient condition for the existence of an equilibrium is that the two CCPs agree on liquidity costs, and we characterize all equilibria when this holds. A difference in views can lead to a race to the bottom. We provide extensions of this result and discuss its implications for CCP oversight and risk management. In the second part of the thesis, we provide a framework to estimate liquidity costs at a portfolio level. Traditionally, liquidity costs are estimated by means of single-asset models.
Yet such an approach ignores the fact that, fundamentally, liquidity is a portfolio problem: asset prices are correlated. We develop a model to estimate portfolio liquidity costs through a multi-dimensional generalization of the optimal execution model of Almgren and Chriss (1999). Our model allows for the trading of standardized liquid bundles of assets (e.g., ETFs or indices). We show that the benefits of hedging when trading with many assets significantly reduce cost when liquidating a large position. In a “large-universe” asymptotic limit, where the correlations across a large number of assets arise from a relatively few underlying common factors, the liquidity cost of a portfolio is essentially driven by its idiosyncratic risk. Moreover, the additional benefit from trading standardized bundles is roughly equivalent to increasing the liquidity of individual assets. Our method is tractable and can be easily calibrated from market data. In the third part of the thesis, we look at liquidity from the perspective of market microstructure, analyzing the value of limit orders at different queue positions of the limit order book. Many modern financial markets are organized as electronic limit order books operating under a price-time priority rule. In such a setup, among all resting orders awaiting trade at a given price, earlier orders are prioritized for matching with contra-side liquidity takers. In practice, this creates a technological arms race among high-frequency traders and other automated market participants to establish early (and hence advantageous) positions in the resulting first-in-first-out (FIFO) queue. We develop a model for valuing orders based on their relative queue position. Our model identifies two important components of positional value.
First, there is a static component that relates to the trade-off at an instant of trade execution between earning a spread and incurring adverse selection costs, and incorporates the fact that adverse selection costs are increasing with queue position. Second, there is also a dynamic component that captures the optionality associated with the future value that accrues by locking in a given queue position. Our model offers predictions of order value at different positions in the queue as a function of market primitives, and can be empirically calibrated. We validate our model by comparing it with estimates of queue value realized in backtesting simulations using market-by-order data, and find the predictions to be accurate. Moreover, for some large tick-size stocks, we find that queue value can be of the same order of magnitude as the bid-ask spread. This suggests that accurate valuation of queue position is a necessary and important ingredient in considering optimal execution or market-making strategies for such assets.
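The margin-splitting incentive described in the first essay can be illustrated with a toy convex margin schedule. This is a minimal sketch: the quadratic form and its parameters are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch (not the thesis's model): with a convex margin
# schedule m(q) = a*q + b*q**2, a dealer who splits a position of size q
# across two CCPs posts less total margin than when clearing the whole
# position at one CCP, "hiding" the superlinear liquidation-cost
# component from each CCP.
def margin(q, a=1.0, b=0.5):
    """Convex margin schedule: a linear exposure term plus a
    superlinear liquidity-cost term (parameters are hypothetical)."""
    return a * q + b * q ** 2

q = 10.0
single_ccp = margin(q)          # clear the whole position at one CCP
split_ccps = 2 * margin(q / 2)  # split the position evenly across two CCPs

# The linear part is unaffected by splitting; the convex part halves.
assert split_ccps < single_ccp
shortfall = single_ccp - split_ccps  # margin "hidden" from each CCP's view
print(single_ccp, split_ccps, shortfall)
```

This is why, in the model, each CCP must set a higher schedule than it would in isolation: it cannot observe the dealer's positions at the other CCP.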
48

Shilongo, Fillemon. "An econometric analysis of the impact of imports on inflation in Namibia." Diss., 2019. http://hdl.handle.net/10500/26869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study investigated the impact of import prices on inflation in Namibia, using quarterly time series data over the period 1998Q2-2017Q4. The variables used in the study are the inflation rate, broad money supply (M2), real GDP and import prices. The study found that all the variables are integrated of order one, I(1), and the Johansen test found no cointegration among them. The model was therefore analysed in a vector autoregression (VAR) framework estimated by ordinary least squares (OLS), together with Granger causality tests and the impulse response function. The results of the study revealed that import prices Granger-cause inflation at the 1% level of significance. Inflation is also Granger-caused by real GDP, while broad money supply (M2) does not Granger-cause inflation. The study further revealed that shocks to import prices are significant in explaining variation in inflation in both the short run and the long run.
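The Granger-causality logic used in the study can be sketched on synthetic data. This is a hedged illustration only: the series, lag order and coefficients below are invented, not the Namibian data, and the thesis's own test statistics are not reproduced here.

```python
import numpy as np

# Sketch of a Granger-causality F-test: x "Granger-causes" y if lags of x
# improve an autoregression of y, judged by comparing the restricted
# (lags of y only) and unrestricted (lags of y and x) OLS fits.
rng = np.random.default_rng(0)
n, p = 400, 2  # sample size and lag order (both hypothetical)
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):  # x feeds into y with a lag, so x should Granger-cause y
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def rss(Y, X):
    """Residual sum of squares from an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

Y = y[p:]
ylags = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
xlags = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
ones = np.ones((n - p, 1))
rss_r = rss(Y, np.hstack([ones, ylags]))         # restricted: y lags only
rss_u = rss(Y, np.hstack([ones, ylags, xlags]))  # unrestricted: add x lags
df = (n - p) - (1 + 2 * p)
F = ((rss_r - rss_u) / p) / (rss_u / df)
print(F)  # a large F rejects "x does not Granger-cause y"
```

In practice one would compare F against the F(p, df) critical value, or use a packaged routine such as `grangercausalitytests` in statsmodels.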
Economics
M. Com. (Economics)
49

"Three essays on financial econometrics." 2013. http://library.cuhk.edu.hk/record=b5549821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis consists of three essays on financial econometrics. The first two essays are about multivariate density forecast evaluation. The third essay is on a nonparametric Bayesian change-point VAR model. The density forecast evaluation is based on checking the uniformity and independence conditions of the probability integral transformation (PIT) of the observed series in question. In the first essay, we propose a new method, a location-adjusted version of Clements and Smith (2002), that corrects an asymmetry problem and increases testing power. In the second essay, we develop a data-driven smooth test for multivariate density forecast evaluation and present evidence on its finite-sample performance using Monte Carlo simulations. Prior to our study, most work stopped at the bivariate case, as higher-dimensional models are difficult to evaluate with existing methods. We propose an efficient dimension-reduction approach that reduces a multivariate density evaluation to a univariate one. We perform various Monte Carlo simulations and two applications to financial asset returns, which show that our test performs well. The last essay proposes a nonparametric extension of existing Bayesian change-point models in a multivariate setting. The earlier change-point model of Chib (1998) requires specifying the number of change points a priori, so a posterior model comparison is needed across different change-point models. We introduce a stick-breaking prior for the change-point process that endogenizes the number of change points into the estimation procedure: the number of change points is determined simultaneously with the other unknown parameters, making our model robust to model specification. We perform a Monte Carlo simulation of a bivariate VAR(2) process subject to four structural breaks.
Our model estimates the break locations with high accuracy, and the posterior estimates of the 65 parameters are close to the true values. We apply our model to various hedge fund return series; the detected change points coincide with market crashes.
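The PIT idea underlying the first two essays can be sketched as follows. This is a crude uniformity check on simulated univariate data, not the authors' location-adjusted or data-driven smooth tests; the sample size, bin count and seed are arbitrary.

```python
import math
import numpy as np

# If the forecast density F is correct, the PITs z_t = F(y_t) should be
# i.i.d. Uniform(0,1). Here the forecast and the data-generating process
# are both N(0,1), so a simple chi-square uniformity check should pass.
def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = np.random.default_rng(1)
y = rng.normal(size=1000)                # observed series
z = np.array([norm_cdf(v) for v in y])   # PITs under the (correct) forecast

bins = 10
counts, _ = np.histogram(z, bins=bins, range=(0.0, 1.0))
expected = len(z) / bins
chi2 = ((counts - expected) ** 2 / expected).sum()
print(chi2)  # compare against the chi-square(9) critical value (~16.92 at 5%)
```

A misspecified forecast density (say, predicting N(0,1) when the data are t-distributed) would concentrate the PITs near 0.5 and inflate the statistic; the essays' tests additionally check independence of the PIT sequence and handle the multivariate case.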
Detailed summary in vernacular field only.
Ko, Iat Meng.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 176-194).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts also in Chinese.
Abstract --- p.i
Acknowledgement --- p.v
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Multivariate Density Forecast Evaluation: A Modified Approach --- p.7
Chapter 2.1 --- Introduction --- p.7
Chapter 2.2 --- Evaluating Density Forecasts --- p.13
Chapter 2.3 --- Monte Carlo Simulations --- p.18
Chapter 2.3.1 --- Bivariate normal distribution --- p.19
Chapter 2.3.2 --- The Ramberg distribution --- p.21
Chapter 2.3.3 --- Student’s t and uniform distributions --- p.24
Chapter 2.4 --- Empirical Applications --- p.24
Chapter 2.4.1 --- AR model --- p.25
Chapter 2.4.2 --- GARCH model --- p.27
Chapter 2.5 --- Conclusion --- p.29
Chapter 3 --- Multivariate Density Forecast Evaluation: Smooth Test Approach --- p.39
Chapter 3.1 --- Introduction --- p.39
Chapter 3.2 --- Exponential Transformation for Multi-dimension Reduction --- p.47
Chapter 3.3 --- The Smooth Test --- p.56
Chapter 3.4 --- The Data-Driven Smooth Test Statistic --- p.66
Chapter 3.4.1 --- Selection of K --- p.66
Chapter 3.4.2 --- Choosing p of the Portmanteau based test --- p.69
Chapter 3.5 --- Monte Carlo Simulations --- p.70
Chapter 3.5.1 --- Multivariate normal and Student’s t distributions --- p.71
Chapter 3.5.2 --- VAR(1) model --- p.74
Chapter 3.5.3 --- Multivariate GARCH(1,1) Model --- p.78
Chapter 3.6 --- Density Forecast Evaluation of the DCC-GARCH Model in Density Forecast of Spot-Future returns and International Equity Markets --- p.80
Chapter 3.7 --- Conclusion --- p.87
Chapter 4 --- Stick-Breaking Bayesian Change-Point VAR Model with Stochastic Search Variable Selection --- p.111
Chapter 4.1 --- Introduction --- p.111
Chapter 4.2 --- The Bayesian Change-Point VAR Model --- p.116
Chapter 4.3 --- The Stick-breaking Process Prior --- p.120
Chapter 4.4 --- Stochastic Search Variable Selection (SSVS) --- p.121
Chapter 4.4.1 --- Priors on φ[subscript j] = vec(Φ[subscript j]) --- p.122
Chapter 4.4.2 --- Prior on Σ[subscript j] --- p.123
Chapter 4.5 --- The Gibbs Sampler and a Monte Carlo Simulation --- p.123
Chapter 4.5.1 --- The posteriors of Φ[subscript j] and Σ[subscript j] --- p.123
Chapter 4.5.2 --- MCMC Inference for SB Change-Point Model: A Gibbs Sampler --- p.126
Chapter 4.5.3 --- A Monte Carlo Experiment --- p.128
Chapter 4.6 --- Application to Daily Hedge Fund Return --- p.130
Chapter 4.6.1 --- Hedge Funds Composite Indices --- p.132
Chapter 4.6.2 --- Single Strategy Hedge Funds Indices --- p.135
Chapter 4.7 --- Conclusion --- p.138
Chapter A --- Derivation and Proof --- p.166
Chapter A.1 --- Derivation of the distribution of (Z₁ - EZ₁) x (Z₂ - EZ₂) --- p.166
Chapter A.2 --- Derivation of limiting distribution of the smooth test statistic without parameter estimation uncertainty (θ = θ₀) --- p.168
Chapter A.3 --- Proof of Theorem 2 --- p.170
Chapter A.4 --- Proof of Theorem 3 --- p.172
Chapter A.5 --- Proof of Theorem 4 --- p.174
Chapter A.6 --- Proof of Theorem 5 --- p.175
Bibliography --- p.176
50

Cotton, Christopher David. "Low Inflation: Potential Causes, Effects and Solutions." Thesis, 2019. https://doi.org/10.7916/d8-tg4q-7n86.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
My dissertation focuses upon low inflation. Many developed countries, especially Japan and the Eurozone, have recently experienced prolonged periods of below-target inflation. This has been blamed for many economic ills, including worsening the Great Recession and generating a slow recovery, making monetary policy ineffective and lowering labor market flexibility. I study what has caused low inflation, its potential effects and how it could be prevented. In Chapter 1, I look at how effective raising the inflation target would be in mitigating the problems of low inflation. Many economists have proposed raising the inflation target to reduce the probability of hitting the zero lower bound (ZLB). It is both widely assumed and a feature of standard models that raising the inflation target does not impact the equilibrium real rate. I demonstrate that once heterogeneity is introduced, raising the inflation target causes the equilibrium real rate to fall in the New Keynesian model. This implies that raising the inflation target will increase the nominal interest rate by less than expected and thus will be less effective in reducing the probability of hitting the ZLB. The channel is that a rise in the inflation target lowers the average markup through price rigidities, and a fall in the average markup lowers the equilibrium real rate through household heterogeneity, which could come from overlapping generations or idiosyncratic labor shocks. Raising the inflation target from 2% to 4% lowers the equilibrium real rate by 0.38 percentage points in my baseline calibration. I also analyse the optimal inflation level and provide empirical evidence in support of the model mechanism. In Chapter 2, I study to what degree the recent fall in inflation can explain the rise in firm profitability, which has been blamed for a rise in inequality. A theoretical relationship between inflation and profitability is known to exist.
I investigate the degree to which the recent fall in inflation can explain the rise in firm profitability. My three primary findings are: 1. The negative relationship between inflation and profitability does not hinge upon the Calvo assumption. Raising inflation significantly lowers profitability under all common price rigidities, and the relationship can actually be significantly stronger under menu costs. 2. A rise in the degree to which firms discount the future magnifies the effect; a rise in the elasticity of substitution can increase or decrease the effect depending upon the price rigidity. 3. The profit share has risen by around 3.5 percentage points since the 1990s. In a richer model with firm heterogeneity, the recent fall in inflation is estimated to explain 14% of the rise. This can increase to 29% if firms are allowed to discount the future by more, in line with estimates from the finance literature. I also provide empirical evidence for the negative relationship between inflation and firm profits. In Chapter 3, I examine whether behavioral features can help to explain why some countries have persistently experienced low inflation at the zero lower bound. Economists are keen to introduce behavioral assumptions into modern macroeconomic models. A popular framework for doing so is sparse dynamic programming, which assumes that agents partly base their expectations upon a default model, typically the steady state. This means agents' expectations will be wrong when there are long-run deviations from the default model, and it assumes agents can compute the default. I introduce an alternative form of sparse dynamic programming which tackles these problems by allowing for long-run updating of the behavioral part of agents' expectations. I apply this to derive a long-run behavioral New Keynesian model. Within this model, fixed interest rates yield indeterminacy and the costs of remaining at the zero lower bound are unbounded.
These results are very different to a behavioral New Keynesian model based upon standard sparse dynamic programming, which can yield determinacy under fixed interest rates and bounded costs of the zero lower bound.

To the bibliography