
Dissertations / Theses on the topic 'Investment analysis – Mathematical models'



Consult the top 50 dissertations / theses for your research on the topic 'Investment analysis – Mathematical models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ipperciel, David. "The performance of some new technical signals for investment timing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0028/NQ50190.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yuksel, Hasan Zafer. "Performance measures: Traditional versus new models." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3086.

Full text
Abstract:
The thesis analyzed the performance of 5,987 mutual funds using a database called Steele Mutual Fund Experts and compared the predictive ability of various measures of performance. The measures discussed in the thesis are the Treynor Ratio, the Sharpe Ratio, Jensen's Alpha, Graham-Harvey-1 (GH-1), and Graham-Harvey-2 (GH-2). These performance measures are used mostly by professional money managers and by scholars in the research literature.
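For context, the three classical measures named above are simple functions of a fund's excess returns; the following Python sketch (illustrative only, not code from the thesis, with hypothetical argument names) shows how they are typically computed from periodic return series.

import numpy as np

def classic_performance_measures(fund_returns, market_returns, rf=0.0):
    """Sharpe ratio, Treynor ratio and Jensen's alpha from periodic returns.
    rf is assumed here to be a constant periodic risk-free rate."""
    fund_excess = np.asarray(fund_returns) - rf
    market_excess = np.asarray(market_returns) - rf
    beta = np.cov(fund_excess, market_excess)[0, 1] / np.var(market_excess, ddof=1)
    sharpe = fund_excess.mean() / np.std(fund_returns, ddof=1)        # reward per unit of total risk
    treynor = fund_excess.mean() / beta                               # reward per unit of systematic risk
    jensens_alpha = fund_excess.mean() - beta * market_excess.mean()  # CAPM abnormal return
    return {"Sharpe": sharpe, "Treynor": treynor, "Jensen's alpha": jensens_alpha}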
APA, Harvard, Vancouver, ISO, and other styles
3

Sohn, SugJe. "Modeling and Analysis of Production and Capacity Planning Considering Profits, Throughputs, Cycle Times, and Investment." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5083.

Full text
Abstract:
This research focuses on large-scale manufacturing systems having a number of stations with multiple tools and product types with different and deterministic processing steps. The objective is to determine the production quantities of multiple products and the tool requirements of each station that maximize net profit while satisfying strategic constraints such as cycle times, required throughputs, and investment. The formulation of the problem, named OptiProfit, is a mixed-integer nonlinear program (MINLP), with the stochastic issues addressed by mean-value analysis (MVA) and queuing network models. Observing that OptiProfit is an NP-complete, nonconvex, and nonmonotonic problem, the research develops a heuristic method, Differential Coefficient Based Search (DCBS). It also performs an upper-bound analysis and a performance comparison with six variations of Greedy Ascent Procedure (GAP) heuristics and Modified Simulated Annealing (MSA) in a number of randomized cases. An example problem based on a semiconductor manufacturing minifab is modeled as an OptiProfit problem and numerically analyzed. The proposed methodology provides a very good quality solution for the high-level design and operation of manufacturing facilities.
APA, Harvard, Vancouver, ISO, and other styles
4

Dharmawan, Komang, School of Mathematics, UNSW. "Superreplication method for multi-asset barrier options." Awarded by: University of New South Wales, School of Mathematics, 2005. http://handle.unsw.edu.au/1959.4/30169.

Full text
Abstract:
The aim of this thesis is to study multi-asset barrier options, where the volatilities of the stocks are assumed to define a matrix-valued bounded stochastic process. The bounds on volatilities may represent, for instance, the extreme values of the volatilities of traded options. As the volatilities are not known exactly, the value of the option cannot be determined. Nevertheless, it is possible to calculate extreme values. We show that these values correspond to the best and the worst case scenarios of the future volatilities for short positions and long positions in the portfolio of the options. Our main tool is the equivalence of the option pricing and a certain stochastic control problem and the resulting concept of superhedging. This concept has been well known for some time but never applied to barrier options. First, we prove the dynamic programming principle (DPP) for the control problem. Next, using rather standard arguments we derive the Hamilton-Jacobi-Bellman equation for the value function. We show that the value function is a unique viscosity solution of the Hamilton-Jacobi-Bellman equation. Then we define the superprice and superhedging strategy for the barrier options and show equivalence with the control problem studied above. The superprice can be found by solving the nonlinear Hamilton-Jacobi-Bellman equation studied above, sometimes called the Black-Scholes-Barenblatt (BSB) equation. This is the Hamilton-Jacobi-Bellman equation of the exit control problem. The sup term in the BSB equation is determined dynamically: it is either the upper bound or the lower bound of the volatility matrix, according to the convexity or concavity of the value function with respect to the stock prices. By utilizing a probabilistic approach, we show that the value function of the exit control problem is continuous. Then, we also obtain bounds for the first derivative of the value function with respect to the space variable. This derivative has an important financial interpretation. Namely, it allows us to define the superhedging strategy. We include an example: pricing and hedging of a single-asset barrier option and its numerical solution using the finite difference method.
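For orientation, the single-asset form of the Black-Scholes-Barenblatt equation mentioned in the abstract can be stated as follows (a standard formulation of the uncertain-volatility pricing problem, not quoted from the thesis):

\[
\frac{\partial V}{\partial t}
+ \sup_{\sigma \in [\sigma_{\min},\,\sigma_{\max}]}
\left\{ \tfrac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}} \right\}
+ r S\,\frac{\partial V}{\partial S} - r V = 0 ,
\]

so the superprice uses \(\sigma = \sigma_{\max}\) wherever the value function is convex in \(S\) (\(\partial^{2}V/\partial S^{2} \ge 0\)) and \(\sigma = \sigma_{\min}\) where it is concave, which is exactly the dynamic selection of the sup term described above.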
APA, Harvard, Vancouver, ISO, and other styles
5

Soucik, Victor. "Finding the true performance of Australian managed funds." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2002. https://ro.ecu.edu.au/theses/730.

Full text
Abstract:
When making conclusions about the performance of managed funds, it is critical that the framework in which such performance is measured provides an accurate and unbiased environment. In this thesis I search for the true performance of the two major classes of funds: equity as well as fixed-interest managed funds. Focusing first on the former class, I examine five measurement models across three risk-free proxies, nine benchmarks proposed by the extant literature (covering conditional and unconditional as well as single and multi-factor definitions) and over three independent periods in an effort to identify (in a consistent setting) the most accurate and least biased methodology. I also use the Australian dataset, which inherently mitigates any data biases that may potentially afflict US studies of these methodologies, since these were developed from the same dataset on which they were later tested. Not finding a pre-existing benchmark that is objective yet informative, I develop an independent model that satisfies these criteria, sourcing from fifteen factor candidates across four categories. I find that teaming up a fund-based market factor with well-defined proxies for size, value, momentum and conditional dividend yield provides the optimal benchmark. The latter class, comprising fixed-interest managed funds, is a segment left largely unexplored in the financial literature and neglected outright in the Australian context. I examine three risk-free proxies, six benchmark classes encompassing twenty-one potential factors, across five models and two independent time frames in an effort to establish the most informative and least biased setting. The task is complicated by two issues: an acute lack of Australian data (demanding additional bootstrap simulations and bridging tests with the US markets) and the need for a two-pass (time-series and cross-sectional) analysis, arising from the different information content benchmarks carry in these two dimensions. My results, consistent across time, show that a correct combination of a bond market variable, a mixture of interest rate factors and economic factors as well as a proxy for movements in the equity markets yields the optimal benchmark. Both fund classes point to Jensen's Alpha as the preferred model, but Treynor and Mazuy's definition of a quadratic measure is adequate if timing-selectivity separation is required. Neither class is significantly sensitive to the choice of risk-free proxy featuring in the performance measures.
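The two preferred performance models can be summarised compactly (standard textbook forms, given here for context rather than quoted from the thesis). Jensen's alpha is the intercept \(\alpha_p\) in the CAPM regression, while the Treynor-Mazuy measure adds a quadratic market term so that timing and selectivity can be separated:

\[
r_{p,t}-r_{f,t} = \alpha_p + \beta_p\,(r_{m,t}-r_{f,t}) + \gamma_p\,(r_{m,t}-r_{f,t})^{2} + \varepsilon_{p,t},
\]

where \(\alpha_p\) captures selectivity and a positive \(\gamma_p\) indicates successful market timing.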
APA, Harvard, Vancouver, ISO, and other styles
6

Chan, Yin-ting, and 陳燕婷. "Topics on actuarial applications of non-linear time series models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B32002099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Hongqing. "An Empirical Study on the Jump-diffusion Two-beta Asset Pricing Model." PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/1325.

Full text
Abstract:
This dissertation focuses on testing and exploring the usage of the jump-diffusion two-beta asset pricing model. Daily and monthly security returns from both NYSE and AMEX are employed to form various samples for the empirical study. Maximum likelihood estimation is employed to estimate the parameters of the jump-diffusion processes. A thorough study on the existence of jump-diffusion processes is carried out with the likelihood ratio test. The probability of existence of the jump process is introduced as an indicator of "switching" between the diffusion process and the jump process. This new empirical method marks a contribution to future studies on the jump-diffusion process. It also makes the jump-diffusion two-beta asset pricing model operational for financial analyses. Hypothesis tests focus on the specifications of the new model as well as the distinction between it and the conventional capital asset pricing model. Both parametric and non-parametric tests are carried out in this study. Compared with previous models of the risk-return relationship, such as the capital asset pricing model, the arbitrage pricing theory and various multi-factor models, the jump-diffusion two-beta asset pricing model is simple and intuitive. It possesses more explanatory power when the jump process is dominant. This characteristic makes it a better model in explaining the January effect. Extra effort is put into the study of the January effect due to the importance of the phenomenon. Empirical findings from this study agree with the model in that the systematic risk of an asset is the weighted average of both jump and diffusion betas. It is also found that the systematic risk of the conventional CAPM does not equal the weighted average of jump and diffusion betas.
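As background, a generic jump-diffusion return process of the kind estimated here (not the dissertation's exact notation) can be written as

\[
\frac{dP_i}{P_i} = \mu_i\,dt + \sigma_i\,dW_t + J_i\,dN_t ,
\qquad N_t \sim \mathrm{Poisson}(\lambda t),
\]

with a diffusion beta \(\beta_i^{D}\) measured against the market's Brownian component and a jump beta \(\beta_i^{J}\) measured against its jump component; the weighted-average statement in the abstract then reads \(\beta_i \approx w\,\beta_i^{D} + (1-w)\,\beta_i^{J}\), where the weight \(w\) reflects how dominant the diffusion component is in the sample.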
APA, Harvard, Vancouver, ISO, and other styles
8

Niklewski, Jacek. "Multivariate GARCH and portfolio optimisation : a comparative study of the impact of applying alternative covariance methodologies." Thesis, Coventry University, 2014. http://curve.coventry.ac.uk/open/items/a8d7bf49-198d-49f2-9894-12e22ce2d7f1/1.

Full text
Abstract:
This thesis investigates the impact of applying different covariance modelling techniques on the efficiency of asset portfolio performance. The scope of this thesis is limited to the exploration of theoretical aspects of portfolio optimisation rather than developing a useful tool for portfolio managers. Future work may entail taking the results from this work further and producing a more practical tool from a fund management perspective. The contributions made by this thesis to the knowledge of the subject are that it extends the literature by applying a number of different covariance models to a unique dataset that focuses on the 2007 global financial crisis. The thesis also contributes to the literature in that the methodology applied enables a distinction to be made between developed and emerging/frontier regional markets. This has resulted in the following findings: First, it identifies the impact of the 2007–2009 financial crisis on time-varying correlations and volatilities as measured by the dynamic conditional correlation model (Engle 2002). This is examined from the perspective of a United States (US) investor given that the crisis had its origin in the US market. Prima facie evidence is found that economic structural adjustment has resulted in long-term increases in the correlation between the US and other markets. In addition, the magnitude of the increase in correlation is found to be greater for emerging/frontier markets than for developed markets. Second, the long-term impact of the 2007–2009 financial crisis on time-varying correlations and volatilities is further examined by comparing estimates produced by different covariance models. The selected time-varying models (DCC, copula DCC, GO-GARCH: MM, ICA, NLS, ML; EWMA and SMA) produce statistically significantly different correlation and volatility estimates. This finding has potential implications for the estimation of efficient portfolios. Third, the different estimates derived using the selected covariance models are found to have a significant impact on the calculated weights and turnovers of efficient portfolios. Interestingly, however, there was no significant difference between their respective returns. This is the main finding of the thesis, and it has potentially very important implications for portfolio management.
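To illustrate how alternative covariance estimates feed into efficient-portfolio weights, the sketch below implements the two simplest methodologies in the comparison (SMA and EWMA) and the global minimum-variance weights they imply; it is an illustrative Python fragment, not the thesis code, and the smoothing constant is only a common default.

import numpy as np

def sma_covariance(returns):
    """Simple moving-average (sample) covariance of a T x n return matrix."""
    return np.cov(returns, rowvar=False)

def ewma_covariance(returns, lam=0.94):
    """RiskMetrics-style exponentially weighted covariance."""
    cov = np.cov(returns, rowvar=False)            # initialise with the sample covariance
    for r in np.asarray(returns):
        r = r[:, None]
        cov = lam * cov + (1.0 - lam) * (r @ r.T)  # update with the latest outer product
    return cov

def min_variance_weights(cov):
    """Weights of the global minimum-variance portfolio (no short-sale constraints)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()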
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Zilin, and 周紫麟. "Properties of analysts' earnings forecasts: the case of Hong Kong listed local and Chinese companies." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45597467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Taniai, Hiroyuki. "Inference for the quantiles of ARCH processes." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210305.

Full text
Abstract:
This work consists of three parts devoted to different aspects of quantile ARCH (AutoRegressive Conditionally Heteroskedastic) models. In these models, conditional heteroskedasticity is to be understood in a very broad sense: it potentially affects each conditional quantile differently (and hence the conditional distribution itself), and not only, as in classical ARCH models, the conditional scale.

The first part studies Value-at-Risk (VaR) problems in financial series modelled in this way. Traditional approaches exhibit a questionable feature, which we point out and for which we propose a correction based on the residual distributions. We believe the foundations of this new approach are more solid, and that it takes into account the fact that the behaviour of the residual empirical processes (REP) of ARCH processes, unlike that of the REP of ARMA processes, continues to depend on some of the model parameters.

The second part deepens the general study of the residual empirical processes (REP) of ARCH processes from the perspective of quantile regression (QR) in the sense of Koenker and Bassett (Econometrica 1978). The Bahadur representation of the QR estimators, from which the asymptotic tightness of the REP follows, is established.

Finally, in the third part, we bring out the semiparametric nature of quantile ARCH models and the invariance, under the action of certain groups of transformations, of the submodels obtained by fixing the parameter values. This group structure allows the construction of invariant inference methods which, in the spirit of the results of Hallin and Werker (Bernoulli 2003), preserve semiparametric optimality. These methods are based on residual ranks and signs. In particular, we develop R-estimators for the models considered and study their performance.
Doctorate in Sciences

APA, Harvard, Vancouver, ISO, and other styles
11

Dicks, Anelda. "Value at risk and expected shortfall : traditional measures and extreme value theory enhancements with a South African market application." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85674.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2013.
Accurate estimation of Value at Risk (VaR) and Expected Shortfall (ES) is critical in the management of extreme market risks. These risks occur with small probability, but the financial impacts could be large. Traditional models to estimate VaR and ES are investigated. Following usual practice, 99% 10-day VaR and ES measures are calculated. A comprehensive theoretical background is first provided and then the models are applied to the Africa Financials Index from 29/01/1996 to 30/04/2013. The models considered include independent, identically distributed (i.i.d.) models and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) stochastic volatility models. Extreme Value Theory (EVT) models that focus especially on extreme market returns are also investigated. For this, the Peaks Over Threshold (POT) approach to EVT is followed. For the calculation of VaR, various scaling methods from one day to ten days are considered and their performance evaluated. The GARCH models fail to converge during periods of extreme returns. During these periods, EVT forecast results may be used. As a novel approach, this study considers the augmentation of the GARCH models with EVT forecasts. The two-step procedure of pre-filtering with a GARCH model and then applying EVT, as suggested by McNeil (1999), is also investigated. This study identifies some of the practical issues in model fitting. It is shown that no single forecasting model is universally optimal and the choice will depend on the nature of the data. For this data series, the best approach was to augment the GARCH stochastic volatility models with EVT forecasts during periods where the former do not converge. Model performance is judged by the actual number of VaR and ES violations compared to the expected number. The expected number is taken as the number of return observations over the entire sample period, multiplied by 0.01 for the 99% VaR and ES calculations.
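For reference, the POT approach referred to above fits a generalised Pareto distribution to the \(N_u\) losses exceeding a threshold \(u\), which leads to the standard estimators (McNeil's formulation, valid for shape \(\hat\xi < 1\); shown here for context):

\[
\widehat{\mathrm{VaR}}_{q} = u + \frac{\hat\beta}{\hat\xi}
\left[\left(\frac{n}{N_u}\,(1-q)\right)^{-\hat\xi} - 1\right],
\qquad
\widehat{\mathrm{ES}}_{q} = \frac{\widehat{\mathrm{VaR}}_{q}}{1-\hat\xi}
+ \frac{\hat\beta - \hat\xi\,u}{1-\hat\xi},
\]

where \(n\) is the sample size and \(\hat\beta\), \(\hat\xi\) are the fitted GPD scale and shape parameters.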
APA, Harvard, Vancouver, ISO, and other styles
12

Schäfer, Carsten. "Asset Dividing Appraisal Model (ADAM) - Direct Real Estate Investment Evaluation." Doctoral thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-191784.

Full text
Abstract:
The Asset Dividing Appraisal Model (ADAM) enables the appraisal of cash flows resulting from direct real estate investments. The model is an evaluation tool which takes capital markets and the specific characteristics of real estate as an asset (heterogeneity, site-dependency, eternal land yield, etc.) into consideration, while also considering different ownership approaches to real estate in the European Union. Thus, it contributes to the harmonization of capital markets and of direct real estate investment evaluation as intended by the "European Directive on Markets in Financial Instruments 2004/39/EC". ADAM is based on financial mathematical instruments and on the property valuation methods of different cultural areas. It combines continental European (German Gross Rental Method) and international (Discounted Cash Flow Method) property valuation approaches. Although it is scientifically reasonable to take property valuation approaches into account, the aim of the model is not to value a property or to quantify an objective market value but to evaluate cash flows resulting from direct real estate investments. A mathematical analysis based on empirical market data confirmed the validity of the methodology of the model. In the course of the analysis, the major input variables that determine the results of the model, and how the model reacts to marginal deviations of input data, were quantified. This was done using partial derivatives and a simulation study. In the Czech Republic, a building is currently not considered part of the underlying plot. Consequently, different persons or institutions can own the building and the corresponding plot. From 2014 on, a reform of the Czech Civil Code is expected to consolidate real estate property. Czech law is to be aligned with German law, which considers plot and building as one economic entity. This consolidation of real estate could be an area of application for the introduced model.
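The discounted-cash-flow building block that ADAM combines with the German gross rental approach can be illustrated with a minimal sketch (a hypothetical helper function, not the model itself):

def discounted_cash_flow_value(net_rents, terminal_value, discount_rate):
    """Present value of a direct real estate investment's cash flows.

    net_rents: expected net rental cash flows for years 1..T,
    terminal_value: expected resale or land value at the end of year T,
    discount_rate: periodic required rate of return.
    """
    pv = sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(net_rents, start=1))
    pv += terminal_value / (1.0 + discount_rate) ** len(net_rents)
    return pv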
APA, Harvard, Vancouver, ISO, and other styles
13

Guedes, Maria do Carmo Vaz de Miranda. "Mathematical models in capital investment appraisal." Thesis, University of Warwick, 1988. http://wrap.warwick.ac.uk/107492/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Saboo, Jai Vardhan. "An investment analysis model using fuzzy set theory." Thesis, Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/50087.

Full text
Abstract:
Traditional methods for evaluating investments in state-of-the-art technology are sometimes found lacking in providing equitable recommendations for project selection. The major cause of this is the inability of these methods to adequately handle uncertainty and imprecision, and to account for every aspect of the project, economic and non-economic, tangible and intangible. Fuzzy set theory provides an alternative to probability theory for handling uncertainty, while at the same time being able to handle imprecision. It also provides a means of closing the gap between the human thought process and the computer, by enabling the establishment of linguistic quantifiers to describe intangible attributes. Fuzzy set theory has been used successfully in other fields for aiding the decision-making process. The intention of this research has been the application of fuzzy set theory to aid investment decision making. The research has led to the development of a structured model, based on theoretical algorithms developed by Buckley and others. The model looks at a project from three different standpoints: economic, operational, and strategic. It provides recommendations by means of five different values for the project desirability, and the results of two sensitivity analyses. The model is tested on a hypothetical case study. The end result is a model that can be used as a basis for promising future development of investment analysis models.
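A minimal sketch of the kind of fuzzy arithmetic such a model builds on is given below (triangular fuzzy numbers for imprecise estimates; this is illustrative only and is neither Buckley's algorithm nor the thesis model):

class TriangularFuzzyNumber:
    """A fuzzy quantity described by (pessimistic, most likely, optimistic) values."""

    def __init__(self, low, mode, high):
        assert low <= mode <= high
        self.low, self.mode, self.high = low, mode, high

    def __add__(self, other):
        # Fuzzy addition combines the corresponding bounds.
        return TriangularFuzzyNumber(self.low + other.low,
                                     self.mode + other.mode,
                                     self.high + other.high)

    def scaled(self, weight):
        # Scaling by a non-negative crisp weight (e.g. the importance of an attribute).
        return TriangularFuzzyNumber(weight * self.low, weight * self.mode, weight * self.high)

    def defuzzified(self):
        # Simple centroid-style score used to rank alternatives.
        return (self.low + self.mode + self.high) / 3.0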
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
15

Rodriguez, Javier A. "Capacity expansion and capital investment decisions using the Economic Investment Time Model : a case oriented approach /." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-07292009-090518/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wei, Yong, and 卫勇. "The real effects of S&P 500 Index additions: evidence from corporate investment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B4490681X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Moyen, Nathalie. "Financing investment with external funds." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0019/NQ46396.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Li, Nan. "Mathematical Models and Numerical Methods for Pricing Options on Investment Projects under Uncertainties." Thesis, Curtin University, 2020. http://hdl.handle.net/20.500.11937/83866.

Full text
Abstract:
In this work, we focus on establishing partial differential equation (PDE) models for pricing flexibility options on investment projects under uncertainties, and on numerical methods for solving these models. We develop a finite difference method and an advanced fitted finite volume scheme combined with an interior penalty method, together with their convergence analyses, to solve the PDE and linear complementarity problem (LCP) models developed. A MATLAB program implements and tests the numerical algorithms developed.
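To make the numerical ingredients concrete, the sketch below solves the standard Black-Scholes PDE for a European call with a plain explicit finite-difference scheme; it illustrates only the simplest building block, not the fitted finite-volume or penalty schemes developed in the thesis, and all parameter values are placeholders.

import numpy as np

def explicit_fd_european_call(S_max=200.0, K=100.0, r=0.05, sigma=0.2, T=1.0, M=200, N=4000):
    dS, dt = S_max / M, T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(S - K, 0.0)                       # terminal payoff
    j = np.arange(1, M)
    for k in range(1, N + 1):                        # march backwards from maturity
        tau = k * dt                                 # time remaining to maturity
        Vn = V.copy()
        delta = (Vn[2:] - Vn[:-2]) / (2.0 * dS)
        gamma = (Vn[2:] - 2.0 * Vn[1:-1] + Vn[:-2]) / dS**2
        V[1:-1] = Vn[1:-1] + dt * (0.5 * sigma**2 * S[j]**2 * gamma
                                   + r * S[j] * delta - r * Vn[1:-1])
        V[0] = 0.0                                   # option worthless at S = 0
        V[-1] = S_max - K * np.exp(-r * tau)         # deep in-the-money boundary
    return S, V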
APA, Harvard, Vancouver, ISO, and other styles
19

Roschat, Christina [Verfasser]. "Mathematical Analysis of Marine Ecosystem Models / Christina Roschat." Kiel : Universitätsbibliothek Kiel, 2016. http://d-nb.info/1111558604/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Keita, Sana. "Eulerian Droplet Models: Mathematical Analysis, Improvement and Applications." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37907.

Full text
Abstract:
The Eulerian description of dispersed two-phase flows results in a system of partial differential equations describing characteristics of the flow, namely volume fraction, density and velocity of the two phases, around any point in space over time. When pressure forces are neglected or the same pressure is considered for both phases, the resulting system is weakly hyperbolic and solutions may exhibit vacuum states (regions void of the dispersed phase) or localized unbounded singularities (delta shocks) that are not physically desirable. Therefore, it is crucial to find a physical way of preventing the formation of such undesirable solutions in weakly hyperbolic Eulerian two-phase flow models. This thesis focuses on the mathematical analysis of an Eulerian model for air-droplet flows, here called the Eulerian droplet model. This model can be seen as the sticky particle system with a source term and is successfully used for the prediction of droplet impingement and more recently for the prediction of particle flows in airways. However, this model includes only one-way momentum exchange coupling, and develops delta shocks and vacuum states. The main goal of this thesis is to improve this model, especially for the prevention of delta shocks and vacuum states, and the adjunction of two-way momentum exchange coupling. Using a characteristic analysis, the condition for loss of regularity of smooth solutions of the inviscid Burgers equation with a source term is established. The same condition applies to the droplet model. The Riemann problems associated, respectively, with the Burgers equation with a source term and with the droplet model are solved. The characteristics are curves that tend asymptotically to straight lines. The existence of an entropic solution to the generalized Rankine-Hugoniot conditions is proven. Next, a way of preventing the formation of delta shocks and vacuum states in the model is identified and a new Eulerian droplet model is proposed. A new hierarchy of two-way coupling Eulerian models is derived. Each model is analyzed and numerical comparisons of the models are carried out. Finally, 2D computations of air-particle flows comparing the new Eulerian droplet model with the standard Eulerian droplet model are presented.
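In one space dimension, the droplet model referred to above (the sticky particle system with a drag source term) takes the generic form (notation illustrative):

\[
\partial_t \alpha + \partial_x(\alpha u) = 0, \qquad
\partial_t(\alpha u) + \partial_x(\alpha u^{2}) = \alpha\,D\,(u_a - u) + \alpha g,
\]

where \(\alpha\) is the droplet volume fraction, \(u\) the droplet velocity, \(u_a\) the given air velocity, \(D\) a drag coefficient and \(g\) gravity; because the momentum equation carries no pressure term, the system is only weakly hyperbolic, which is the origin of the delta shocks and vacuum states discussed above.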
APA, Harvard, Vancouver, ISO, and other styles
21

Racheal, Cooper. "Analysis of Mathematical Models of the Human Lung." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3289.

Full text
Abstract:
The processes of lung ventilation and perfusion, diffusion, and gas transport make up the system of breathing and tissue oxygenation. Here, we present several mathematical formulations of the essential processes that contribute to breathing. These models aid in our understanding and analysis of this complex system and can be used to design treatments for patients on ventilators. With the right analysis and treatment options, patients can be helped and money can be saved. We conclude with the formulation of a mathematical model for the exchange of gases in the body based on basic reaction kinetics.
APA, Harvard, Vancouver, ISO, and other styles
22

Van Schalkwyk, Garth. "Mathematical models for optimal management of bank capital, reserves and liquidity." University of the Western Cape, 2019. http://hdl.handle.net/11394/6643.

Full text
Abstract:
Philosophiae Doctor - PhD
The aim of this study is to construct and propose continuous-time mathematical models for the optimal management of bank capital, reserves and liquidity. This aim emanates from the global financial crisis of 2007–2009. In this regard, and as a first task, our objective is to determine an optimal investment strategy for a commercial bank subject to capital requirements as prescribed by the Basel III Accord. In particular, the objective of the aforementioned problem is to maximize the expected return on the bank capital portfolio and minimize the variance of the terminal wealth. We apply classical tools from stochastic analysis to derive the optimal strategy of a benchmark portfolio selection problem which minimizes the expected quadratic distance of the terminal risk capital reserves from a predefined benchmark. Secondly, the Basel Committee on Banking Supervision (BCBS) introduced strategies to protect banks from running out of liquidity. These measures, introduced in response to the global financial crisis, included an increase in the minimum reserves that a bank ought to hold. We propose a model to minimize risk for a bank by finding an appropriate mix of diversification, balanced against return on the portfolio. Thirdly and finally, in response to the financial crisis, the Basel Committee on Banking Supervision (BCBS) designed a set of precautionary liquidity measures (known as Basel III) imposed on banks, one of whose purposes is to protect the economy from deteriorating. Recently, bank regulators have wanted banks to depend more on sources such as core deposits and long-term funding from small businesses and less on short-term wholesale funding.
APA, Harvard, Vancouver, ISO, and other styles
23

Chavanasporn, Walailuck. "Application of stochastic differential equations and real option theory in investment decision problems." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1691.

Full text
Abstract:
This thesis contains a discussion of four problems arising from the application of stochastic differential equations and real option theory to investment decision problems in a continuous-time framework. It is based on four papers written jointly with the author's supervisor. In the first problem, we study an evolutionary stock market model in a continuous-time framework where uncertainty in dividends is produced by a single Wiener process. The model is an adaptation to a continuous-time framework of a discrete evolutionary stock market model developed by Evstigneev, Hens and Schenk-Hoppé (2006). We consider the case of fix-mix strategies and derive the stochastic differential equations which determine the evolution of the wealth processes of the various market players. The wealth dynamics for various initial set-ups of the market are simulated. In the second problem, we apply an entry-exit model in real option theory to study concessionary agreements between a private company and a state government to run a privatised business or project. The private company can choose the time to enter into the agreement and can also choose the time to exit the agreement if the project becomes unprofitable. An early termination of the agreement by the company might mean that it has to pay a penalty fee to the government. Optimal times for the company to enter and exit the agreement are calculated. The dynamics of the project are assumed to follow either a geometric mean reversion process or geometric Brownian motion. A comparative analysis is provided. Particular emphasis is given to the role of uncertainty and how uncertainty affects the average time that the concessionary agreement is active. The effect of uncertainty is studied by using Monte Carlo simulation. In the third problem, we study numerical methods for solving stochastic optimal control problems which are linear in the control. In particular, we investigate methods based on spline functions for solving the two-point boundary value problems that arise from the method of dynamic programming. In the general case, where only the value function and its first derivative are guaranteed to be continuous, piecewise quadratic polynomials are used in the solution. However, under certain conditions, the continuity of the second derivative is also guaranteed. In this case, piecewise cubic polynomials are used in the solution. We show how the computational time and memory requirements of the solution algorithm can be improved by effectively reducing the dimension of the problem. Numerical examples which demonstrate the effectiveness of our method are provided. Lastly, we study the situation where, by partial privatisation, a government gives a private company the opportunity to invest in a government-owned business. After payment of an initial instalment cost, the private company's investments are assumed to be flexible within a range [0, k] while the investment in the business continues. We model the problem in a real option framework and use a geometric mean reversion process to describe the dynamics of the business. We use the method of dynamic programming to determine the optimal time for the private company to enter and pay the initial instalment cost, as well as the optimal dynamic investment strategy that it follows afterwards. Since an analytic solution cannot be obtained for the dynamic programming equations, we use quadratic splines to obtain a numerical solution. Finally, we determine the optimal degree of privatisation in our model from the perspective of the government.
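The Monte Carlo ingredient used to study how uncertainty affects the time the agreement stays active can be sketched as a simple Euler-Maruyama scheme for a geometric mean-reversion process; the parameterisation below is one common form and is not necessarily the one used in the thesis.

import numpy as np

def simulate_gmr(x0, mu, eta, sigma, T, n_steps, n_paths, seed=0):
    """Simulate dX = eta*(mu - X)*X dt + sigma*X dW by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X += eta * (mu - X) * X * dt + sigma * X * dW
        X = np.maximum(X, 1e-12)      # keep the discretised process positive
    return X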
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Guangxi. "Sensitivity and uncertainty analysis of subsurface drainage design." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28529.

Full text
Abstract:
Literature on subsurface drainage theories, the determination of drainage parameters, and approaches to uncertainty analysis was reviewed. Sensitivity analysis was carried out on drain spacing equations for steady state and nonsteady state, in homogeneous soils and in layered soils. It was found that drain spacing is very sensitive to the hydraulic conductivity, the drainage coefficient, and the design midspan water table height. Spacing is not sensitive to the depth of the impermeable layer or the drain radius. In transient state, spacing is extremely sensitive to the midspan water table heights if the water table fall is relatively small. In that case steady state theory will yield more reliable results and its use is recommended. Drain spacing is usually more sensitive to the hydraulic conductivity of the soil below the drains than to that of the soil above the drains. Therefore, it is desirable to take samples from deeper soil when measuring hydraulic conductivity. A new spacing formula was developed for two-layered soils and for a special case of three-layered soils with drains at the interface of the top two layers. This equation was compared with the Kirkham equation. The new formula yields spacings close to the Kirkham equation if the hydraulic conductivity of the soil above the drains is relatively small; otherwise, it tends to give more accurate results. First- and second-order analysis methods were employed to analyze parameter uncertainty in subsurface drainage design. It was found that conventional design methods based on a deterministic framework may result in inadequate spacing due to the uncertainty involved. Uncertainty may be incorporated into practical design by using the simple equations and graphs presented in this research; the procedure is illustrated through an example. Conclusions were drawn from the present study and recommendations were made for future research.
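For orientation, one widely used steady-state spacing formula of the kind analysed here is Hooghoudt's equation, given below for context (the thesis derives its own formula for layered soils):

\[
L^{2} = \frac{8\,K_b\,d\,h + 4\,K_a\,h^{2}}{q},
\]

where \(L\) is the drain spacing, \(K_a\) and \(K_b\) are the hydraulic conductivities above and below the drains, \(d\) is the equivalent depth to the impermeable layer, \(h\) is the design midspan water table height and \(q\) is the drainage coefficient; the strong sensitivity of spacing to \(K\), \(q\) and \(h\) reported in the abstract can be read directly from this form.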
Applied Science, Faculty of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
25

El-Hachem, Maud. "Mathematical models of biological invasion." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/232864/1/Maud_El-Hachem_Thesis.pdf.

Full text
Abstract:
This thesis studies mathematical models of a population of cells invading the surrounding environment or another living population. A classical single-species model is reformulated using a moving boundary to track the position of the moving front of the invading population. The moving boundary is also used to separate two populations. The other models studied are coupled partial differential equations describing the interaction of one population with another. Different types of interaction are represented: the degradation of healthy skin by cancer and the growth of bone tissue on a substrate.
APA, Harvard, Vancouver, ISO, and other styles
26

Guo, Miin Hong. "Differential earnings response coefficients to accounting information: The case of revisions of financial analysts' forecasts." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184712.

Full text
Abstract:
This dissertation extends previous studies on firms' differential earnings response coefficients. It provides further theoretical explanation and empirical evidence for the differential earnings response coefficients across firms and time. The empirical evidence found by Ball & Brown (1968), that the sign of unexpected earnings is positively correlated with the sign of market reactions, is used to improve the control of measurement errors in investors' prior beliefs. Revisions of financial analysts' forecasts (FAFs) of firms' future earnings per share (EPS) are used as the event information. Both the impact of FAF quality on investors' earnings belief revision and the mapping from EPS to security price are considered. Investors are assumed to be Bayesians who are homogeneous in belief. They use FAFs as information for making portfolio investment decisions. FAFs with smaller contemporary dispersion relative to the variance of investors' prior belief are considered to have higher quality. It is proposed that investors have stronger faith in forecasts with higher information quality. A non-normative approach is used to map EPS into security prices. The market price over (expected) earnings ratio (P/E) is used as a linear approximation for the security valuation function. The major advantage of this approach is that non-earnings factors that have a price effect on securities are implicitly controlled. The model predicts that, ceteris paribus, the earnings response coefficient adjusted for the differential P/E is positively correlated with the quality of FAFs. Cross-sectional and time-series samples of 1097 FAF revisions from Standard & Poor's Earnings Forecaster in the years 1981 to 1985 are used in the empirical test. The empirical results are consistent with the theoretical implication. The quality of FAFs is found to be positively correlated with the P/E-adjusted earnings response coefficient at the one percent significance level. The results are robust across event-day windows, the estimation periods for market model parameters and the price reaction measurements.
APA, Harvard, Vancouver, ISO, and other styles
27

廖智生 and Chi-sang Liu. "A study of optimal investment strategy for insurance portfolio." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31227636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Harris, David Wayne. "A degradation analysis methodology for maintenance tasks." Thesis, Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/24867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mercurio, Matthew Forrest. "Divider analysis of drainage divides delineated at the field scale." Virtual Press, 2004. http://liblink.bsu.edu/uhtbin/catkey/1306855.

Full text
Abstract:
Previous works have applied the Divider Method to the shapes of drainage divides as measured from maps. This study focuses on the shapes of several drainage divides measured in the field at very fine scale. These divides, chosen for their sharp crests, include portions of the Continental Divide in Colorado and badlands-type divides in Arizona, Wyoming, South Dakota, and Texas. The badlands-type divides were delineated using a laser theodolite to collect data at decimeter point spacing, and the Continental Divide segments were delineated using pace and bearing at a constant point spacing of 30 meters. A GIS was used to store and visualize the divide data, and an automated divider analysis was performed for each of the 16 drainage divides. The Richardson plots produced for each of the drainage divide datasets were visually inspected for portions of linearity. Fractal dimensions (D) were calculated using linear regression techniques for each of the linear segments identified in the Richardson plots. Six of the plots exhibited two distinct segments of linearity, nine plots exhibited one segment, and one plot exhibited no segments of linearity. Residual analyses of the trend lines show that about half of the Richardson plot segments used to calculate D exhibit slight curvature. While these segments are not strictly linear, linear models and associated D values may still serve well as approximations to describe the degree of divide wandering. Most (20 out of 21) of the dimensions derived from the Richardson plots for the drainage divides fall within the range 1.01–1.07. The D values calculated for the Continental Divide range from 1.02 to 1.07. The dimensions calculated for the badlands-type divides were distributed evenly across the range 1.01–1.06, with a single exceptional D value at 1.12. Only four of the divide D values fall within the range 1.06–1.12, the range for D established for drainage divides in published map-based studies, despite the apparent dominance of erosion processes on the measured divides.
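A simplified sketch of the automated divider (ruler) analysis described above is given below in Python; the walking rule and regression are standard, but the function is illustrative rather than the study's actual implementation.

import numpy as np

def divider_dimension(points, rulers):
    """Fractal dimension of a digitised divide by the divider method.

    points: (N, 2) array of surveyed coordinates along the divide,
    rulers: step lengths, each much shorter than the divide itself.
    Returns D = 1 - slope of the Richardson plot (log length vs log step).
    """
    log_r, log_len = [], []
    for r in rulers:
        i, total = 0, 0.0
        while i < len(points) - 1:
            j = i + 1
            while j < len(points) and np.linalg.norm(points[j] - points[i]) < r:
                j += 1                # advance until one ruler step is spanned
            if j == len(points):
                break                 # remaining tail is shorter than one step
            total += np.linalg.norm(points[j] - points[i])
            i = j
        if total > 0.0:
            log_r.append(np.log(r))
            log_len.append(np.log(total))
    slope = np.polyfit(log_r, log_len, 1)[0]
    return 1.0 - slope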
Department of Geology
APA, Harvard, Vancouver, ISO, and other styles
30

Beckham, Jon Regan. "Analysis of mathematical models of electrostatically deformed elastic bodies." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 169 p, 2008. http://proquest.umi.com/pqdweb?did=1475178561&sid=27&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Tumanova, Natalija. "The Numerical Analysis of Nonlinear Mathematical Models on Graphs." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120720_121648-24321.

Full text
Abstract:
Numerical algorithms for non-stationary mathematical models in non-standard domains are investigated in the dissertation. The problem definition domain is represented by branching structures, with conjugation equations considered at the branching points. The numerical analysis of the conjugation equations and the non-classical boundary conditions distinguishes the considered problems from the classical problems of mathematical physics presented in the literature. The scope of the dissertation covers the investigation of the stability and convergence of numerical algorithms on branching structures with different conjugation equations, the construction and implementation of parallel algorithms, and the investigation of numerical schemes for problems with nonlocal integral conditions. The modelling of neuron excitation and of photoexcited carrier decay in a semiconductor, as well as a nonlinear model identification problem, are also considered in the dissertation.
APA, Harvard, Vancouver, ISO, and other styles
32

Chiang, T. "Mathematical and statistical models for the analysis of protein." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597600.

Full text
Abstract:
Protein interactions, both amongst themselves and with other molecules, are responsible for much of the work within the cellular machine. As the number of protein interaction data sets grow in number and in size, from experiments such as Yeast 2-Hybrid or Affinity Purification followed by Mass Spectrometry, there is a need to analyse the data both quantitatively and qualitatively. One area of research is determining how reliable a report of a protein interaction is – whether it could be reproduced if the experiment were repeated, or if it were tested using an independent assay. One might aim to score each reported interaction using a quantitative measure of reliability. Ultimately, protein interactions need to be addressed at the systems level where both the dynamic and functional nature of protein complexes and other types of interactions is ascertained. In this dissertation, I present two methodological developments that are useful towards elucidating the nature of protein interaction graphs in the model organism Saccharomyces cerevisiae. The first one aims to estimate the sensitivity and specificity of a protein interaction data set, and does that, as much as possible, by looking at the data set’s internal consistency and reproducibility. The second method aims to estimate the node degree distribution, using a multinomial model which is fit by maximum likelihood. In the development of the methods for the analysis of the protein interactions, computational tools were built in the statistical environment R. Such tools are necessary for the implementation of each analytic step, for rendering visualisations of intermediate and conclusive results, and for the construction of optimal work-flows so as to make our research reproducible and extensible. We have also included such a work-flow in this dissertation as well as the software engineering component of the research.
APA, Harvard, Vancouver, ISO, and other styles
33

De, la Harpe Alana. "A comparative analysis of mathematical models for HIV epidemiology." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96983.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2015.
HIV infection is one of the world's biggest health problems, with millions of people infected worldwide. HIV infects cells in the immune system, where it primarily targets CD4+ T helper cells, and without treatment the disease leads to the collapse of the host immune system and ultimately death. Mathematical models have been used extensively to study the epidemiology of HIV/AIDS. They have proven to be effective tools in studying the transmission dynamics of HIV. These models provide predictions that can help better our understanding of the epidemiological patterns of HIV, especially the mechanisms associated with the spread of the disease. In this thesis we made a functional comparison between existing epidemiological models for HIV, with the focus of the comparison on the force of infection (FOI). The spread of infection is a crucial part of any infectious disease, as the dynamics of the disease depend greatly on the rate of transmission from an infectious individual to a susceptible individual. First, a review was done to see what deterministic epidemiological models exist. We found that many manuscripts do not provide the necessary information to recreate the authors' results, and only a small number of the models could be simulated. The reason for this is mainly a lack of information or mistakes in the article. The models were divided into four categories for the analysis. On the basis of the FOI, we distinguished between frequency- and density-dependent transmission, and as a second criterion we distinguished models on the sexual activity of the AIDS group. Subsequently, the models were compared in terms of their FOI, within and between these classes. We showed that for larger populations, frequency-dependent transmission should be used. This is the case for HIV, where the disease is mainly spread through sexual contact. Inclusion of AIDS patients in the group of infectious individuals is important for the accuracy of transmission dynamics. More than half of the studies that were selected in the review assumed that AIDS patients are too sick to engage in risky sexual behaviour. We see that including AIDS patients in the infectious class has a significant effect on the FOI when the probability of transmission for an individual with AIDS is larger than that of the other classes. The analysis shows that the FOI can vary depending on the parameter values and the assumptions made. Many models compress various parameter values into one, most often the transmission probability. Not showing the parameter values separately makes it difficult to understand how the FOI works, since there are unknown factors that have an influence. Improving the accuracy of the FOI can help us to better understand what factors influence it, and also produce more realistic results. Writing the probability of transmission as a function of the viral load can help to make the FOI more accurate and also help in understanding the effects that viral dynamics have on the population transmission dynamics.
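The two transmission assumptions compared above can be written compactly (standard forms, not the thesis's full notation). With \(I\) infectious individuals in a population of size \(N\), the force of infection acting on a susceptible individual is

\[
\lambda_{\text{freq}} = \beta\,\frac{I}{N}
\qquad \text{versus} \qquad
\lambda_{\text{dens}} = \beta\,I ,
\]

and including a separate AIDS class \(A\) with its own transmission probability gives, in the frequency-dependent case, \(\lambda = (\beta_I I + \beta_A A)/N\); the size of \(\beta_A\) relative to \(\beta_I\) is what determines how much the inclusion of AIDS patients changes the FOI.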
APA, Harvard, Vancouver, ISO, and other styles
34

Serkov, S. K. "Asymptotic analysis of mathematical models for elastic composite media." Thesis, University of Bath, 1998. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, M. E. M. "Mathematical models of the carding process." Thesis, University of Oxford, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249543.

Full text
Abstract:
Carding is an essential pre-spinning process whereby masses of dirty tufted fibres are cleaned, disentangled and refined into a smooth coherent web. Research and development in this `low-technology' industry have hitherto depended on empirical evidence. In collaboration with the School of Textile Industries at the University of Leeds, a mathematical theory has been developed that describes the passage of fibres through the carding machine. The fibre dynamics in the carding machine are posed, modelled and simulated by three distinct physical problems: the journey of a single fibre, the extraction of fibres from a tuft or tufts and many interconnecting, entangled fibres. A description of the life of a single fibre is given as it is transported through the carding machine. Many fibres are sparsely distributed across machine surfaces, therefore interactions with other neighbouring fibres, either hydrodynamically or by frictional contact points, can be neglected. The aerodynamic forces overwhelm the fibre's ability to retain its crimp or natural curvature, and so the fibre is treated as an inextensible string. Two machine topologies are studied in detail, thin annular regions with hooked surfaces and the nip region between two rotating drums. The theoretical simulations suggest that fibres do not transfer between carding surfaces in annular machine geometries. In contrast to current carding theories, which are speculative, a novel explanation is developed for fibre transfer between the rotating drums. The mathematical simulations describe two distinct mechanisms: strong transferral forces between the taker-in and cylinder and a weaker mechanism between cylinder and doffer. Most fibres enter the carding machine connected to and entangled with other fibres. Fibres are teased from their neighbours and in the case where their neighbours form a tuft, which is a cohesive and resistive fibre structure, a model has been developed to understand how a tuft is opened and broken down during the carding process. Hook-fibre-tuft competitions are modelled in detail: a single fibre extracted from a tuft by a hook and diverging hook-entrained tufts with many interconnecting fibres. Consequently, for each scenario once fibres have been completely or partially extracted, estimates can be made as to the degree to which a tuft has been opened-up. Finally, a continuum approach is used to simulate many interconnected, entangled fibre-tuft populations, focusing in particular on their deformations. A novel approach describes this medium by density, velocity, directionality, alignment and entanglement. The materials responds to stress as an isotropic or transversely isotropic medium dependent on the degree of alignment. Additionally, the material's response to stress is a function of the degree of entanglement which we describe by using braid theory. Analytical solutions are found for elongational and shearing flows, and these compare very well with experiments for certain parameter regimes.
APA, Harvard, Vancouver, ISO, and other styles
36

Crawford, David Michael. "Analysis of biological pattern formation models." Thesis, University of Oxford, 1989. http://ora.ox.ac.uk/objects/uuid:aaa19d3b-c930-4cfa-adc6-8ea498fa5695.

Full text
Abstract:
In this thesis we examine mathematical models which have been suggested as possible mechanisms for forming certain biological patterns. We analyse them in detail, attempting to produce the requisite patterns both analytically and numerically. A reaction-diffusion system in two spatial dimensions with anisotropic diffusion is examined in detail and the results compared with certain snakeskin patterns. We examine two other variants of the standard reaction-diffusion system: a system where the reaction kinetics and the diffusion coefficients depend upon the cell density, suggested as a possible model for the segmentation sequence in Drosophila, and a system where the model parameters have one-dimensional spatial gradients. We also analyse a model derived from known cellular processes used to model the branching behaviour in bryozoans and show that, in one dimension, such a model can, in theory, give all the required solution behaviour. A genetic switch model for pattern elements on butterfly wings is also briefly examined to obtain expressions for the solution behaviour under cold shock.
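A reaction-diffusion system of this type can be explored numerically with a very small amount of code. The sketch below is purely illustrative and is not the thesis's implementation: the kinetics (Schnakenberg) and all parameter values are assumptions chosen only to show where anisotropic diffusion enters the scheme.

```python
# Minimal illustrative sketch: explicit finite-difference integration of a
# two-species reaction-diffusion system with anisotropic inhibitor diffusion.
import numpy as np

n, dx, dt, steps = 64, 1.0, 0.008, 12500
a, b, gamma = 0.1, 0.9, 1.0            # assumed Schnakenberg kinetics parameters
Du_x = Du_y = 1.0                      # activator diffuses isotropically
Dv_x, Dv_y = 40.0, 10.0                # inhibitor diffuses anisotropically (faster in x)

rng = np.random.default_rng(0)
u = (a + b) + 0.01 * rng.standard_normal((n, n))          # perturbed steady state
v = b / (a + b) ** 2 + 0.01 * rng.standard_normal((n, n))

def d2(w, axis):
    """Second central difference along one axis, periodic boundaries."""
    return (np.roll(w, 1, axis) - 2.0 * w + np.roll(w, -1, axis)) / dx ** 2

for _ in range(steps):
    fu = gamma * (a - u + u ** 2 * v)                      # reaction terms
    fv = gamma * (b - u ** 2 * v)
    u = u + dt * (Du_x * d2(u, 0) + Du_y * d2(u, 1) + fu)
    v = v + dt * (Dv_x * d2(v, 0) + Dv_y * d2(v, 1) + fv)

# In the Turing-unstable regime, anisotropic inhibitor diffusion tends to orient
# the emerging pattern (e.g. stripes) along the fast-diffusion direction.
print("u range:", float(u.min()), "to", float(u.max()))
```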
APA, Harvard, Vancouver, ISO, and other styles
37

Hakami, Amir. "Direct sensitivity analysis in air quality models." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04082004-180202/unrestricted/hakami%5Famir%5F200312%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Akileh, Aiman R. "Elastic-plastic analysis of axisymmetrically loaded isotropic circular and annular plates undergoing large deflections." PDXScholar, 1986. https://pdxscholar.library.pdx.edu/open_access_etds/3559.

Full text
Abstract:
The concept of load analogy is used in the elastic and elastic-plastic analysis of isotropic circular and annular plates undergoing moderately large deflection. The effects of the nonlinear terms of lateral displacement and the plastic strains are considered as additional fictitious lateral loads, edge moments, and in-plane forces acting on the plate. The solution of an elastic or elastic-plastic Von Karman-type plate is hence reduced to a set of two equivalent elastic plate problems with small displacements, namely, a plane problem in elasticity and a linear elastic plate bending problem. The finite element method is employed to solve the plane stress problem. The large deflection solutions are then obtained by utilizing the solutions of the linear bending problems through an iterative numerical scheme. The flow theory of plasticity, incorporating a Von Mises layer yield criterion and the Prandtl-Reuss associated flow rule for strain-hardening materials, is employed in this approach.
APA, Harvard, Vancouver, ISO, and other styles
39

Galagedera, Don U. A. "Investment performance appraisal and asset pricing models." Monash University, Dept. of Econometrics and Business Statistics, 2003. http://arrow.monash.edu.au/hdl/1959.1/5780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Sood, Premlata Khetan. "Profit sharing, unemployment, and inflation in Canada : a simulation analysis." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=34459.

Full text
Abstract:
The thesis examines the impact of a partial switch to a share system in Canada on unemployment and inflation. Simulations with an independent Canadian macro model and Canadian data for the period 1973-1983 show that, contrary to Martin Weitzman's claim, profit sharing will not always resolve unemployment and inflation. Some combinations of the share parameters resolve them, while others aggravate them. Thus, the combination of share parameters plays a key role in determining the impact of profit sharing on unemployment and inflation.
APA, Harvard, Vancouver, ISO, and other styles
41

Khalilzadeh, Amir Hossein. "Variance Dependent Pricing Kernels in GARCH Models." Thesis, Uppsala universitet, Analys och tillämpad matematik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

"Multi-period cooperative investment game with risk." 2008. http://library.cuhk.edu.hk/record=b5893772.

Full text
Abstract:
Zhou, Ying.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 89-91).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.2 --- Aims and objectives --- p.2
Chapter 1.3 --- Outline of the thesis --- p.3
Chapter 2 --- Literature Review --- p.6
Chapter 2.1 --- Portfolio Optimization Problems --- p.6
Chapter 2.2 --- Cooperative Games and Cooperative Investment Models --- p.8
Chapter 2.2.1 --- Linear Production Games and Basic Concepts of Cooperative Game Theory --- p.9
Chapter 2.2.2 --- Investment Models Using Linear Production Games --- p.12
Chapter 3 --- Multi-period Cooperative Investment Games: Basic Model --- p.15
Chapter 3.1 --- Cooperative Investment Game under Deterministic Case --- p.16
Chapter 3.2 --- Cooperative Investment Game with Stochastic Return --- p.18
Chapter 3.2.1 --- Basic Assumptions --- p.18
Chapter 3.2.2 --- Choose the Proper Risk Measure --- p.20
Chapter 3.2.3 --- One Period Case --- p.21
Chapter 3.2.4 --- Multi-Period Case --- p.23
Chapter 4 --- The Two-Period Investment Game under L∞ Risk Measure --- p.26
Chapter 4.1 --- The Two Period Model --- p.26
Chapter 4.2 --- The Algorithm --- p.35
Chapter 4.3 --- Optimal Solution of the Dual --- p.41
Chapter 5 --- Primal Solution and Stability of the Core under Two-Period Case --- p.43
Chapter 5.1 --- Direct Results --- p.44
Chapter 5.2 --- Find the Optimal Solutions of the Primal Problem --- p.46
Chapter 5.3 --- Relationship between A and the Core --- p.53
Chapter 5.3.1 --- Tracing out the efficient frontier --- p.54
Chapter 6 --- Multi-Period Case --- p.63
Chapter 6.1 --- Common Risk Price and the Negotiation Process with Concave Risk Utility --- p.64
Chapter 6.1.1 --- Existence of Common Risk Price and Core --- p.65
Chapter 6.1.2 --- Negotiation Process --- p.68
Chapter 6.2 --- Modified Simplex Method --- p.71
Chapter 7 --- Other Risk Measures --- p.76
Chapter 7.1 --- The Downside Risk Measure --- p.76
Chapter 7.1.1 --- Discrete (Finite Scenario) Distributions --- p.78
Chapter 7.1.2 --- General Distributions --- p.81
Chapter 7.2 --- Coherent Risk Measure and CVaR --- p.83
Chapter 8 --- Conclusion and Future Work --- p.87
APA, Harvard, Vancouver, ISO, and other styles
43

"Models of multi-period cooperative re-investment games." 2010. http://library.cuhk.edu.hk/record=b5894494.

Full text
Abstract:
Liu, Weiyang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (p. 111-113).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction and Literature Review --- p.1
Chapter 1.1 --- Introduction --- p.1
Chapter 1.1.1 --- Background and Motivating examples --- p.2
Chapter 1.1.2 --- Basic Concepts --- p.4
Chapter 1.1.3 --- Outline of the thesis --- p.6
Chapter 1.2 --- Literature Review --- p.8
Chapter 2 --- Multi-period Cooperative Re-investment Games: The Basic Model --- p.11
Chapter 2.1 --- Basic settings and assumptions --- p.11
Chapter 2.2 --- The problem --- p.13
Chapter 3 --- Three sub-models and the allocation rule of Sub-Model III --- p.17
Chapter 3.1 --- Three possible sub-models of the basic model --- p.17
Chapter 3.1.1 --- Sub-model I --- p.17
Chapter 3.1.2 --- Sub-model II --- p.18
Chapter 3.1.3 --- Sub-model III --- p.19
Chapter 3.2 --- The allocation rule of Sub-model III --- p.19
Chapter 4 --- A two period example of the revised basic model --- p.25
Chapter 4.1 --- The two period example with two projects --- p.25
Chapter 4.2 --- The algorithm for the dual problem --- p.29
Chapter 5 --- Extensions of the Basic Model --- p.35
Chapter 5.1 --- The model with stochastic budgets --- p.36
Chapter 5.2 --- The core of the model with stochastic budgets --- p.39
Chapter 5.3 --- An example: the two-period case of models with stochastic budgets and an algorithm for the dual problem --- p.46
Chapter 5.4 --- An interesting marginal effect --- p.52
Chapter 5.5 --- "A Model with stochastic project prices, stochastic returns and stochastic budgets" --- p.54
Chapter 6 --- Multi-period Re-investment Model with risks --- p.58
Chapter 6.1 --- The Model with l1 risk measure --- p.58
Chapter 6.2 --- The Model with risk measure --- p.66
Chapter 7 --- Numerical Tests --- p.70
Chapter 7.1 --- The effects of uncertainty changes --- p.71
Chapter 7.2 --- The effects of budget changes --- p.71
Chapter 7.3 --- The effects of budget changes in only one group --- p.71
Chapter 8 --- Conclusive Remarks --- p.77
Chapter A --- Original Data and Analysis for Section 7.1 (Partial) --- p.79
Chapter B --- Data Analysis for Section 7.2 (Partial) --- p.95
Chapter C --- Data Analysis for Section 7.3 (Partial) --- p.98
APA, Harvard, Vancouver, ISO, and other styles
44

Shen, Weiwei. "Portfolio optimization with transaction costs and capital gain taxes." Thesis, 2014. https://doi.org/10.7916/D8PK0D76.

Full text
Abstract:
This thesis is concerned with a new computational study of optimal investment decisions with proportional transaction costs or capital gain taxes over multiple periods. The decisions are studied for investors who have access to a risk-free asset and multiple risky assets to maximize the expected utility of terminal wealth. The risky asset returns are modeled by a discrete-time multivariate geometric Brownian motion. As in the model in Davis and Norman (1990) and Lynch and Tan (2010), the transaction cost is modeled to be proportional to the amount of transferred wealth. As in the model in Dammon et al. (2001) and Dammon et al. (2004), the taxation rule is linear, uses the weighted average tax basis price, and allows an immediate tax credit for a capital loss. For the transaction costs problem, we compute both lower and upper bounds for optimal solutions. We propose three trading strategies to obtain the lower bounds: the hyper-sphere strategy (termed HS); the hyper-cube strategy (termed HC); and the value function optimization strategy (termed VF). The first two strategies parameterize the associated no-trading region by a hyper-sphere and a hyper-cube, respectively. The third strategy relies on approximate value functions used in an approximate dynamic programming algorithm. In order to examine their quality, we compute the upper bounds by a modified gradient-based duality method (termed MG). We apply the new methods across various parameter sets and compare their results with those from the methods in Brown and Smith (2011). We are able to numerically solve problems up to the size of 20 risky assets and a 40-year-long horizon. Compared with their methods, the three novel lower bound methods can achieve higher utilities. HS and HC are about one order of magnitude faster in computation times. The upper bounds from MG are tighter in various examples. The new duality gap is ten times narrower than the one in Brown and Smith (2011) in the best case. In addition, we illustrate how the no-trading region deforms when it reaches the borrowing constraint boundary in state space. To the best of our knowledge, this is the first study of the deformation in no-trading region shape resulting from the borrowing constraint. In particular, we demonstrate how the rectangular no-trading region generated in uncorrelated risky asset cases (see, e.g., Lynch and Tan, 2010; Goodman and Ostrov, 2010) transforms into a non-convex region due to the binding of the constraint. For the capital gain taxes problem, we allow wash sales and rule out "shorting against the box" by imposing nonnegativity on portfolio positions. In order to produce accurate results, we sample the risky asset returns from their continuous distribution directly, leading to a dynamic program with continuous decision and state spaces. We provide ingredients of effective error control in an approximate dynamic programming solution method. Accordingly, the relative numerical error in approximating value functions by a polynomial basis function is about 10E-5 measured by the l1 norm and about 10E-10 by the l2 norm. Through highly accurate numerical solutions and transformed state variables, we are able to explain the optimal trades through an associated no-trading region. We numerically show that, in the new state space, the no-trading region has a similar shape and parameter sensitivity to that of the transaction costs problem in Muthuraman and Kumar (2006) and Lynch and Tan (2010). Our computational results elucidate the impact of volatilities, tax rates, investor risk aversion, and correlations among risky assets on the no-trading region. To the best of our knowledge, this is the first demonstration that the no-trading region of the capital gain taxes problem shares such similar traits with that of the transaction costs problem. We also compute lower and upper bounds for the problem. To obtain the lower bounds we propose five novel trading strategies: the value function optimization (VF) strategy from approximate dynamic programming; the myopic optimization and the rolling buy-and-hold heuristic strategies (MO and RBH); and the realized Merton's and hyper-cube strategies (RM and HC) from policy approximation. In order to examine their performance, we develop two upper bound methods (VUB and GUB) based on the duality technique in Brown et al. (2009) and Brown and Smith (2011). Across various sets of parameters, duality gaps between lower and upper bounds are smaller than 3% in most examples. We are able to solve the problem up to the size of 20 risky assets and a 30-year-long horizon.
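The hyper-cube idea above lends itself to a compact illustration. The sketch below is an assumption-laden toy, not the thesis implementation: drifts, volatilities, correlation, the cost rate and the band half-width are hypothetical values chosen only to show how a no-trade band around target weights interacts with proportional costs under discrete-time multivariate geometric Brownian motion returns.

```python
# Minimal illustrative sketch of a hyper-cube ("HC") style no-trade rule with
# proportional transaction costs; all parameter values are hypothetical.
import numpy as np

T, cost, rf = 40, 0.005, 0.02                      # periods (years), cost rate, risk-free rate
mu = np.array([0.06, 0.07, 0.08])                  # assumed drifts of the risky assets
sig = np.array([0.15, 0.20, 0.25])                 # assumed volatilities
corr = np.full((3, 3), 0.3) + 0.7 * np.eye(3)      # assumed correlation matrix
cov = corr * np.outer(sig, sig)
w_target = np.array([0.2, 0.2, 0.2])               # target risky weights; remainder is risk-free
band = 0.05                                        # hypothetical half-width of the no-trade cube

rng = np.random.default_rng(1)
L = np.linalg.cholesky(cov)
wealth, w = 1.0, w_target.copy()

for t in range(T):
    z = L @ rng.standard_normal(3)
    gross = np.exp(mu - 0.5 * sig ** 2 + z)        # one-period GBM gross returns
    risky_val = wealth * w * gross
    cash_val = wealth * (1.0 - w.sum()) * np.exp(rf)
    wealth = risky_val.sum() + cash_val
    w = risky_val / wealth                         # weights drift away from the target
    outside = np.abs(w - w_target) > band          # HC rule: trade only weights outside the cube
    trade = np.where(outside, w_target - w, 0.0) * wealth
    wealth -= cost * np.abs(trade).sum()           # proportional cost, paid from the cash account
    w = np.where(outside, w_target, w)             # small weight drift from the cost is ignored here

print(f"terminal wealth after {T} periods: {wealth:.3f}")
```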
APA, Harvard, Vancouver, ISO, and other styles
45

Azimi-Zonooz, Aydeen. "A power comparison of mutual fund timing and selectivity models under varying portfolio and market conditions." Thesis, 1992. http://hdl.handle.net/1957/36490.

Full text
Abstract:
The goal of this study is to test the accuracy of various mutual fund timing and selectivity models under a range of portfolio managerial skills and varying market conditions. Portfolio returns in a variety of skill environments are generated using a simulation procedure. The generated portfolio returns are based on the historical patterns and time-series behavior of a market portfolio proxy and on a sample of mutual funds. The proposed timing and selectivity portfolio returns mimic the activities of actual mutual fund managers who possess varying degrees of skill. Using the constructed portfolio returns, various performance models are compared in terms of their power to detect timing and selectivity abilities, by means of an iterative simulation procedure. The frequency of errors in rejecting the null hypotheses of no market-timing and no selectivity ability forms the basis of the power comparison between the models. The results indicate that the time-varying beta models of Lockwood-Kadiyala and Bhattacharya-Pfleiderer rank highest in tests of both market timing and selectivity. The Jensen performance model achieves the best results in selectivity environments in which managers do not possess timing skill. The Henriksson-Merton model performs best in tests of market timing in which managers lack timing skill. The study also investigates the effects of heteroskedasticity on the performance models. The results of analysis before and after model correction for nonconstant error term variance (heteroskedasticity) for specific performance methodologies do not follow a consistent pattern.
Graduation date: 1992
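Two of the competing specifications can be written down in a few lines. The sketch below is not the study's simulation design; the data-generating parameters are assumptions chosen only to show the form of Jensen's regression and the Henriksson-Merton timing regression that such a power comparison repeatedly estimates.

```python
# Minimal illustrative sketch: Jensen's alpha and the Henriksson-Merton
# market-timing regression estimated by OLS on one simulated fund.
import numpy as np

rng = np.random.default_rng(42)
T = 120                                             # months
rm = rng.normal(0.006, 0.04, T)                     # market excess return
true_alpha, true_beta, true_gamma = 0.001, 0.9, 0.3
rp = (true_alpha + true_beta * rm
      + true_gamma * np.maximum(0.0, -rm)           # timing skill: payoff of a free put on the market
      + rng.normal(0.0, 0.02, T))                   # fund excess return

# Jensen: r_p = alpha + beta * r_m + e
X_j = np.column_stack([np.ones(T), rm])
alpha_j, beta_j = np.linalg.lstsq(X_j, rp, rcond=None)[0]

# Henriksson-Merton: r_p = alpha + beta * r_m + gamma * max(0, -r_m) + e
X_hm = np.column_stack([np.ones(T), rm, np.maximum(0.0, -rm)])
alpha_hm, beta_hm, gamma_hm = np.linalg.lstsq(X_hm, rp, rcond=None)[0]

print(f"Jensen:            alpha={alpha_j:.4f}  beta={beta_j:.3f}")
print(f"Henriksson-Merton: alpha={alpha_hm:.4f}  beta={beta_hm:.3f}  gamma={gamma_hm:.3f}")
# A significantly positive gamma is read as evidence of market-timing ability;
# repeating this over many simulated samples gives the rejection frequencies
# used in a power comparison.
```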
APA, Harvard, Vancouver, ISO, and other styles
46

"Dynamic portfolio analysis: mean-variance formulation and iterative parametric dynamic programming." 1998. http://library.cuhk.edu.hk/record=b5889737.

Full text
Abstract:
by Wan-Lung Ng.
Thesis submitted in: November 1997.
On added t.p.: January 19, 1998.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 114-119).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Overview --- p.1
Chapter 1.2 --- Organization Outline --- p.5
Chapter 2 --- Literature Review --- p.7
Chapter 2.1 --- Modern Portfolio Theory --- p.7
Chapter 2.1.1 --- Mean-Variance Model --- p.9
Chapter 2.1.2 --- Setting-up the relationship between the portfolio and its component securities --- p.11
Chapter 2.1.3 --- Identifying the efficient frontier --- p.12
Chapter 2.1.4 --- Selecting the best compromised portfolio --- p.13
Chapter 2.2 --- Stochastic Optimal Control --- p.17
Chapter 2.2.1 --- Dynamic Programming --- p.18
Chapter 2.2.2 --- Dynamic Programming Decomposition --- p.21
Chapter 3 --- Multiple Period Portfolio Analysis --- p.23
Chapter 3.1 --- Maximization of Multi-period Consumptions --- p.24
Chapter 3.2 --- Maximization of Utility of Terminal Wealth --- p.29
Chapter 3.3 --- Maximization of Expected Average Compounded Return --- p.33
Chapter 3.4 --- Minimization of Time to Reach Target --- p.35
Chapter 3.5 --- Goal-Seeking Investment Model --- p.37
Chapter 4 --- Multi-period Mean-Variance Analysis with a Riskless Asset --- p.40
Chapter 4.1 --- Motivation --- p.40
Chapter 4.2 --- Dynamic Mean-Variance Analysis Formulation --- p.43
Chapter 4.3 --- Auxiliary Problem Formulation --- p.45
Chapter 4.4 --- Efficient Frontier in Multi-period Portfolio Selection --- p.53
Chapter 4.5 --- Observations --- p.58
Chapter 4.6 --- Solution Algorithm for Problem E (w) --- p.62
Chapter 4.7 --- Illustrative Examples --- p.63
Chapter 4.8 --- Verification with Single-period Efficient Frontier --- p.72
Chapter 4.9 --- Generalization to Cases with Nonlinear Utility Function of E(xT) and Var(xT) --- p.75
Chapter 5 --- Dynamic Portfolio Selection without Risk-less Assets --- p.84
Chapter 5.1 --- Construction of Auxiliary Problem --- p.88
Chapter 5.2 --- Analytical Solution for Efficient Frontier --- p.89
Chapter 5.3 --- Reduction to Investment Situations with One Risk-free Asset --- p.101
Chapter 5.4 --- "Multi-period Portfolio Selection via Maximizing Utility function U(E {xT),Var (xT))" --- p.103
Chapter 6 --- Conclusions and Recommendations --- p.108
Chapter 6.1 --- Summaries and Achievements --- p.108
Chapter 6.2 --- Future Studies --- p.110
Chapter 6.2.1 --- Constrained Investment Situations --- p.110
Chapter 6.2.2 --- Including Higher Moments --- p.111
APA, Harvard, Vancouver, ISO, and other styles
47

"Online banking investment decision with real option pricing analysis." 2001. http://library.cuhk.edu.hk/record=b5890704.

Full text
Abstract:
Chu Chun-fai, Carlin.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 69-73).
Abstracts in English and Chinese.
Chapter Part I: --- INTRODUCTION --- p.1
Chapter Part II: --- LITERATURE REVIEW --- p.4
Chapter - --- Financial option-pricing theory
Chapter - --- Real option-pricing theory
Chapter - --- Real option-pricing theory in Management Information System Area
Chapter Part III: --- CASE BACKGROUND --- p.14
Chapter - --- Case Background
Chapter - --- Availability of online banking services in Hong Kong
Chapter - --- Online banking investment in the Hong Kong Chinese Bank
Chapter Part IV: --- RESEARCH MODEL --- p.19
Chapter - --- Research model
Chapter - --- Modelling of the optimal timing problem of HKCB
Chapter - --- Justification of geometric Brownian motion assumption for using Black-Scholes formula
Chapter Part V : --- DATA COLLECTION --- p.30
Chapter Part VI: --- ANALYSIS RESULT --- p.35
Chapter - --- Analysis result
Chapter - --- Sensitivity analysis on the selected parameters
Chapter - --- Suggested investment timing
Chapter Part VII: --- DISCUSSIONS AND IMPLICATIONS --- p.44
Chapter - --- Result discussion
Chapter - --- Implications for researchers
Chapter - --- Implications for practitioners
Chapter Part VIII: --- LIMITATIONS AND CONTRIBUTIONS --- p.48
Chapter - --- Limitation on data collection process
Chapter - --- Limitations on Black-Scholes model
Chapter - --- Contributions
APPENDIX
Appendix A -Limitation of traditional Discounted Cash Flow analysis --- p.51
Appendix B -Banks services available to the customers --- p.54
Appendix C -Sample path of a Geometric Brownian Motion --- p.56
Appendix D -Discounted Cash Flows analysis of immediate entry of online banking investment --- p.57
Appendix E -Black-Scholes formula and its interpretation for non-traded --- p.61
Appendix F -Questionnaire for Online banking investment --- p.64
Appendix G -Availability of online banking services in May 2001 --- p.67
Appendix H -Sensitivity analysis on the number of initial usage --- p.68
Appendix I -Reference List --- p.69
APA, Harvard, Vancouver, ISO, and other styles
48

"Value-at-risk analysis of portfolio return model using independent component analysis and Gaussian mixture model." 2004. http://library.cuhk.edu.hk/record=b5892248.

Full text
Abstract:
Sen Sui.
Thesis submitted in: August 2003.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 88-92).
Abstracts in English and Chinese.
Abstract --- p.ii
Acknowledgement --- p.iv
Dedication --- p.v
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation and Objective --- p.1
Chapter 1.2 --- Contributions --- p.4
Chapter 1.3 --- Thesis Organization --- p.5
Chapter 2 --- Background of Risk Management --- p.7
Chapter 2.1 --- Measuring Return --- p.8
Chapter 2.2 --- Objectives of Risk Measurement --- p.11
Chapter 2.3 --- Simple Statistics for Measurement of Risk --- p.15
Chapter 2.4 --- Methods for Value-at-Risk Measurement --- p.16
Chapter 2.5 --- Conditional VaR --- p.18
Chapter 2.6 --- Portfolio VaR Methods --- p.18
Chapter 2.7 --- Coherent Risk Measure --- p.20
Chapter 2.8 --- Summary --- p.22
Chapter 3 --- Selection of Independent Factors for VaR Computation --- p.23
Chapter 3.1 --- Mixture Convolution Approach Restated --- p.24
Chapter 3.2 --- Procedure for Selection and Evaluation --- p.26
Chapter 3.2.1 --- Data Preparation --- p.26
Chapter 3.2.2 --- ICA Using JADE --- p.27
Chapter 3.2.3 --- Factor Statistics --- p.28
Chapter 3.2.4 --- Factor Selection --- p.29
Chapter 3.2.5 --- Reconstruction and VaR Computation --- p.30
Chapter 3.3 --- Result and Comparison --- p.30
Chapter 3.4 --- Problem of Using Kurtosis and Skewness --- p.40
Chapter 3.5 --- Summary --- p.43
Chapter 4 --- Mixture of Gaussians and Value-at-Risk Computation --- p.45
Chapter 4.1 --- Complexity of VaR Computation --- p.45
Chapter 4.1.1 --- Factor Selection Criteria and Convolution Complexity --- p.46
Chapter 4.1.2 --- Sensitivity of VaR Estimation to Gaussian Components --- p.47
Chapter 4.2 --- Gaussian Mixture Model --- p.52
Chapter 4.2.1 --- Concept and Justification --- p.52
Chapter 4.2.2 --- Formulation and Method --- p.53
Chapter 4.2.3 --- Result and Evaluation of Fitness --- p.55
Chapter 4.2.4 --- Evaluation of Fitness using Z-Transform --- p.56
Chapter 4.2.5 --- Evaluation of Fitness using VaR --- p.58
Chapter 4.3 --- VaR Estimation using Convoluted Mixtures --- p.60
Chapter 4.3.1 --- Portfolio Returns by Convolution --- p.61
Chapter 4.3.2 --- VaR Estimation of Portfolio Returns --- p.64
Chapter 4.3.3 --- Result and Analysis --- p.64
Chapter 4.4 --- Summary --- p.68
Chapter 5 --- VaR for Portfolio Optimization and Management --- p.69
Chapter 5.1 --- Review of Concepts and Methods --- p.69
Chapter 5.2 --- Portfolio Optimization Using VaR --- p.72
Chapter 5.3 --- Contribution of the VaR by ICA/GMM --- p.76
Chapter 5.4 --- Summary --- p.79
Chapter 6 --- Conclusion --- p.80
Chapter 6.1 --- Future Work --- p.82
Chapter A --- Independent Component Analysis --- p.83
Chapter B --- Gaussian Mixture Model --- p.85
Bibliography --- p.88
APA, Harvard, Vancouver, ISO, and other styles
49

Drienko, Jozef. "Testing asset pricing models using market expectations." Phd thesis, 2013. http://hdl.handle.net/1885/150890.

Full text
Abstract:
We investigate the use of market-based expectations to test the CAPM and the conditional CAPM using a generalised method of moments framework. This method is valid under much weaker distributional assumptions and provides the procedure with robustness that commonly employed tests lack. Expected returns are derived from projected price levels of individual securities that are supplied in the form of twelve-month consensus (median) target price forecasts. The annual forecasts, updated each month, are combined with dividend expectations to calculate the necessary time series of continuous expected returns. As such, we are able to avoid the use of instrumental variable models that, we argue, are likely to suffer from data-overfitting concerns. In fact, we find that expected returns estimated from analyst data, while certainly not perfect, provide a better fit in comparison to the existing instrumental variable models. In considering the testable implication of the model via a vector of orthogonality conditions, we find that using market-based expectations to test the CAPM directly leads to a rejection. Overall, the CAPM tends to underestimate returns, producing pricing errors that are large, positive and statistically significant. Our results link in with the existing asset pricing literature that also attempts to apply forward-looking data derived from analyst forecasts. The conditional CAPM, a model that benefits from time-varying parameters that are updateable in accordance with changes to the information set, is also rejected. Market-based expectations are used to parameterise the marginal rate of substitution. While the results of our tests of the conditional CAPM indicate that the model is able to perform better than those reported in previous studies, it continues to consistently underestimate returns in contravention of the null hypothesis. This indicates that the market, as the sole risk factor of the model, is not enough to explain the variation of returns across assets. While beta-risk may be priced, the CAPM may not account for all priced risk factors.
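The core construction is easy to illustrate. The sketch below uses hypothetical numbers, not the thesis data or procedure: a market-based expected return is formed from a twelve-month consensus target price plus an expected dividend, and the resulting CAPM pricing error is the quantity that would enter a GMM orthogonality condition.

```python
# Minimal illustrative sketch: market-based expected returns and CAPM pricing errors.
import numpy as np

price = np.array([50.0, 20.0, 100.0])      # current prices (hypothetical)
target = np.array([56.0, 21.5, 112.0])     # twelve-month consensus target prices (hypothetical)
div = np.array([1.5, 0.4, 2.0])            # expected dividends over the year (hypothetical)
beta = np.array([1.1, 0.8, 1.3])           # CAPM betas, taken as given here
rf, erm = 0.03, 0.08                       # risk-free rate and expected market return

exp_ret = np.log((target + div) / price)   # continuously compounded expected return
pricing_err = exp_ret - (rf + beta * (erm - rf))

print("expected returns:", np.round(exp_ret, 4))
print("pricing errors  :", np.round(pricing_err, 4))
# In a GMM test, E[pricing_err * instruments] = 0 supplies the orthogonality
# conditions; persistently positive errors correspond to the CAPM
# underestimating returns, as reported in the abstract.
```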
APA, Harvard, Vancouver, ISO, and other styles
50

"Multi-period portfolio optimization." Thesis, 2009. http://library.cuhk.edu.hk/record=b6074946.

Full text
Abstract:
In this thesis, we focus on multi-period portfolio selection problems under different investment conditions. We first analyze the mean-variance multi-period portfolio selection problem with a stochastic investment horizon. It is often the case that some unexpected endogenous and exogenous events may force an investor to terminate her investment and leave the market. We assume that the uncertain investment horizon follows a given stochastic process. By making use of the embedding technique of Li and Ng (2000), the original nonseparable problem can be solved by solving an auxiliary problem. Under the given assumption, the auxiliary problem can be translated into one with a deterministic exit time and solved by dynamic programming. Furthermore, we consider the mean-variance formulation of multi-period portfolio optimization for asset-liability management with an exogenous uncertain investment horizon. Secondly, we consider the multi-period portfolio selection problem in an incomplete market under either a no-short-selling constraint or a transaction cost constraint. We assume that the sample space is finite, and the number of possible security price vector transitions is equal to the number of securities. By introducing a family of auxiliary markets, we connect the primal problem to a set of optimization problems without the no-short-selling constraint or without the transaction cost constraint. In the no-short-selling case, the auxiliary problem can be solved by using the martingale method of Pliska (1986), and the optimal terminal wealth of the original constrained problem can be derived. In the transaction cost case, we find that the dual problem, which is to minimize the optimal value over the set of optimization problems, is equivalent to the primal problem when the primal problem has a solution, and we thus characterize the optimal solution accordingly.
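For readers unfamiliar with the embedding technique mentioned above, its standard form (generic notation, not the thesis's own) replaces the nonseparable mean-variance objective with a separable auxiliary one:

```latex
\max_{\pi}\; E[x_T] - w\,\operatorname{Var}(x_T)
\quad\longrightarrow\quad
\max_{\pi}\; E\!\left[\lambda\,x_T - w\,x_T^{2}\right]
```

The auxiliary problem can be solved stage by stage by dynamic programming, and an optimal policy of the original problem is recovered from the auxiliary family at \(\lambda^{*} = 1 + 2w\,E[x_T^{*}]\).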
Yi, Lan.
Adviser: Duan Li.
Source: Dissertation Abstracts International, Volume: 72-11, Section: A, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 133-139).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest dissertations and theses, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles