Dissertations / Theses on the topic 'Variance model'

Consult the top 50 dissertations / theses for your research on the topic 'Variance model.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Xiao, Yan. "Evaluating Variance of the Model Credibility Index." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/math_theses/39.

Full text
Abstract:
The model credibility index is defined to be the sample size at which the power of rejection equals 0.5. It applies the reasoning of goodness-of-fit testing and uses a one-number summary statistic as an assessment tool in a false-model world. The estimation of the model credibility index involves a bootstrap resampling technique. To assess the consistency of the estimator of the model credibility index, we instead study the variance of the power achieved at a fixed sample size. An improved subsampling method is proposed to obtain an unbiased estimator of the variance of the power. We present two examples to illustrate the mechanics of building the model credibility index and estimating its error in model selection. One example is a two-way independence model tested by the Pearson chi-square test, and the other is a multi-dimensional logistic regression model tested by the likelihood ratio test.
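A minimal sketch of the core idea described above: estimate the rejection power of a goodness-of-fit test at a fixed subsample size by resampling, then look for the size at which power crosses 0.5. The Poisson working model and the synthetic data below are hypothetical stand-ins, not the examples used in the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.negative_binomial(5, 0.5, size=2000)  # hypothetical data, overdispersed relative to Poisson

def power_at(n, data, reps=500, alpha=0.05):
    """Bootstrap estimate of the probability that a Poisson goodness-of-fit
    test rejects when fitted to subsamples of size n."""
    rejections = 0
    for _ in range(reps):
        x = rng.choice(data, size=n, replace=True)
        lam = x.mean()
        kmax = int(x.max())
        obs = np.bincount(x, minlength=kmax + 1).astype(float)
        exp = stats.poisson.pmf(np.arange(kmax + 1), lam) * n
        exp[-1] += n - exp.sum()          # fold remaining tail mass into the last bin
        keep = exp > 1e-9
        chi2 = ((obs[keep] - exp[keep]) ** 2 / exp[keep]).sum()
        dof = keep.sum() - 1 - 1          # minus one extra for the estimated mean
        if stats.chi2.sf(chi2, dof) < alpha:
            rejections += 1
    return rejections / reps

# model credibility index: the smallest n with estimated power >= 0.5
for n in (25, 50, 100, 200, 400):
    print(n, power_at(n, data))
```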
APA, Harvard, Vancouver, ISO, and other styles
2

Prosser, Robert James. "Robustness of multivariate mixed model ANOVA." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25511.

Full text
Abstract:
In experimental or quasi-experimental studies in which a repeated measures design is used, it is common to obtain scores on several dependent variables on each measurement occasion. Multivariate mixed model (MMM) analysis of variance (Thomas, 1983) is a recently developed alternative to the MANOVA procedure (Bock, 1975; Timm, 1980) for testing multivariate hypotheses concerning effects of a repeated factor (called occasions in this study) and interaction between repeated and non-repeated factors (termed group-by-occasion interaction here). If a condition derived by Thomas (1983), multivariate multi-sample sphericity (MMS), regarding the equality and structure of orthonormalized population covariance matrices is satisfied (given multivariate normality and independence for distributions of subjects' scores), valid likelihood-ratio MMM tests of group-by-occasion interaction and occasions hypotheses are possible. To date, no information has been available concerning actual (empirical) levels of significance of such tests when the MMS condition is violated. This study was conducted to begin to provide such information. Departure from the MMS condition can be classified into three types— termed departures of types A, B, and C respectively: (A) the covariance matrix for population ℊ (ℊ = 1,...G), when orthonormalized, has an equal-diagonal-block form but the resulting matrix for population ℊ is unequal to the resulting matrix for population ℊ' (ℊ ≠ ℊ'); (B) the G populations' orthonormalized covariance matrices are equal, but the matrix common to the populations does not have equal-diagonal-block structure; or (C) one or more populations has an orthonormalized covariance matrix which does not have equal-diagonal-block structure and two or more populations have unequal orthonormalized matrices. In this study, Monte Carlo procedures were used to examine the effect of each type of violation in turn on the Type I error rates of multivariate mixed model tests of group-by-occasion interaction and occasions null hypotheses. For each form of violation, experiments modelling several levels of severity were simulated. In these experiments: (a) the number of measured variables was two; (b) the number of measurement occasions was three; (c) the number of populations sampled was two or three; (d) the ratio of average sample size to number of measured variables was six or 12; and (e) the sample size ratios were 1:1 and 1:2 when G was two, and 1:1:1 and 1:1:2 when G was three. In experiments modelling violations of types A and C, the effects of negative and positive sampling were studied. When type A violations were modelled and samples were equal in size, actual Type I error rates did not differ significantly from nominal levels for tests of either hypothesis except under the most severe level of violation. In type A experiments using unequal groups in which the largest sample was drawn from the population whose orthogonalized covariance matrix has the smallest determinant (negative sampling), actual Type I error rates were significantly higher than nominal rates for tests of both hypotheses and for all levels of violation. In contrast, empirical levels of significance were significantly lower than nominal rates in type A experiments in which the largest sample was drawn from the population whose orthonormalized covariance matrix had the largest determinant (positive sampling). Tests of both hypotheses tended to be liberal in experiments which modelled type B violations. 
No strong relationships were observed between actual Type I error rates and any of: severity of violation, number of groups, ratio of average sample size to number of variables, and relative sizes of samples. In equal-groups experiments modelling type C violations in which the orthonormalized pooled covariance matrix departed at the more severe level from equal-diagonal-block form, actual Type I error rates for tests of both hypotheses tended to be liberal. Findings were more complex under the less severe level of structural departure. Empirical significance levels did not vary with the degree of interpopulation heterogeneity of orthonormalized covariance matrices. In type C experiments modelling negative sampling, tests of both hypotheses tended to be liberal. Degree of structural departure did not appear to influence actual Type I error rates but degree of interpopulation heterogeneity did. Actual Type I error rates in type C experiments modelling positive sampling were apparently related to the number of groups. When two populations were sampled, both tests tended to be conservative, while for three groups, the results were more complex. In general, under all types of violation the ratio of average group size to number of variables did not greatly affect actual Type I error rates. The report concludes with suggestions for practitioners considering use of the MMM procedure based upon the findings and recommends four avenues for future research on Type I error robustness of MMM analysis of variance. The matrix pool and computer programs used in the simulations are included in appendices.
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
3

Moravec, Radek. "Oceňování opcí a variance gama proces." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-18707.

Full text
Abstract:
The submitted work deals with option pricing. The mathematical approach is immediately followed by an economic interpretation. The main problem is to model the underlying uncertainties driving the stock price. Using two well-known valuation models, the binomial model and the Black-Scholes model, we explain basic principles, especially risk-neutral pricing. Due to empirical biases, new models based on pure jump processes have been developed. The variance gamma process and its special symmetric case are presented.
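As a quick illustration of the risk-neutral principle mentioned above, here is a short sketch of a Cox-Ross-Rubinstein binomial pricer for a European call; the parameter values are arbitrary and chosen only for the example.

```python
import math

def crr_call(S0, K, r, sigma, T, steps):
    """European call under the Cox-Ross-Rubinstein binomial model."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    q = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * T)
    price = 0.0
    for j in range(steps + 1):               # j up-moves out of `steps`
        prob = math.comb(steps, j) * q**j * (1 - q)**(steps - j)
        payoff = max(S0 * u**j * d**(steps - j) - K, 0.0)
        price += prob * payoff
    return disc * price

print(crr_call(S0=100, K=100, r=0.03, sigma=0.2, T=1.0, steps=200))  # close to the Black-Scholes value
```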
APA, Harvard, Vancouver, ISO, and other styles
4

Abdumuminov, Shuhrat, and David Emanuel Esteky. "Black-Litterman Model: Practical Asset Allocation Model Beyond Traditional Mean-Variance." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-32427.

Full text
Abstract:
This paper consolidates and compares the applicability and practicality of the Black-Litterman model versus the traditional Markowitz mean-variance model. Although a well-known model such as mean-variance is academically sound and popular, it is rarely used among asset managers due to its deficiencies. To put the discussion into context we shed light on the improvement made by Fischer Black and Robert Litterman by putting the performance and practicality of both the Black-Litterman and Markowitz mean-variance models to the test. We illustrate detailed mathematical derivations of how the models are constructed and bring clarity and a profound understanding of the intuition behind them. We generate two different portfolios, composed of data from 10 Swedish equities over the course of a 10-year period, and select the 30-day Swedish Treasury Bill as the risk-free rate. The resulting portfolios orient our discussion towards a better comparison of the performance and applicability of the two models, and we illustrate the differences theoretically and geometrically. Finally, based on the extracted results of the performance of both models, we demonstrate the superiority and practicality of the Black-Litterman model, which in our particular case outperforms the traditional mean-variance model.
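For reference, a compact sketch of the unconstrained Markowitz mean-variance solution that the thesis uses as its baseline; the expected-return and covariance inputs below are made-up placeholders, not the Swedish equity data.

```python
import numpy as np

def mean_variance_weights(mu, cov, risk_aversion=2.5):
    """Unconstrained mean-variance solution w = (1/delta) * inv(Sigma) * mu
    (excess returns; no budget or short-sale constraints)."""
    return np.linalg.solve(cov, mu) / risk_aversion

mu = np.array([0.06, 0.04, 0.05])                       # hypothetical expected excess returns
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.025, 0.004],
                [0.010, 0.004, 0.030]])                 # hypothetical covariance matrix
w = mean_variance_weights(mu, cov)
print(w, w @ mu, np.sqrt(w @ cov @ w))                  # weights, expected excess return, volatility
```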
APA, Harvard, Vancouver, ISO, and other styles
5

Tjärnström, Fredrik. "Variance expressions and model reduction in system identification /." Linköping : Univ, 2002. http://www.bibl.liu.se/liupubl/disp/disp2002/tek730s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Finlay, Richard. "The Variance Gamma (VG) Model with Long Range Dependence." University of Sydney, 2009. http://hdl.handle.net/2123/5434.

Full text
Abstract:
Doctor of Philosophy (PhD)
This thesis mainly builds on the Variance Gamma (VG) model for financial assets over time of Madan & Seneta (1990) and Madan, Carr & Chang (1998), although the model based on the t distribution championed in Heyde & Leonenko (2005) is also given attention. The primary contribution of the thesis is the development of VG models, and the extension of t models, which accommodate a dependence structure in asset price returns. In particular it has become increasingly clear that while returns (log price increments) of historical financial asset time series appear as a reasonable approximation of independent and identically distributed data, squared and absolute returns do not. In fact squared and absolute returns show evidence of being long range dependent through time, with autocorrelation functions that are still significant after 50 to 100 lags. Given this evidence against the assumption of independent returns, it is important that models for financial assets be able to accommodate a dependence structure.
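The dependence evidence described here is easy to reproduce in outline: the sample autocorrelation of returns dies out quickly while that of squared or absolute returns decays slowly. A sketch with simulated data standing in for a real return series (a GARCH-type generator is used purely as an illustrative stand-in; it is not the long-range dependent VG construction of the thesis):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
# a GARCH(1,1)-type series: uncorrelated levels, persistent autocorrelation in squares
n, omega, alpha, beta = 20000, 1e-6, 0.09, 0.90
r = np.empty(n)
sig2 = omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(sig2) * rng.standard_normal()
    sig2 = omega + alpha * r[t] ** 2 + beta * sig2

abs_acf = acf(np.abs(r), 100)
print("lag 1-5 ACF of returns:", np.round(acf(r, 5), 3))
print("lag 1-5 ACF of squared returns:", np.round(acf(r ** 2, 5), 3))
print("ACF of |r| at lags 50 and 100:", round(abs_acf[49], 3), round(abs_acf[99], 3))
```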
APA, Harvard, Vancouver, ISO, and other styles
7

Robinson, Timothy J. "Dual Model Robust Regression." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/11244.

Full text
Abstract:
In typical normal theory regression, the assumption of homogeneity of variances is often not appropriate. Instead of treating the variances as a nuisance and transforming away the heterogeneity, the structure of the variances may be of interest and it is desirable to model the variances. Aitkin (1987) proposes a parametric dual model in which a log-linear dependence of the variances on a set of explanatory variables is assumed. Aitkin's parametric approach is an iterative one providing estimates for the parameters in the mean and variance models through joint maximum likelihood. Estimation of the mean and variance parameters is interrelated, as the responses in the variance model are the squared residuals from the fit to the means model. When one or both of the models (the mean or variance model) are misspecified, parametric dual modeling can lead to faulty inferences. An alternative to parametric dual modeling is to let the data completely determine the form of the true underlying mean and variance functions (nonparametric dual modeling). However, nonparametric techniques often result in estimates which are characterized by high variability and they ignore important knowledge that the user may have regarding the process. Mays and Birch (1996) have demonstrated an effective semiparametric method in the one-regressor, single-model regression setting which is a "hybrid" of parametric and nonparametric fits. Using their techniques, we develop a dual modeling approach which is robust to misspecification in either or both of the two models. Examples are presented to illustrate the new technique, termed here Dual Model Robust Regression.
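A bare-bones sketch of the parametric dual-modelling iteration described above (Aitkin-style joint fitting): weighted least squares for the mean model alternating with a gamma-GLM-style log-linear fit of the squared residuals for the variance model. The variable names and simulated data are illustrative only, not the thesis's semiparametric procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])          # mean-model design
Z = np.column_stack([np.ones(n), x])          # variance-model design (log-linear)
beta_true, gamma_true = np.array([1.0, 2.0]), np.array([-2.0, 1.5])
y = X @ beta_true + rng.normal(scale=np.exp(0.5 * (Z @ gamma_true)), size=n)

gamma = np.zeros(Z.shape[1])                  # start from constant variance
for _ in range(20):
    w = np.exp(-(Z @ gamma))                                     # weights = 1 / fitted variance
    Xw = X * w[:, None]
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)                   # weighted least squares for the mean
    d = (y - X @ beta) ** 2                                      # squared residuals as variance responses
    mu = np.exp(Z @ gamma)
    z = Z @ gamma + (d - mu) / mu                                # IRLS working response for a log link
    gamma = np.linalg.solve(Z.T @ Z, Z.T @ z)                    # gamma-family IRLS step (unit weights)

print("mean coefficients:", np.round(beta, 3))
print("variance (log-linear) coefficients:", np.round(gamma, 3))
```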
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
8

Roh, Kyoungmin. "Evolutionary variance of gene network model via simulated annealing." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Letsoalo, Marothi Peter. "Assessing variance components of multilevel models pregnancy data." Thesis, University of Limpopo, 2019. http://hdl.handle.net/10386/2873.

Full text
Abstract:
Thesis (M.Sc. (Statistics))
Most social and health science data are longitudinal and additionally multilevel in nature, which means that response data are grouped by attributes of some cluster. Ignoring the differences and similarities generated by these clusters results in misleading estimates, hence motivating the need to assess variance components (VCs) using multilevel models (MLMs) or generalised linear mixed models (GLMMs). This study explored and fitted teenage pregnancy census data gathered from 2011 to 2015 by the Africa Centre in KwaZulu-Natal, South Africa. Exploration of these data revealed a two-level pure hierarchy data structure, with teenage pregnancy status for some years nested within female teenagers. To fit these data, the effects that census year (year) and three female characteristics (namely age (age), number of household members (idhhms), and number of children before the observation year (nch)) have on teenage pregnancy were examined. Model building in this work firstly fitted a logit generalised linear model (GLM) under the assumption that teenage pregnancy measurements are independent between females, and secondly fitted a GLMM or MLM with a female random effect. The better-fitting GLMM indicated, for an additional year on year, a 0.203 decrease in the log odds of teenage pregnancy, while the GLM suggested a 0.21 decrease and a 0.557 increase for each additional year on age and year, respectively. A GLM with only the year effect uncovered a fixed estimate that is higher, by 0.04, than that of the better-fitting GLMM. The inconsistency in the effect of year was caused by a significant female cluster variance of approximately 0.35, which was used to compute the VCs. Given the effect of year, the VCs suggested that 9.5% of the differences in teenage pregnancy lie between females, while the similarity between measurements on the same female is 0.095 (on a scale from 0 to 1). It was also revealed that the year effect does not vary within females. Apart from the small differences between the observed estimates of the fitted GLM and GLMM, this work produced evidence that accounting for the cluster effect improves the accuracy of estimates. Keywords: Multilevel Model, Generalised Linear Mixed Model, Variance Components, Hierarchical Data Structure, Social Science Data, Teenage Pregnancy
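The 9.5% figure quoted above corresponds to the usual latent-variable variance partition coefficient for a logit GLMM, obtained by comparing the cluster variance with the standard logistic variance π²/3; a quick check under that assumption:

```python
import math

sigma2_female = 0.35                    # estimated female (cluster) variance on the logit scale
sigma2_logistic = math.pi ** 2 / 3      # level-1 variance implied by the standard logistic distribution

vpc = sigma2_female / (sigma2_female + sigma2_logistic)
print(f"variance partition coefficient = {vpc:.3f}")   # about 0.096, consistent with the reported 9.5%
```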
APA, Harvard, Vancouver, ISO, and other styles
10

Brien, Christopher J. "Factorial linear model analysis." Title page, table of contents and summary only, 1992. http://thesis.library.adelaide.edu.au/public/adt-SUA20010530.175833.

Full text
Abstract:
"February 1992" Bibliography: leaf 323-344. Electronic publication; Full text available in PDF format; abstract in HTML format. Develops a general strategy for factorial linear model analysis for experimental and observational studies, an iterative, four-stage, model comparison procedure. The approach is applicable to studies characterized as being structure-balanced, multitiered and based on Tjur structures unless the structure involves variation factors when it must be a regular Tjur structure. It covers a wide range of experiments including multiple-error, change-over, two-phase, superimposed and unbalanced experiments. Electronic reproduction.[Australia] :Australian Digital Theses Program,2001.
APA, Harvard, Vancouver, ISO, and other styles
11

Caples, Jerry Joseph. "Variance reduction and variable selection methods for Alho's logistic capture recapture model with applications to census data /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Gumedze, Freedom Nkhululeko. "A variance shift model for outlier detection and estimation in linear and linear mixed models." Doctoral thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/4381.

Full text
Abstract:
Includes abstract.
Includes bibliographical references.
Outliers are data observations that fall outside the usual conditional ranges of the response data. They are common in experimental research data, for example, due to transcription errors or faulty experimental equipment. Often outliers are quickly identified and addressed, that is, corrected, removed from the data, or retained for subsequent analysis. However, in many cases they are completely anomalous and it is unclear how to treat them. Case deletion techniques are established methods in detecting outliers in linear fixed effects analysis. The extension of these methods to detecting outliers in linear mixed models has not been entirely successful in the literature. This thesis focuses on a variance shift outlier model as an approach to detecting and assessing outliers in both linear fixed effects and linear mixed effects analysis. A variance shift outlier model assumes a variance shift parameter, wi, for the ith observation, where wi is unknown and estimated from the data. Estimated values of wi indicate observations with possibly inflated variances relative to the remainder of the observations in the data set and hence outliers. When outliers lurk within anomalous elements in the data set, a variance shift outlier model offers an opportunity to include anomalies in the analysis, but down-weighted using the variance shift estimate wi. This down-weighting might be considered preferable to omitting data points (as in case-deletion methods). For very large values of wi a variance shift outlier model is approximately equivalent to the case deletion approach.
APA, Harvard, Vancouver, ISO, and other styles
13

Gumedze, Freedom Nkhululeko. "A variance shift model for outlier detection and estimation in linear and linear mixed models." Doctoral thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/4380.

Full text
Abstract:
Outliers are data observations that fall outside the usual conditional ranges of the response data. They are common in experimental research data, for example, due to transcription errors or faulty experimental equipment. Often outliers are quickly identified and addressed, that is, corrected, removed from the data, or retained for subsequent analysis. However, in many cases they are completely anomalous and it is unclear how to treat them. Case deletion techniques are established methods in detecting outliers in linear fixed effects analysis. The extension of these methods to detecting outliers in linear mixed models has not been entirely successful in the literature. This thesis focuses on a variance shift outlier model as an approach to detecting and assessing outliers in both linear fixed effects and linear mixed effects analysis. A variance shift outlier model assumes a variance shift parameter, ωi, for the ith observation, where ωi is unknown and estimated from the data. Estimated values of ωi indicate observations with possibly inflated variances relative to the remainder of the observations in the data set and hence outliers. When outliers lurk within anomalous elements in the data set, a variance shift outlier model offers an opportunity to include anomalies in the analysis, but down-weighted using the variance shift estimate ω̂i. This down-weighting might be considered preferable to omitting data points (as in case-deletion methods). For very large values of ωi a variance shift outlier model is approximately equivalent to the case deletion approach. We commence with a detailed review of parameter estimation and inferential procedures for the linear mixed model. The review is necessary for the development of the variance shift outlier model as a method for detecting outliers in linear fixed and linear mixed models. This review is followed by a discussion of the status of current research into linear mixed model diagnostics. Different types of residuals in the linear mixed model are defined. A decomposition of the leverage matrix for the linear mixed model leads to interpretable leverage measures. A detailed review of a variance shift outlier model in linear fixed effects analysis is given. The purpose of this review is firstly, to gain insight into the general case (the linear mixed model) and secondly, to develop the model further in linear fixed effects analysis. A variance shift outlier model can be formulated as a linear mixed model so that the calculations required to estimate the parameters of the model are those associated with fitting a linear mixed model, and hence the model can be fitted using standard software packages. Likelihood ratio and score test statistics are developed as objective measures for the variance shift estimates. The proposed test statistics initially assume balanced longitudinal data with a Gaussian distributed response variable. The dependence of the proposed test statistics on the second derivatives of the log-likelihood function is also examined. For the single-case outlier in linear fixed effects analysis, analytical expressions for the proposed test statistics are obtained. A resampling algorithm is proposed for assessing the significance of the proposed test statistics and for handling the problem of multiple testing. A variance shift outlier model is then adapted to detect a group of outliers in a fixed effects model. Properties and performance of the likelihood ratio and score test statistics are also investigated.
A variance shift outlier model for detecting single-case outliers is also extended to linear mixed effects analysis under Gaussian assumptions for the random effects and the random errors. The variance parameters are estimated using the residual maximum likelihood method. Likelihood ratio and score tests are also constructed for this extended model. Two distinct computing algorithms which constrain the variance parameter estimates to be positive are given. Properties of the resulting variance parameter estimates from each computing algorithm are also investigated. A variance shift outlier model for detecting single-case outliers in linear mixed effects analysis is extended to detect groups of outliers or subjects having outlying profiles with random intercepts and random slopes that are inconsistent with the corresponding model elements for the remaining subjects in the data set. The issue of influence on the fixed effects under a variance shift outlier model is also discussed.
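A small numerical sketch of the down-weighting interpretation described above: for a fixed variance shift ω on observation i, the fit is a weighted least squares with weight 1/(1+ω) on that observation, and as ω grows the estimates approach those from deleting the observation. This illustrates the mechanism only; it does not estimate ω by residual maximum likelihood as in the thesis, and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)
y[10] += 4.0                                   # plant one outlier at observation i = 10

def wls(X, y, w):
    """Weighted least squares estimate."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

i = 10
for omega in (0.0, 1.0, 10.0, 1e6):
    w = np.ones(n)
    w[i] = 1.0 / (1.0 + omega)                 # variance of obs i inflated to sigma^2 * (1 + omega)
    print(f"omega={omega:>9}: beta = {np.round(wls(X, y, w), 3)}")

mask = np.arange(n) != i
print("case deletion:     beta =", np.round(wls(X[mask], y[mask], np.ones(n - 1)), 3))
```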
APA, Harvard, Vancouver, ISO, and other styles
14

Talbert, Matthew Brandon. "A column based variance analysis approach to static reservoir model upgridding." Texas A&M University, 2008. http://hdl.handle.net/1969.1/86055.

Full text
Abstract:
The development of coarsened reservoir simulation models from high resolution geologic models is a critical step in a simulation study. The optimal coarsening sequence becomes particularly challenging in a fluvial channel environment where the channel sinuosity and orientation can result in pay/non-pay juxtaposition in many regions of the geologic model. The optimal coarsening sequence is also challenging in tight gas sandstones where sharp changes between sandstone and shale beds are predominant and maintaining the pay/non-pay distinction is difficult. Under such conditions, a uniform coarsening will result in mixing of pay and non-pay zones and will likely result in geologically unrealistic simulation models which create erroneous performance predictions. In particular, the upgridding algorithm must keep pay and non-pay zones distinct through a non-uniform coarsening of the geologic model. We present a coarsening algorithm to determine an optimal reservoir simulation grid by grouping fine scale geologic model cells into effective simulation cells. Our algorithm groups the layers in such a way that the heterogeneity measure of an appropriately defined static property is minimized within the layers and maximized between the layers. The optimal number of layers is then selected based on an analysis resulting in a minimum loss of heterogeneity. We demonstrate the validity of the optimal gridding by applying our method to a history-matched waterflood in a structurally complex and faulted offshore turbiditic oil reservoir. The field is located in a prolific hydrocarbon basin offshore South America. More than 10 years of production data from up to 8 producing wells are available for history matching. We demonstrate that any coarsening beyond the degree indicated by our analysis overly homogenizes the properties on the simulation grid and alters the reservoir response. An application to a tight gas sandstone developed by Schlumberger DCS is also used to verify our algorithm. The specific details of the tight gas reservoir are confidential to Schlumberger's client. Through the use of a reservoir section we demonstrate the effectiveness of our algorithm by visually comparing the reservoir properties to a Schlumberger fine scale model.
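An illustrative sketch of the kind of layer grouping the abstract describes: greedily merging adjacent fine-scale layers so that the increase in within-group variance of a chosen static property is minimal at each step, then looking for the point where further coarsening starts mixing distinct zones. The property values are synthetic and the heterogeneity measure is simplified relative to the column-based analysis in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
prop = np.concatenate([rng.normal(3.0, 0.2, 15),    # synthetic layer-averaged static property,
                       rng.normal(1.0, 0.2, 10),    # e.g. a pay indicator or porosity proxy
                       rng.normal(2.5, 0.2, 15)])

def within_var(groups, prop):
    """Total within-group sum of squares for a list of (start, stop) layer groups."""
    return sum(((prop[a:b] - prop[a:b].mean()) ** 2).sum() for a, b in groups)

groups = [(k, k + 1) for k in range(len(prop))]     # start with every fine layer separate
history = [(len(groups), within_var(groups, prop))]
while len(groups) > 1:
    # try merging each pair of adjacent groups; keep the merge that loses the least heterogeneity
    costs = []
    for j in range(len(groups) - 1):
        merged = groups[:j] + [(groups[j][0], groups[j + 1][1])] + groups[j + 2:]
        costs.append(within_var(merged, prop))
    j = int(np.argmin(costs))
    groups = groups[:j] + [(groups[j][0], groups[j + 1][1])] + groups[j + 2:]
    history.append((len(groups), costs[j]))

for nlayers, wss in history:
    print(nlayers, round(wss, 3))   # look for the "elbow" where coarsening starts mixing pay/non-pay
```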
APA, Harvard, Vancouver, ISO, and other styles
15

Lin, Hui-Ling. "Jackknife Empirical Likelihood for the Variance in the Linear Regression Model." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/math_theses/129.

Full text
Abstract:
The variance is a measure of spread about the center, and how to estimate it accurately has been an important topic in recent years. In this paper, we consider the linear regression model, which is the most popular model in practice. We use the jackknife empirical likelihood method to obtain an interval estimate of the variance in the regression model. The proposed jackknife empirical likelihood ratio converges to the standard chi-squared distribution. A simulation study is carried out to compare the jackknife empirical likelihood method and the standard method in terms of coverage probability and interval length for the confidence interval of the variance from linear regression models. The proposed jackknife empirical likelihood method has better performance. We also illustrate the proposed methods using two real data sets.
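A compact sketch of the jackknife empirical likelihood machinery for the error variance of a linear regression: jackknife pseudo-values of the usual variance estimator are formed, and the empirical likelihood ratio for their mean is evaluated at a hypothesized variance. This follows the generic JEL recipe rather than the exact formulation in the thesis, and the data are simulated.

```python
import numpy as np
from scipy.optimize import brentq

def sigma2_hat(X, y):
    """Residual variance estimate RSS / (n - p) from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid / (len(y) - X.shape[1])

def jackknife_pseudo_values(X, y):
    n = len(y)
    full = sigma2_hat(X, y)
    loo = np.array([sigma2_hat(np.delete(X, i, 0), np.delete(y, i)) for i in range(n)])
    return n * full - (n - 1) * loo

def jel_stat(z, theta):
    """-2 log empirical likelihood ratio for the mean of pseudo-values z at theta."""
    d = z - theta
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                      # theta lies outside the convex hull of the pseudo-values
    lo, hi = -1.0 / d.max() + 1e-8, -1.0 / d.min() - 1e-8
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log(1.0 + lam * d))

rng = np.random.default_rng(5)
n, true_var = 100, 0.25
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=np.sqrt(true_var), size=n)
z = jackknife_pseudo_values(X, y)
print("JEL statistic at the true variance:", round(jel_stat(z, true_var), 3))  # compare with chi2(1) 95% point 3.84
```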
APA, Harvard, Vancouver, ISO, and other styles
16

Rwexana, Kwaku. "Pricing a Bermudan option under the constant elasticity of variance model." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/27374.

Full text
Abstract:
This dissertation investigates the computational efficiency and accuracy of three methodologies for pricing a Bermudan option under the constant elasticity of variance (CEV) model. The pricing methods considered are the finite difference method, the least squares Monte Carlo method and the recursive marginal quantization (RMQ) method. Specific emphasis is placed on RMQ, as it is the most recent method. A plain vanilla European option is initially priced using the above-mentioned methods, and the results obtained are compared to the Black-Scholes option pricing formula to determine their viability as pricing methods. Once the methods have been validated for the European option, a Bermudan option is then priced using these methods. Instead of using the Black-Scholes option pricing formula for comparison of the prices obtained, a high-resolution finite difference scheme is used as a proxy in the absence of an analytical solution. One of the main advantages of the recursive marginal quantization (RMQ) method is that the continuation value of the option is computed at almost no additional computational cost; this, together with other contributing factors, leads to a computationally efficient and accurate method for pricing.
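A pared-down sketch of the least squares Monte Carlo approach mentioned above for a Bermudan put under a CEV diffusion dS = rS dt + σ S^β dW (Euler discretisation, polynomial regression for the continuation value). The parameter values are arbitrary and no variance reduction or RMQ machinery is included.

```python
import numpy as np

rng = np.random.default_rng(6)
S0, K, r, sigma, beta = 100.0, 100.0, 0.05, 1.5, 0.6     # CEV: instantaneous vol is sigma * S**(beta - 1)
T, n_ex, n_paths, sub = 1.0, 12, 50_000, 4               # 12 exercise dates, 4 Euler steps between them

dt = T / (n_ex * sub)
S = np.full(n_paths, S0)
paths = np.empty((n_ex + 1, n_paths))
paths[0] = S
for k in range(1, n_ex * sub + 1):
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    S = np.maximum(S + r * S * dt + sigma * S ** beta * dW, 1e-8)   # Euler step, floored near zero
    if k % sub == 0:
        paths[k // sub] = S

disc = np.exp(-r * T / n_ex)                 # discount factor between exercise dates
cash = np.maximum(K - paths[-1], 0.0)        # payoff if held to maturity
for t in range(n_ex - 1, 0, -1):
    cash *= disc
    St = paths[t]
    exercise = np.maximum(K - St, 0.0)
    itm = exercise > 0
    if itm.any():
        # regress discounted continuation values on a cubic polynomial basis (in-the-money paths only)
        A = np.vander(St[itm], 4)
        coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
        cont = A @ coef
        ex_now = exercise[itm] > cont
        cash[np.where(itm)[0][ex_now]] = exercise[itm][ex_now]

print("Bermudan put estimate:", round(np.mean(cash) * disc, 4))
```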
APA, Harvard, Vancouver, ISO, and other styles
17

Febrer, Pedro Maria Ulisses dos Santos Jalhay. "Residue sum formula for pricing options under the variance Gamma Model." Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20802.

Full text
Abstract:
Mestrado em Mathematical Finance
The main result of this dissertation is the proof of the triple sum series formula for the price of a European option driven by a Variance Gamma process. To this end, we present certain notions and properties of Lévy processes and multidimensional complex analysis, with emphasis on the application of residue calculus to the Mellin-Barnes integral. Subsequently, we construct the Mellin-Barnes integral representation, in C^3, for the price of an option and, supported by the aforementioned residue calculus, we deduce the triple sum series representation for the price of a European option and its corresponding Greeks. Finally, using the new formula, some values for a particular case study are computed and discussed.
The main result of this dissertation is the proof of the triple sum series formula for the price of a European call option driven by the Variance Gamma process. With this intention, we present some notions and properties of Lévy processes and multidimensional complex analysis, with emphasis on the application of residue calculus to the Mellin-Barnes integral. Subsequently, we construct the Mellin-Barnes integral representation, in C^3, for the price of the option and, buttressed with the aforementioned residue calculus, we deduce the triple sum series representation for the price of the European option and its corresponding Greeks. Finally, with the use of the new formula, some values for a particular case study are computed and discussed.
info:eu-repo/semantics/publishedVersion
APA, Harvard, Vancouver, ISO, and other styles
18

Al, Hajri Abdullah Said, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. "Logistics technology transfer model." Publisher: University of New South Wales. Mechanical & Manufacturing Engineering, 2008. http://handle.unsw.edu.au/1959.4/41469.

Full text
Abstract:
A consecutive number of studies on the adoption trend of logistics technology since 1988 revealed that logistics organizations are not at the frontier when it comes to adopting new technology, and this delayed adoption creates an information gap. With the advent of supply chain management and the strategic position of logistics, the need for accurate and timely information to support logistics executives became more important than ever before. Given the integrative nature of logistics technology, failure to implement the technology successfully could result in writing off major investments in developing and implementing the technology or even in abandoning the strategic initiatives underpinned by these innovations. Consequently, the need to employ effective strategies and models to cope with these uncertainties is rather crucial. This thesis addresses the aspect of uncertainty in implementation success by process and factor research models. The process research approach focuses on the sequence of events in the technology transfer process that occurs over time. It tells the story that explains the degree of association between these sequences and implementation success. Through content analysis, this research gathers, extracts, and categorizes process data of actual stories of logistics technology adoption and implementation in organizations that are published in the literature. The extracted event sequences are then analyzed using optimal matching from natural science and grouped using cluster analysis. Four patterns were revealed that organizations follow to transfer logistics technology, namely formal minimalist, mutual adaptation, development concerned, and organizational roles dispenser. Factors that contribute to successful implementation in each pattern were defined as the crucial and necessary events that characterized and differentiated each pattern from the others. The factor approach identifies the potential predictors of successful technology implementation and tests the empirical association between predictors and outcomes. This research develops a logistics technology success model. In developing the model, various streams of research were investigated including logistics, information systems, and organizational psychology. The model is tested using a questionnaire survey study. The data were collected from Australian companies which have recently adopted and implemented logistics technology. The results of a partial least squares structural equation modeling analysis provide strong support for the model constructs and valuable insights to logistics/supply chain managers. The last study reports a convergent triangulation study using multiple case studies of three Australian companies which have implemented logistics technology. A within-case and a cross-case analysis of the three cases provide cross-validation for the results of the other two studies. The results provided high predictive validity for the two models. Furthermore, the case study approach was highly beneficial in explaining and contextualizing the linkages of the factor-based model and in confirming the importance of the crucial events in the process-based model. The thesis concludes with a chapter on research and managerial implications, which is devoted to logistics/supply chain managers and researchers.
APA, Harvard, Vancouver, ISO, and other styles
19

Lee, Brendan Chee-Seng, Banking & Finance, Australian School of Business, UNSW. "Incorporating discontinuities in value-at-risk via the poisson jump diffusion model and variance gamma model." Awarded by: University of New South Wales, 2007. http://handle.unsw.edu.au/1959.4/37201.

Full text
Abstract:
We utilise several asset pricing models that allow for discontinuities in the returns and volatility time series in order to obtain estimates of Value-at-Risk (VaR). The first class of model that we use mixes a continuous diffusion process with discrete jumps at random points in time (Poisson Jump Diffusion Model). We also apply a purely discontinuous model that does not contain any continuous component at all in the underlying distribution (Variance Gamma Model). These models have been shown to have some success in capturing certain characteristics of return distributions, a few being leptokurtosis and skewness. Calibrating these models to the returns of an index of Australian stocks (All Ordinaries Index), we then use the resulting parameters to obtain daily estimates of VaR. In order to obtain the VaR estimates for the Poisson Jump Diffusion Model and the Variance Gamma Model, we introduce the use of an innovation from option pricing techniques, which concentrates on the more tractable characteristic functions of the models. Having obtained a series of VaR estimates, we then apply a variety of criteria to assess how each model performs and also evaluate these models against the traditional approaches to calculating VaR, such as that suggested by J.P. Morgan's RiskMetrics. Our results show that whilst the Poisson Jump Diffusion model proved the most accurate at the 95% VaR level, neither the Poisson Jump Diffusion nor the Variance Gamma model was dominant in the other performance criteria examined. Overall, no model was clearly superior according to all the performance criteria analysed, and it seems that the extra computational time required to calibrate the Poisson Jump Diffusion and Variance Gamma models for the purposes of VaR estimation does not provide sufficient reward over the simpler approach currently employed by RiskMetrics.
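For intuition, a short sketch that obtains a one-day VaR from a Variance Gamma model by simulating the gamma time change directly, rather than by the characteristic-function technique used in the thesis; the parameter values are placeholders, not estimates for the All Ordinaries Index.

```python
import numpy as np

rng = np.random.default_rng(7)

def vg_returns(n, theta, sigma, nu, t=1.0 / 252):
    """Simulate VG log-returns over horizon t as Brownian motion with drift
    evaluated at a gamma-distributed random time."""
    g = rng.gamma(shape=t / nu, scale=nu, size=n)          # gamma time change with E[g] = t
    return theta * g + sigma * np.sqrt(g) * rng.standard_normal(n)

theta, sigma, nu = -0.1, 0.18, 0.5        # hypothetical VG parameters (annualised)
r = vg_returns(200_000, theta, sigma, nu)
for level in (0.95, 0.99):
    var = -np.quantile(r, 1 - level)
    print(f"{level:.0%} one-day VaR: {var:.4f}")
```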
APA, Harvard, Vancouver, ISO, and other styles
20

Chauvet, Pierre. "Elements d'analyse structurale des fai-k a 1 dimension." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0070.

Full text
Abstract:
The structural information of an intrinsic random function of order k (IRF-k) defined on a regular one-dimensional grid is contained in its increments of order k+1. We seek to establish the relation between the covariances of increments (experimental) and the generalized covariance (the model), and to use it in both directions. An attempt to express the generalized covariance explicitly in terms of the generalized variogram was unsuccessful.
APA, Harvard, Vancouver, ISO, and other styles
21

Petkovic, Danijela. "Pricing variance swaps by using two methods : replication strategy and a stochastic volatility model." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2197.

Full text
Abstract:

In this paper we investigate the pricing of variance swap contracts. The literature is mostly dedicated to pricing by replication with a portfolio of vanilla options. In some papers the valuation with stochastic volatility models is discussed as well. Stochastic volatility is becoming more and more interesting to investors. Therefore we decided to perform the valuation with the Heston stochastic volatility model, as well as by using the replication strategy. The thesis was done at SunGard Front Arena, so the Front Arena software was used for testing the replication strategy. For calibration and testing of the Heston model we used MATLAB.
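A self-contained sketch of the replication idea: approximating the fair variance strike from a discrete strip of out-of-the-money options. Black-Scholes prices with a flat volatility are used as stand-in market quotes, so the recovered strike should be close to that flat variance; this is an illustration only, not the Front Arena implementation.

```python
import math

def bs_price(S, K, r, T, vol, call=True):
    """Black-Scholes price of a European option (used here only to fake market quotes)."""
    d1 = (math.log(S / K) + (r + 0.5 * vol * vol) * T) / (vol * math.sqrt(T))
    d2 = d1 - vol * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    if call:
        return S * N(d1) - K * math.exp(-r * T) * N(d2)
    return K * math.exp(-r * T) * N(-d2) - S * N(-d1)

S, r, T, vol = 100.0, 0.02, 0.5, 0.25
F = S * math.exp(r * T)                       # forward price
strikes = [40 + 2 * i for i in range(61)]     # strike grid 40..160
K0 = max(k for k in strikes if k <= F)        # first strike at or below the forward

total = 0.0
for i, K in enumerate(strikes):
    dK = (strikes[min(i + 1, len(strikes) - 1)] - strikes[max(i - 1, 0)]) / 2.0
    if K == K0:
        q = 0.5 * (bs_price(S, K, r, T, vol, True) + bs_price(S, K, r, T, vol, False))
    else:
        q = bs_price(S, K, r, T, vol, call=(K > K0))   # OTM option: put below K0, call above
    total += dK / K ** 2 * math.exp(r * T) * q

k_var = (2.0 / T) * total - (1.0 / T) * (F / K0 - 1.0) ** 2
print("replicated variance strike:", round(k_var, 5), " vs flat variance:", vol ** 2)
```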

APA, Harvard, Vancouver, ISO, and other styles
22

Cheng, Enoch. "Connections between no-arbitrage and the continuous time mean-variance framework." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1836268281&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Newton, Wesley E. "Data Analysis Using Experimental Design Model Factorial Analysis of Variance/Covariance (DMAOVC.BAS)." DigitalCommons@USU, 1985. https://digitalcommons.usu.edu/etd/6378.

Full text
Abstract:
DMAOVC.BAS is a computer program written in the compiler version of Microsoft BASIC which performs factorial analysis of variance/covariance with expected mean squares. The program accommodates factorial and other hierarchical experimental designs with balanced sets of data. The program is written for use on most modest-sized microprocessors on which the compiler is available. The program is parameter-file driven, where the parameter file consists of the response variable structure, the experimental design model expressed in a structure similar to that seen in most textbooks, information concerning the factors (i.e. fixed or random, and the number of levels), and the information necessary to perform covariance analysis. The results of the analysis are written to separate files in a format that can be used for reporting purposes and further computations if needed.
APA, Harvard, Vancouver, ISO, and other styles
24

Randell, David. "Bayes linear variance learning for mixed linear temporal models." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/3646/.

Full text
Abstract:
Modelling of complex corroding industrial systems is critical to effective inspection and maintenance for assurance of system integrity. Wall thickness and corrosion rate are modelled for multiple dependent corroding components, given observations of minimum wall thickness per component. At each inspection, partial observations of the system are considered. A Bayes Linear approach is adopted simplifying parameter estimation and avoiding often unrealistic distributional assumptions. Key system variances are modelled, making exchangeability assumptions to facilitate analysis for sparse inspection time-series. A utility based criterion is used to assess quality of inspection design and aid decision making. The model is applied to inspection data from pipework networks on a full-scale offshore platform.
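The Bayes linear adjustment underlying this approach needs only second-order prior specifications; a minimal sketch of the adjusted expectation and adjusted variance formulas with made-up numbers (a single wall-thickness quantity X adjusted by two noisy inspection readings D), not the pipework model of the thesis:

```python
import numpy as np

# Bayes linear adjustment: E_D(X) = E(X) + Cov(X,D) Var(D)^-1 (D - E(D))
#                          Var_D(X) = Var(X) - Cov(X,D) Var(D)^-1 Cov(D,X)
E_X, Var_X = 12.0, 4.0                      # prior mean and variance of wall thickness (mm, hypothetical)
E_D = np.array([12.0, 12.0])                # prior expectations of two inspection readings
Cov_XD = np.array([3.5, 3.0])               # prior covariances between X and the readings
Var_D = np.array([[4.5, 3.0],
                  [3.0, 5.0]])              # prior variance matrix of the readings (includes measurement error)
D = np.array([10.8, 11.2])                  # observed readings

w = np.linalg.solve(Var_D, D - E_D)
adj_E = E_X + Cov_XD @ w
adj_Var = Var_X - Cov_XD @ np.linalg.solve(Var_D, Cov_XD)
print("adjusted expectation:", round(adj_E, 3), " adjusted variance:", round(adj_Var, 3))
```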
APA, Harvard, Vancouver, ISO, and other styles
25

Lahti, Katharine Gage. "Estimation of Variance Components in Finite Polygenic Models and Complex Pedigrees." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/46496.

Full text
Abstract:
Various models of the genetic architecture of quantitative traits have been considered to provide the basis for increased genetic progress. The finite polygenic model (FPM), which contains a finite number of unlinked polygenic loci, is proposed as an improvement to the infinitesimal model (IM) for estimating both additive and dominance variance for a wide range of genetic models. Analysis under an additive five-loci FPM by either a deterministic Maximum Likelihood (DML) or a Markov chain Monte Carlo (MCMC) Bayesian method (BGS) produced accurate estimates of narrow-sense heritability (0.48 to 0.50 with true values of h² = 0.50) for phenotypic data from a five-generation, 6300-member pedigree simulated without selection under either an IM, FPMs containing five or forty loci with equal homozygote difference, or a FPM with eighteen loci of diminishing homozygote difference. However, reducing the analysis to a three- or four-loci FPM resulted in some biased estimates of heritability (0.53 to 0.55 across all genetic models for the 3-loci BGS analysis and 0.47 to 0.48 for the 40-loci FPM and the infinitesimal model for both the 3- and 4-loci DML analyses). The practice of cutting marriage and inbreeding loops utilized by the DML method expectedly produced overestimates of additive genetic variance (55.4 to 66.6 with a true value of σₐ² = 50.0 across all four genetic models) for the same pedigree structure under selection, while the BGS method was mostly unaffected by selection, except for slight overestimates of additive variance (55.0 and 58.8) when analyzing the 40-loci FPM and the infinitesimal model, the two models with the largest numbers of loci. Changes to the BGS method to accommodate estimation of dominance variance by sampling genotypes at individual loci are explored. Analyzing the additive data sets with the BGS method, assuming a five-loci FPM including both additive and dominance effects, resulted in accurate estimates of additive genetic variance (50.8 to 52.2 for true σₐ² = 50.0) and no significant dominance variance (3.7 to 3.9) being detected where none existed. The FPM has the potential to produce accurate estimates of dominance variance for large, complex pedigrees containing inbreeding, whereas the IM suffers severe limitations under inbreeding. Inclusion of dominance effects into the genetic evaluations of livestock, with the potential increase in accuracy of additive breeding values and added ability to exploit specific combining abilities, is the ultimate goal.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
26

Ahmed, Yasir. "A Model-Based Approach to Demodulation of Co-Channel MSK Signals." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/36265.

Full text
Abstract:
Co-channel interference limits the capacity of cellular systems, reduces the throughput of wireless local area networks, and is the major hurdle in deployment of high altitude communication platforms. It is also a problem for systems operating in unlicensed bands such as the 2.4 GHz ISM band and for narrowband systems that have been overlaid with spread spectrum systems. In this work we have developed model-based techniques for the demodulation of co-channel MSK signals. It is shown that MSK signals can be written in the linear model form, hence a minimum variance unbiased (MVU) estimator exists that satisfies the Cramer-Rao lower bound (CRLB) with equality. This framework allows us to derive the best estimators for a single-user and a two-user case. These concepts can also be extended to wideband signals and it is shown that the MVU estimator for Direct Sequence Spread Spectrum signals is in fact a decorrelator-based multiuser detector. However, this simple linear representation does not always exist for continuous phase modulations. Furthermore, these linear estimators require perfect channel state information and phase synchronization at the receiver, which is not always implemented in wireless communication systems. To overcome these shortcomings of the linear estimation techniques, we employed an autoregressive modeling approach. It is well known that the AR model can accurately represent peaks in the spectrum and therefore can be used as a general FM demodulator. It does not require knowledge of the exact signal model or phase synchronization at the receiver. Since it is a non-coherent reception technique, its performance is compared to that of the limiter discriminator. Simulation results have shown that model-based demodulators can give significant gains for certain phase and frequency offsets between the desired signal and an interferer.
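The linear-model framing referred to above means the symbol estimates come from the standard minimum variance unbiased solution for y = Hθ + n with white Gaussian noise; a generic sketch of that estimator and its Cramer-Rao bound (the observation matrix is a random placeholder, not the MSK-specific model built in the thesis):

```python
import numpy as np

rng = np.random.default_rng(8)
n_obs, n_param = 64, 4
H = rng.standard_normal((n_obs, n_param))               # placeholder observation matrix
theta_true = np.array([1.0, -0.5, 0.25, 2.0])
noise_sd = 0.1
y = H @ theta_true + noise_sd * rng.standard_normal(n_obs)   # white Gaussian noise

# MVU estimator for the linear model: theta_hat = (H^T H)^-1 H^T y,
# which attains the Cramer-Rao lower bound sigma^2 (H^T H)^-1 under Gaussian noise
theta_hat = np.linalg.solve(H.T @ H, H.T @ y)
crlb = noise_sd ** 2 * np.linalg.inv(H.T @ H)
print("estimate:", np.round(theta_hat, 3))
print("CRLB standard deviations:", np.round(np.sqrt(np.diag(crlb)), 4))
```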
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
27

Chen, Jinsong. "Variance analysis for kernel smoothing of a varying-coefficient model with longitudinal data /." Electronic version (PDF), 2003. http://dl.uncw.edu/etd/2003/chenj/jinsongchen.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

VELLOSO, MARIA LUIZA FERNANDES. "TIME SERIES MODEL WITH NEURAL COEFFICIENTS FOR NONLINEAR PROCESSES IN MEAN AND VARIANCE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1999. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8103@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This thesis presents a new class of nonlinear models inspired by the ARN model presented by Mellem, 1997. The models defined in this class are additive with varying coefficients modelled by neural networks, and both the conditional mean and the conditional variance are modelled explicitly. Four main parts can be identified in this work: a study of the most common models found in the time series literature; a study of neural networks, focusing on the backpropagation network; the definition of the proposed model and the methods used for parameter estimation; and the case studies. Additive models have been the preferred choice in nonlinear modelling, whether parametric or nonparametric, for the conditional mean or the conditional variance. Moreover, neither the idea of varying-coefficient models nor that of hybrid models, which bring together different paradigms, is new. For this reason, an overview was drawn of the nonlinear models most often found in the time series literature, focusing on those most closely related to the class of models proposed in this work. In the study of neural networks, besides presenting their basic concepts, the backpropagation network, the starting point for modelling the varying coefficients, was analysed. This choice was due to the observed predominance and constancy of the use of this network, or its variants, in time series studies and applications. It is demonstrated that the proposed models are universal approximators and can be used to model the conditional variance of a time series. Algorithms were developed, based on the least squares and maximum likelihood methods, for estimating the weights, by adapting the backpropagation algorithm to this new class of models. Although other optimization algorithms have been suggested, this one proved sufficiently appropriate for the cases tested in this work. The case studies were divided into two parts: tests with synthetic series and tests with real series, the latter commonly used as benchmarks by nonlinear time series analysts. To assist in identifying the model variables, nonparametric lag regressions were used. The results obtained were compared with other models and were superior or, at least, equivalent. Furthermore, it is shown that the proposed hybrid model encompasses several of these other models.
A class of nonlinear additive varying-coefficient models is introduced in this thesis, inspired by the ARN model presented by Mellem, 1997. The coefficients are modelled by neural networks, and both the conditional mean and the conditional variance are modelled explicitly. This work is divided in four major parts: a study of the most common models in the time series literature; a study of neural networks, focused on the backpropagation network; the presentation of the proposed models and the methods used for parameter estimation; and the case studies. Additive models have been the preferential choice in nonlinear modelling, and neither the idea of varying coefficients nor that of hybrid models is new. Hence, the models in the time series literature were analysed, essentially those closely related to the class of models proposed in this work. Since the predominance and constancy of the use of the backpropagation network, or its variants, in time series studies and applications was confirmed by this work, this network was analysed in more detail. This work demonstrates that the proposed models are universal approximators and can explicitly model conditional variance. Moreover, gradient calculations and algorithms for the weight estimation were developed based on the main estimation methods: least squares and maximum likelihood. Even though other gradient calculations and optimization algorithms have been suggested, this one was sufficiently adequate for the cases studied in this work. The case studies were divided in two parts: tests with synthetic series and tests with real series, the latter commonly used as benchmarks by nonlinear time series analysts. The results obtained were compared with other models and were superior or, at least, equivalent. Also, these results confirmed that the proposed hybrid model encompasses several of the other models.
APA, Harvard, Vancouver, ISO, and other styles
29

Jung, Jeesun. "High resolution linkage and association study of quantitative trait loci." Texas A&M University, 2004. http://hdl.handle.net/1969.1/2681.

Full text
Abstract:
As a large number of single nucleotide polymorphisms (SNPs) and microsatellite markers are available, high resolution mapping employing multiple markers or multiple-allele markers is an important step in identifying quantitative trait loci (QTL) of complex human diseases. For many complex diseases, quantitative phenotype values contain more information than dichotomous traits do. Much research has been done on conducting high resolution mapping using information from linkage and linkage disequilibrium. The most commonly employed approaches for mapping QTL are pedigree-based linkage analysis and population-based association analysis. As one of the methods dealing with multiple-allele markers, mixed models are developed to carry out family-based association studies with the information of transmitted and nontransmitted alleles from one parent to offspring. For multiple markers, variance component models are proposed to perform association study and linkage analysis simultaneously. Linkage analysis provides suggestive linkage based on a broad chromosome region and is robust to population admixture. On the other hand, allelic association due to linkage disequilibrium (LD) usually operates over very short genetic distances, but is affected by population stratification. Combining both approaches plays a synergistic role in overcoming their limitations and in increasing the efficiency and effectiveness of gene mapping.
APA, Harvard, Vancouver, ISO, and other styles
30

Yue, Rong-xian. "Applications of quasi-Monte Carlo methods in model-robust response surface designs." HKBU Institutional Repository, 1997. http://repository.hkbu.edu.hk/etd_ra/178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Schemann, Vera, Bjorn Stevens, Verena Grützun, and Johannes Quaas. "Scale dependency of total water variance and its implication for cloud parameterizations." American Meteorological Society, 2013. https://ul.qucosa.de/id/qucosa%3A13462.

Full text
Abstract:
The scale dependency of variance of total water mixing ratio is explored by analyzing data from a general circulation model (GCM), a numerical weather prediction model (NWP), and large-eddy simulations (LESs). For clarification, direct numerical simulation (DNS) data are additionally included, but the focus is placed on defining a general scaling behavior for scales ranging from global down to cloud resolving. For this, appropriate power-law exponents are determined by calculating and approximating the power density spectrum. The large-scale models (GCM and NWP) show a consistent scaling with a power-law exponent of approximately -2. For the high-resolution LESs, the slope of the power density spectrum shows evidence of being somewhat steeper, although the estimates are more uncertain. Also the transition between resolved and parameterized scales in a current GCM is investigated. Neither a spectral gap nor a strong scale break is found, but a weak scale break at high wavenumbers cannot be excluded. The evaluation of the parameterized total water variance of a state-of-the-art statistical scheme shows that the scale dependency is underestimated by this parameterization. This study and the discovered general scaling behavior emphasize the need for a development of scale-dependent parameterizations.
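A minimal sketch of the analysis step described here: estimating a power-law exponent by fitting a straight line to the log-log power density spectrum of a one-dimensional field. Synthetic data with a known spectral slope stand in for the model output.

```python
import numpy as np

rng = np.random.default_rng(9)
n, target_slope = 4096, -2.0

# build a synthetic periodic field whose spectral density behaves like k**target_slope
k = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (target_slope / 2.0)
phase = rng.uniform(0, 2 * np.pi, len(k))
field = np.fft.irfft(amp * np.exp(1j * phase), n)

# periodogram and least-squares fit of the log-log slope
spec = np.abs(np.fft.rfft(field)) ** 2
fit_range = (k > 0) & (k < 0.2)                      # stay away from k = 0 and the smallest scales
slope, intercept = np.polyfit(np.log(k[fit_range]), np.log(spec[fit_range]), 1)
print("estimated power-law exponent:", round(slope, 2))   # should be near the target of -2
```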
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Mou Chin. "An empirical test of variance gamma options pricing model on Hang Seng index options." HKBU Institutional Repository, 2000. http://repository.hkbu.edu.hk/etd_ra/263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hartman, Joel, and Osvald Wiklander. "Evaluating forecasts from the GARCH(1,1)-model for Swedish Equities." Thesis, Uppsala universitet, Statistiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-178120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Juutilainen, I. (Ilmari). "Modelling of conditional variance and uncertainty using industrial process data." Doctoral thesis, University of Oulu, 2006. http://urn.fi/urn:isbn:9514282620.

Full text
Abstract:
This thesis presents methods for modelling conditional variance and uncertainty of prediction at a query point on the basis of industrial process data. The introductory part of the thesis provides an extensive background of the examined methods and a summary of the results. The results are presented in detail in the original papers. The application presented in the thesis is modelling of the mean and variance of the mechanical properties of steel plates. Both the mean and variance of the mechanical properties depend on many process variables. A method for predicting the probability of rejection in a qualification test is presented and implemented in a tool developed for the planning of strength margins. The developed tool has been successfully utilised in the planning of mechanical properties in a steel plate mill. The methods for modelling the dependence of conditional variance on input variables are reviewed and their suitability for large industrial data sets is examined. In a comparative study, neural network modelling of the mean and dispersion narrowly performed the best. A method is presented for evaluating the uncertainty of regression-type prediction at a query point on the basis of predicted conditional variance, model variance and the effect of uncertainty about explanatory variables at early process stages. A method for measuring the uncertainty of prediction on the basis of the density of the data around the query point is proposed. The proposed distance measure is utilised in comparing the generalisation ability of models. The generalisation properties of the most important regression learning methods are studied and the results indicate that local methods and quadratic regression have a poor interpolation capability compared with multi-layer perceptron and Gaussian kernel support vector regression. The possibility of adaptively modelling a time-varying conditional variance function is disclosed. Two methods for adaptive modelling of the variance function are proposed. The background of the developed adaptive variance modelling methods is presented.
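The rejection-probability tool described above reduces, for a single one-sided strength requirement, to evaluating a normal tail probability at the predicted mean and total predictive variance. A small sketch under that simplification; the numbers are invented, not taken from the steel plate mill application.

```python
from scipy.stats import norm

spec_limit = 355.0        # required minimum yield strength (MPa), hypothetical
mean_pred = 372.0         # predicted mean for the planned chemistry / rolling practice
var_cond = 9.0 ** 2       # predicted conditional (product-to-product) variance
var_model = 4.0 ** 2      # model / input-uncertainty contribution to the prediction variance

sd_total = (var_cond + var_model) ** 0.5
p_reject = norm.cdf((spec_limit - mean_pred) / sd_total)
print(f"estimated probability of failing the qualification test: {p_reject:.3%}")
```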
APA, Harvard, Vancouver, ISO, and other styles
35

Schemann, Vera, Bjorn Stevens, Verena Grützun, and Johannes Quaas. "Scale dependency of total water variance and its implication for cloud parameterizations." Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-177479.

Full text
Abstract:
The scale dependency of the variance of total water mixing ratio is explored by analyzing data from a general circulation model (GCM), a numerical weather prediction model (NWP), and large-eddy simulations (LESs). For clarification, direct numerical simulation (DNS) data are additionally included, but the focus is placed on defining a general scaling behavior for scales ranging from global down to cloud resolving. For this, appropriate power-law exponents are determined by calculating and approximating the power density spectrum. The large-scale models (GCM and NWP) show a consistent scaling with a power-law exponent of approximately −2. For the high-resolution LESs, the slope of the power density spectrum shows evidence of being somewhat steeper, although the estimates are more uncertain. The transition between resolved and parameterized scales in a current GCM is also investigated. Neither a spectral gap nor a strong scale break is found, but a weak scale break at high wavenumbers cannot be excluded. The evaluation of the parameterized total water variance of a state-of-the-art statistical scheme shows that the scale dependency is underestimated by this parameterization. This study and the identified general scaling behavior emphasize the need for the development of scale-dependent parameterizations.
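The scaling analysis described here amounts to estimating a power density spectrum and fitting a power-law exponent in log-log space. The following sketch shows that workflow on a synthetic 1-D field (a random walk, whose spectrum falls off roughly as the −2 power of wavenumber); it stands in for, and does not reproduce, the model output analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D field with a roughly power-law spectrum, standing in for
# total water mixing ratio along a model transect.
n = 4096
white = rng.standard_normal(n)
field = np.cumsum(white)          # integration steepens the spectrum

# Power density spectrum via FFT.
spec = np.abs(np.fft.rfft(field - field.mean()))**2
k = np.fft.rfftfreq(n, d=1.0)

# Fit log(spec) ~ slope * log(k) over an interior wavenumber range.
mask = (k > 0) & (k < 0.25)
slope, _ = np.polyfit(np.log(k[mask]), np.log(spec[mask]), 1)
print(f"estimated power-law exponent: {slope:.2f}")   # near -2 for this field
```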
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Ze. "Estimating reliability under a generalizability theory model for writing scores in C-base." Diss., Columbia, Mo. : University of Missouri-Columbia, 2005. http://hdl.handle.net/10355/4292.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2005.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on January 10, 2007. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
37

Hirani, Shyam, and Jonas Wallström. "The Black-Litterman Asset Allocation Model : An Empirical Comparison to the Classical Mean-Variance Framework." Thesis, Linköpings universitet, Nationalekonomi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111570.

Full text
Abstract:
Within the scope of this thesis, the Black-Litterman Asset Allocation Model (as presented in He & Litterman, 1999) is compared to the classical mean-variance framework by simulating past performance of portfolios constructed by both models using identical input data. A quantitative investment strategy which favours stocks with high dividend yield rates is used to generate private views about the expected excess returns for a fraction of the stocks included in the sample. By comparing the ex-post risk-return characteristics of the portfolios and performing ample sensitivity analysis with respect to the numerical values assigned to the input variables, we evaluate the two models’ suitability for different categories of portfolio managers. As a neutral benchmark towards which both portfolios can be measured, a third market-capitalization-weighted portfolio is constructed from the same investment universe. The empirical data used for the purpose of our simulations consists of total return indices for 23 of the 30 stocks included in the OMXS30 index as of the 21st of February 2014 and stretches between January of 2003 and December of 2013.

The results of our simulations show that the Black-Litterman portfolio has delivered risk-adjusted return which is superior not only to that of its market-capitalization-weighted counterpart but also to that of the classical mean-variance portfolio. This result holds true for four out of five simulated strengths of the investment strategy under the assumption of zero transaction costs, a rebalancing frequency of 20 trading days, an estimated risk aversion parameter of 2.5 and a five per cent uncertainty associated with the CAPM prior. Sensitivity analysis performed by examining how the results are affected by variations in these input variables has also shown notable differences in the sensitivity of the results obtained from the two models. While the performance of the Black-Litterman portfolio does undergo material changes as the inputs are varied, these changes are nowhere near as profound as those exhibited by the classical mean-variance portfolio.

In the light of our empirical results, we also conclude that there are mainly two aspects which the portfolio manager ought to consider before committing to one model rather than the other. Firstly, the nature behind the views generated by the investment strategy needs to be taken into account. For the implementation of views which are of an α-driven character, the dynamics of the Black-Litterman model may not be as appropriate as for views which are believed to also influence the expected return on other securities. Secondly, the soundness of using market-capitalization weights as a benchmark towards which the final solution will gravitate needs to be assessed. Managers who strive to achieve performance which is fundamentally uncorrelated to that of the market index may want to either reconsider the benchmark weights or opt for an alternative model.
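For readers unfamiliar with the mechanics being compared, the sketch below shows a standard Black-Litterman posterior-return calculation combined with unconstrained mean-variance weights, using the same illustrative risk aversion (2.5) and prior uncertainty (5 per cent) mentioned in the abstract. The three-asset data and the single view are invented for illustration and are unrelated to the OMXS30 sample used in the thesis.

```python
import numpy as np

def black_litterman_weights(Sigma, w_mkt, P, Q, delta=2.5, tau=0.05):
    """Posterior expected returns (He & Litterman style) and the resulting
    unconstrained mean-variance weights. All inputs are illustrative."""
    pi = delta * Sigma @ w_mkt                        # implied equilibrium returns
    Omega = np.diag(np.diag(tau * P @ Sigma @ P.T))   # view uncertainty
    A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
    b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ Q
    mu_bl = np.linalg.solve(A, b)                     # posterior expected returns
    w = np.linalg.solve(delta * Sigma, mu_bl)         # unconstrained MV weights
    return mu_bl, w

# Three-asset toy example with one relative view: "asset 0 beats asset 1 by 2%".
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.03, 0.01],
                  [0.00, 0.01, 0.02]])
w_mkt = np.array([0.5, 0.3, 0.2])
P = np.array([[1.0, -1.0, 0.0]])
Q = np.array([0.02])
print(black_litterman_weights(Sigma, w_mkt, P, Q))
```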
APA, Harvard, Vancouver, ISO, and other styles
38

Griffiths, Kristi L. "Model selection and analysis tools in response surface modeling of the process mean and variance." Diss., Virginia Tech, 1995. http://hdl.handle.net/10919/38567.

Full text
Abstract:
Product improvement is a serious issue facing industry today. While response surface methods have been developed that address the process mean involved in improving the product, there has been little research on the process variability. Lack of quality in a product can be attributed to its inconsistency in performance, thereby highlighting the need for a methodology which addresses process variability. The key to working with the process variability comes in the handling of the two types of factors which make up the product design: control and noise factors. Control factors can be fixed in both the lab setting and the real application. However, while the noise factors can be fixed in the lab setting, they are assumed to be random in the real application. A response model can be created which models the response as a function of both the control and noise factors. This work introduces criteria for selecting an appropriate response model which can be used to create accurate models for both the process mean and process variability. These two models can then be used to identify settings of the control factors which minimize process variability while maintaining an acceptable process mean. If the response model is known, or at least well estimated, response surface methods can be extended to building various confidence regions related to the process variance. Among these are a confidence region on the location of minimum process variance and a confidence region on the ratio of the process variance to the error variance. The importance of research on process variability is clear, and this work offers practical methods for improving the design of a product.
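The dual response-surface idea described above can be illustrated compactly: fit one surface to per-run means and another to per-run (log) variances, then search for control settings that minimize the predicted variance while holding the predicted mean at a target. The sketch below uses simulated data and a single control factor; the target value and variance structure are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Replicated runs at a few control settings x (noise varies within each run).
x_runs = np.linspace(-1, 1, 9)
reps = np.array([10 + 3*x + rng.normal(0, 0.5 + 0.4*x**2, 20) for x in x_runs])

ybar = reps.mean(axis=1)                    # per-run sample means
logvar = np.log(reps.var(axis=1, ddof=1))   # per-run log sample variances

# Quadratic response surfaces for the mean and for the log-variance.
mean_coef = np.polyfit(x_runs, ybar, 2)
var_coef = np.polyfit(x_runs, logvar, 2)

# Minimize predicted variance subject to keeping the predicted mean at a target.
target = 11.0
objective = lambda x: np.polyval(var_coef, x[0])
cons = {"type": "eq", "fun": lambda x: np.polyval(mean_coef, x[0]) - target}
res = minimize(objective, x0=[0.0], bounds=[(-1, 1)], constraints=cons)
print("recommended control setting:", res.x[0])
```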
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Hongcheng, Li. "Multivariate Extensions of CUSUM Procedure." Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1185558637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ozol-Godfrey, Ayca. "Understanding Scaled Prediction Variance Using Graphical Methods for Model Robustness, Measurement Error and Generalized Linear Models for Response Surface Designs." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/30185.

Full text
Abstract:
Graphical summaries are becoming important tools for evaluating designs. The need to compare designs in terms of their prediction variance properties advanced this development. A recent graphical tool, the Fraction of Design Space (FDS) plot, is useful for calculating the fraction of the design space where the scaled prediction variance (SPV) is less than or equal to a given value. In this dissertation we adapt FDS plots to study three specific design problems: robustness to model assumptions, robustness to measurement error, and design properties for generalized linear models (GLMs). This dissertation presents a graphical method for examining design robustness related to the SPV values using FDS plots by comparing designs across a number of potential models in a pre-specified model space. Scaling the FDS curves by the G-optimal bounds of each model helps compare designs on the same model scale. FDS plots are also adapted for comparing designs under the GLM framework. Since parameter estimates need to be specified, robustness to parameter misspecification is incorporated into the plots. Binomial and Poisson examples are used to study several scenarios. The third section involves a special type of response surface design, mixture experiments, and deals with adapting FDS plots for two types of measurement error which can appear due to inaccurate measurements of the individual mixture component amounts. The last part of the dissertation covers mixture experiments for the GLM case and examines prediction properties of mixture designs using the adapted FDS plots.
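As a concrete illustration of the quantities behind an FDS plot, the sketch below computes the scaled prediction variance SPV(x) = N f(x)'(X'X)⁻¹f(x) for a small two-factor central composite design and evaluates the fraction of a cuboidal design space where it stays below a threshold. The design, the threshold, and the sampling scheme are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def expand_quadratic(x1, x2):
    """Model expansion f(x) for a full second-order model in two factors."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

# A face-centred central composite design in two factors (illustrative).
pts = np.array([[-1,-1],[1,-1],[-1,1],[1,1],
                [-1,0],[1,0],[0,-1],[0,1],[0,0],[0,0],[0,0]], float)
X = expand_quadratic(pts[:, 0], pts[:, 1])
N = len(pts)
XtX_inv = np.linalg.inv(X.T @ X)

# Sample the cuboidal design space; the FDS curve is the empirical CDF of SPV.
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, size=(10000, 2))
F = expand_quadratic(u[:, 0], u[:, 1])
spv = N * np.einsum("ij,jk,ik->i", F, XtX_inv, F)   # SPV at each sampled point
print("fraction of design space with SPV <= 8:", np.mean(spv <= 8.0))
```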
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
41

Shen, Xia. "Novel Statistical Methods in Quantitative Genetics : Modeling Genetic Variance for Quantitative Trait Loci Mapping and Genomic Evaluation." Doctoral thesis, Uppsala universitet, Beräknings- och systembiologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-170091.

Full text
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association study (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights in modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to be able to correctly adjust the bias in genetic variance component estimation and gain power in QTL mapping in terms of precision.  Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole genome models were shown to have good performance in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of mean, which validated the idea of variance-controlling genes.  The works in the thesis are accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
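One of the whole-genome ideas referred to above, treating all marker effects as random with a common variance, is equivalent to ridge regression on the marker matrix. The sketch below shows that idea on simulated data; it is a toy stand-in, not the hglm/bigRR implementation accompanying the thesis, and the shrinkage parameter is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated marker matrix (n individuals x p SNPs) and a sparse genetic signal.
n, p = 200, 2000
Z = rng.binomial(2, 0.3, size=(n, p)).astype(float)
beta_true = np.zeros(p)
beta_true[:10] = rng.normal(0, 0.5, 10)
y = Z @ beta_true + rng.normal(0, 1.0, n)

# SNP-BLUP style estimate: random marker effects with a common variance are
# equivalent to ridge regression with penalty lambda.
lam = p / 10.0                                   # illustrative shrinkage choice
Zc = Z - Z.mean(axis=0)
yc = y - y.mean()
beta_hat = np.linalg.solve(Zc.T @ Zc + lam * np.eye(p), Zc.T @ yc)

# Genomic prediction for the training individuals.
y_hat = y.mean() + Zc @ beta_hat
print("correlation between y and prediction:", np.corrcoef(y, y_hat)[0, 1])
```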
APA, Harvard, Vancouver, ISO, and other styles
42

Pasos, Jose E. "Mean-variance optimal portfolios for Lévy processes and a singular stochastic control model for capacity expansion." Thesis, London School of Economics and Political Science (University of London), 2018. http://etheses.lse.ac.uk/3771/.

Full text
Abstract:
In the first part of the thesis, the problem of determining the optimal capacity expansion strategy for a firm operating within a random economic environment is studied. The underlying market uncertainty is modelled by means of a general one-dimensional positive diffusion with possible absorption at 0. The objective is to maximise a performance criterion that involves a general running payoff function and associates a cost with each capacity increase up to the first hitting time of 0, at which time the firm defaults. The resulting optimisation problem takes the form of a degenerate two-dimensional singular stochastic control problem that is explicitly solved. The general results are further illustrated in the special cases in which market uncertainty is modelled by a Brownian motion with drift, a geometric Brownian motion or a square-root mean-reverting process such as the one in the CIR model. The second part of the thesis presents a study of mean-variance portfolio selection for asset prices modelled by Lévy processes under conic constraints on trading strategies. In this context, the combination of the price processes’ jumps and the trading constraints gives rise to a new qualitative behaviour of the optimal strategies. The existence and the behaviour of the optimal strategies are related to different no-arbitrage conditions that can be directly expressed in terms of the Lévy triplet. This allows for a fairly complete characterisation of mean-variance optimal portfolios under conic constraints.
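The dynamic Lévy-process setting of the thesis has no short static analogue, but the role of a conic constraint in mean-variance selection can be hinted at with a toy one-period problem: minimize variance for a target mean subject to no short sales. All numbers below are invented, and the sketch does not capture the jump dynamics or the no-arbitrage conditions discussed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Static toy version: minimize portfolio variance for a target mean return
# under the conic constraint w >= 0 (no short sales).
mu = np.array([0.05, 0.08, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
target = 0.09

res = minimize(lambda w: w @ Sigma @ w,
               x0=np.ones(3) / 3,
               bounds=[(0, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1},
                            {"type": "eq", "fun": lambda w: w @ mu - target}])
print("constrained minimum-variance weights:", np.round(res.x, 3))
```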
APA, Harvard, Vancouver, ISO, and other styles
43

Kitthamkersorn, Songyot. "Modeling Overlapping and Heterogeneous Perception Variance in Stochastic User Equilibrium Problem with Weibit Route Choice Model." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1970.

Full text
Abstract:
In this study, a new stochastic user equilibrium (SUE) model using Weibull random error terms is proposed as an alternative that overcomes the drawbacks of the multinomial logit (MNL) SUE model. A path-size weibit (PSW) model is developed to relax both the independence and the identical-distribution assumptions, while retaining an analytical closed-form solution. Specifically, this route choice model handles route overlapping through the path-size factor and captures the route-specific perception variance through the Weibull-distributed random error terms. Both constrained entropy-type and unconstrained equivalent mathematical programming (MP) formulations for the PSW-SUE are provided. In addition, model extensions to consider the demand elasticity and combined travel choice of the PSW-SUE model are also provided. Unlike the logit-based model, these model extensions incorporate the logarithmic expected perceived travel cost as the network level of service to determine the demand elasticity and travel choice. Qualitative properties of these minimization programs are given to establish equivalency and uniqueness conditions. Both path-based and link-based algorithms are developed for solving the proposed MP formulations. Numerical examples show that the proposed models can produce traffic flow patterns compatible with the multinomial probit (MNP) SUE model and that they can be implemented in a real-world transportation network.
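To make the route-choice mechanics concrete, the sketch below evaluates choice probabilities of the general path-size weibit form P_i ∝ PS_i · c_i^(−β), where the path-size factor PS_i discounts routes that share links. The network, the cost data, and the shape parameter β are invented, and the exact PSW specification (including any location/shift parameter) follows the thesis rather than this simplified form.

```python
import numpy as np

def path_size_weibit_probs(costs, link_lengths, incidence, beta=3.0):
    """Route-choice probabilities with a path-size factor and a weibit-style
    (power) cost kernel: P_i proportional to PS_i * c_i**(-beta). Sketch only."""
    costs = np.asarray(costs, float)
    L = incidence @ link_lengths                 # route lengths
    usage = incidence.sum(axis=0)                # number of routes using each link
    # Path-size factor: length-weighted share of a route's links, discounted
    # by how many routes share each link.
    PS = (incidence * (link_lengths / usage)).sum(axis=1) / L
    w = PS * costs**(-beta)
    return w / w.sum()

# Three routes over four links; rows of the incidence matrix are routes.
incidence = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 0, 0, 1]], float)
link_lengths = np.array([2.0, 3.0, 3.0, 6.0])
costs = np.array([5.0, 5.0, 6.0])
print(path_size_weibit_probs(costs, link_lengths, incidence))
```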
APA, Harvard, Vancouver, ISO, and other styles
44

Matoti, Lundi. "Building a statistical linear factor model and a global minimum variance portfolio using estimated covariance matrices." Master's thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/4909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Yu-Chen, and 陳佑賑. "A Passive Portfolio Model- Mean Variance and Semi-variance." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/33375689823555964298.

Full text
Abstract:
Master's thesis
Providence University
In-service Master's Program in Informatics
Academic year 102 (2013–2014)
Since research has indicated that actively managed portfolios fail to beat the market, index investing, such as index funds and ETFs, which aims to track market performance and requires little effort in stock-picking and market-timing, has become increasingly popular among investors. Index investing, which aims to track the benchmark index return, has been one of the most popular financial tools and research topics among academics and practitioners. However, there have been few studies on constructing an effective index portfolio. The problems with existing models are high monitoring expenses and downside risk. This study aims to address these two issues. We propose a new model that takes account of downside risk and the number of stocks. Large amounts of historical stock data are stored in a database and analyzed using our model. Stocks judged to be effective are selected and then weighted based on optimization theory. The results show that our proposed model provides a new way of constructing an index portfolio, with implications for both academics and practitioners.
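The downside-risk measure referred to in the abstract is the semi-variance, the mean squared shortfall below a benchmark. A minimal sketch with simulated returns is shown below; the portfolio weights and benchmark are arbitrary, and the thesis's full model, which also limits the number of stocks, is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def semivariance(returns, benchmark=0.0):
    """Mean squared shortfall below a benchmark (downside risk)."""
    shortfall = np.minimum(returns - benchmark, 0.0)
    return np.mean(shortfall**2)

# Toy monthly returns for a 3-stock portfolio tracking an index.
stock_returns = rng.normal(0.01, 0.05, size=(120, 3))
weights = np.array([0.5, 0.3, 0.2])
port = stock_returns @ weights

print("variance      :", port.var())
print("semi-variance :", semivariance(port, benchmark=port.mean()))
```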
APA, Harvard, Vancouver, ISO, and other styles
46

Shu-hui, Wu, and 吳淑惠. "Inference of Genetic Variance of QTL via Variance-Component Model." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/52712860660694582981.

Full text
Abstract:
Master's thesis
National Taiwan University
Institute of Epidemiology
Academic year 90 (2001–2002)
The variance-component model is considered in this thesis for a quantitative trait. Specifically, the model comprises a polymorphic single major gene, a polygenic effect, and a random environmental effect. It is worth noting that the asymptotic distribution of the MLE may not follow the standard normality assumptions, and the estimate itself sometimes falls in the negative region. In contrast to conventional maximum likelihood estimation, the Bayesian approach is used for statistical analysis. Inference based on the posterior distribution and posterior samples of the parameters of interest, particularly the additive variance of the single major gene, is derived via the Markov chain Monte Carlo method using WinBUGS 1.3. Simulations conducted to compare maximum likelihood and Bayesian estimation show that the Bayesian estimate, the posterior mode, is more accurate than the MLE. The posterior variance is also smaller than that of the MLE. The procedure for testing linkage using the Bayesian approach is outlined and discussed.
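The Bayesian machinery described above (posterior samples of variance components obtained by MCMC in WinBUGS 1.3) can be illustrated with a much simpler stand-in: a Gibbs sampler for a one-way random effects model with inverse-gamma priors. The data, priors, and model below are assumptions made for illustration and are far simpler than the major-gene plus polygenic model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated one-way random effects data: a simplified stand-in for the
# between-family and residual variance components.
m, n = 30, 8                                   # families, members per family
u_true = rng.normal(0, np.sqrt(2.0), m)
y = 10.0 + u_true[:, None] + rng.normal(0, 1.0, (m, n))

# Gibbs sampler with weakly informative inverse-gamma priors.
a, b = 0.01, 0.01
mu, s_u2, s_e2, u = y.mean(), 1.0, 1.0, np.zeros(m)
draws = []
for it in range(4000):
    prec = n / s_e2 + 1.0 / s_u2
    u = rng.normal((n / s_e2) * (y.mean(axis=1) - mu) / prec, np.sqrt(1.0 / prec))
    mu = rng.normal((y - u[:, None]).mean(), np.sqrt(s_e2 / (m * n)))
    s_u2 = 1.0 / rng.gamma(a + m / 2, 1.0 / (b + 0.5 * np.sum(u**2)))
    resid = y - mu - u[:, None]
    s_e2 = 1.0 / rng.gamma(a + m * n / 2, 1.0 / (b + 0.5 * np.sum(resid**2)))
    if it >= 1000:                              # discard burn-in draws
        draws.append((s_u2, s_e2))

post = np.array(draws)
print("posterior means (sigma_u^2, sigma_e^2):", post.mean(axis=0))
```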
APA, Harvard, Vancouver, ISO, and other styles
47

"Variance function estimation in nonparametric regression model." UNIVERSITY OF PENNSYLVANIA, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3328698.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Yun-Ru, and 黃韻如. "A Study of Performance Variances of Taiwanese Firms in Mainland China: Using Variance Component Analysis, Hierarchical Linear Model, and Analysis of Variance Method." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/d8x7kp.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Business Administration
Academic year 105 (2016–2017)
Based on industrial organization theory, the resource-based view of the firm, and institutional theory as the theoretical background, the purpose of this paper is to apply Variance Component Analysis (VCA), Hierarchical Linear Models (HLM), and Analysis of Variance (ANOVA) to identify the sources of performance variance among Taiwanese firms in Mainland China. Across all performance variables (ROS, ROA, and ROE), firm effects were found to explain 13.93 to 47.69 percent of the variance in the performance of Taiwanese firms in Mainland China. Industry effects accounted for 0 to 15.76 percent of performance differences, and corporate effects accounted for 7 percent of performance variance. Region effects ranged from 0 to 2.90 percent, and year effects were only 0.7 percent. The main finding of this study is that firm performance is determined by firms' own specific, idiosyncratic resources and competences. In addition, region effects clearly influence the performance of Taiwanese firms in Mainland China owing to differences in environment and resource endowments.
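A rough feel for how performance variance can be decomposed into year, industry, and firm components can be obtained from incremental R² of dummy-variable regressions on a firm-year panel, a simple ANOVA-style approximation to the decompositions named in the abstract (VCA and HLM proper are not reproduced here). The panel below is simulated and every effect size is invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

# Toy firm-year panel: ROA driven by industry, firm and year effects plus noise.
firms = np.repeat(np.arange(60), 5)                 # 60 firms x 5 years
years = np.tile(np.arange(5), 60)
industry = firms % 6                                # 6 industries
roa = (rng.normal(0, 1.0, 6)[industry]
       + rng.normal(0, 2.0, 60)[firms]
       + rng.normal(0, 0.3, 5)[years]
       + rng.normal(0, 1.0, firms.size))
df = pd.DataFrame({"roa": roa, "firm": firms, "industry": industry, "year": years})

def r2_of_dummies(df, cols):
    """R^2 of regressing ROA on dummy variables for the given effect(s)."""
    y = df["roa"].to_numpy()
    X = pd.get_dummies(df[cols].astype(str), drop_first=True).to_numpy(float)
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ beta) / np.var(y)

# Incremental variance shares: year first, then industry, then firm.
r_year = r2_of_dummies(df, ["year"])
r_year_ind = r2_of_dummies(df, ["year", "industry"])
r_full = r2_of_dummies(df, ["year", "industry", "firm"])
print("year share     :", round(r_year, 3))
print("industry share :", round(r_year_ind - r_year, 3))
print("firm share     :", round(r_full - r_year_ind, 3))
```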
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Hsin-I., and 李欣怡. "Conformance Proportions in a Normal Variance Components Model." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/83974003574935133756.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Institute of Agronomy
Academic year 100 (2011–2012)
Conformance proportion is defined as the proportion of a performance characteristic of interest that falls within a prespecified acceptance region. It can be used not only in the manufacturing industry but also in agricultural management or environmental monitoring, for instance, determining the best harvest timing for forage maize within an appropriate range of dry matter content, monitoring the sweetness of fruits to be above a lower limit, or requiring the concentration of a toxin to be below an upper limit in pesticide residue tests. It is often desired to estimate the probability that a random variable exceeds a specification limit or falls into a specification region, which is essentially the conformance proportion. In this dissertation, we propose the approach of a conformance proportion as an alternative to that of a tolerance interval for practical use. First, we discuss the connections between the two approaches. Then, two methods are developed for computing confidence limits for bilateral conformance proportions: one is based on the concept of a generalized pivotal quantity and the other on the modified large sample method. For unilateral conformance proportions, we also propose two methods for interval estimation: the first is again based on the concept of a generalized pivotal quantity, and the second on the Student's t distribution. A bootstrap calibration approach is adapted for both bilateral and unilateral conformance proportions so that the empirical coverage probability is sufficiently close to the nominal level. Furthermore, we consider unbalanced data scenarios. Some examples are given to illustrate the proposed methods. The performance of these approaches is evaluated in detailed statistical simulation studies, showing that they can be recommended for practical use.
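One of the unilateral methods mentioned above, the generalized pivotal quantity (GPQ) approach, can be sketched for a balanced one-way random effects model: build GPQs for the mean and the two variance components, map them through the normal CDF to obtain a GPQ for P(X ≤ U), and take a lower percentile. The data, the specification limit, and the Krishnamoorthy-Mathew-style GPQ construction below are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Balanced one-way random effects data (batches x replicates) with an upper
# specification limit U on, say, a residue concentration. All numbers invented.
m, n, U = 15, 4, 3.0
batch = rng.normal(0, 0.4, m)
y = 1.5 + batch[:, None] + rng.normal(0, 0.5, (m, n))

ybar = y.mean()
ssb = n * np.sum((y.mean(axis=1) - ybar) ** 2)              # between-batch SS
sse = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2)      # within-batch SS

# Generalized pivotal quantities for mu, sigma_a^2, sigma_e^2, and finally for
# the unilateral conformance proportion P(X <= U).
B = 20000
W1, W2 = rng.chisquare(m - 1, B), rng.chisquare(m * (n - 1), B)
Z = rng.standard_normal(B)
R_e = sse / W2
R_a = np.maximum((ssb / W1 - R_e) / n, 0.0)
R_mu = ybar - Z * np.sqrt(ssb / (W1 * m * n))
R_p = norm.cdf((U - R_mu) / np.sqrt(R_a + R_e))

msb, mse = ssb / (m - 1), sse / (m * (n - 1))
p_hat = norm.cdf((U - ybar) / np.sqrt(max((msb - mse) / n, 0.0) + mse))
print("point estimate of conformance proportion:", p_hat)
print("95% generalized lower confidence bound  :", np.quantile(R_p, 0.05))
```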
APA, Harvard, Vancouver, ISO, and other styles
50

賴珮萱. "Model-implied Jump Variance and Expected Market Return." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/xdvxre.

Full text
Abstract:
Master's thesis
Soochow University
Department of Financial Engineering and Actuarial Mathematics
Academic year 106 (2017–2018)
This study uses S&P 500 index returns from January 1996 to December 2016. The asset price process follows GARCH-Jump model and the jump component is pretended to be normal inverse Gaussian (NIG) distribution. We make use of the particle filter method to estimate the parameters of our model and then calculate the model-implied total variance (MTV), model-implied normal variance (MNV), and model-implied jump variance (MJV), respectively. We find that there is positive significantly prediction from four months to twelve months with MTV only, and MNV only, respectively. MJV positive predictions for 12 months were significant. After MNV and MJV jointly, MNV has positive significantly relation with future market returns from four months to twelve months. Our conclusion is that the ability to predict total variation occurs in the jump part.
APA, Harvard, Vancouver, ISO, and other styles