
Dissertations / Theses on the topic 'Shrinkage estimator (SE)'


Consult the top 50 dissertations / theses for your research on the topic 'shrinkage estimator (SE)'.


1

Hoque, Zahirul. "Improved estimation for linear models under different loss functions." University of Southern Queensland, Faculty of Sciences, 2004. http://eprints.usq.edu.au/archive/00001438/.

Abstract:
This thesis investigates improved estimators of the parameters of the linear regression models with normal errors, under sample and non-sample prior information about the value of the parameters. The estimators considered are the unrestricted estimator (UE), restricted estimator (RE), shrinkage restricted estimator (SRE), preliminary test estimator (PTE), shrinkage preliminary test estimator (SPTE), and shrinkage estimator (SE). The performances of the estimators are investigated with respect to bias, squared error and linex loss. For the analyses of the risk functions of the estimators, analytical, graphical and numerical procedures are adopted. In Part I the SRE, SPTE and SE of the slope and intercept parameters of the simple linear regression model are considered. The performances of the estimators are investigated with respect to their biases and mean square errors. The efficiencies of the SRE, SPTE and SE relative to the UE are obtained. It is revealed that under certain conditions, SE outperforms the other estimators considered in this thesis. In Part II in addition to the likelihood ratio (LR) test, the Wald (W) and Lagrange multiplier (LM) tests are used to define the SPTE and SE of the parameter vector of the multiple linear regression model with normal errors. Moreover, the modified and size-corrected W, LR and LM tests are used in the definition of SPTE. It is revealed that a great deal of conflict exists among the quadratic biases (QB) and quadratic risks (QR) of the SPTEs under the three original tests. The use of the modified tests reduces the conflict among the QRs, but not among the QBs. However, the use of the size-corrected tests in the definition of the SPTE almost eliminates the conflict among both QBs and QRs. It is also revealed that there is a great deal of conflict among the performances of the SEs when the three original tests are used as the preliminary test statistics. 
With respect to quadratic bias, the W test statistic based SE outperforms those based on the LR and LM test statistics. However, with respect to the QR criterion, the LM test statistic based SE outperforms the W and LR test statistics based SEs, under certain conditions. In Part III the performance of the PTE of the slope parameter of the simple linear regression model is investigated under the linex loss function. This is motivated by increasing criticism of the squared error loss function for its inappropriateness in many real-life situations where underestimation of a parameter is more serious than its overestimation, or vice versa. It is revealed that under the linex loss function the PTE outperforms the UE if the non-sample prior information about the value of the parameter is not too far from its true value. Like the linex loss function, the risk function of the PTE is also asymmetric. However, if the magnitude of the scale parameter of the linex loss is very small, the risk of the PTE is nearly symmetric.
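The preliminary test estimator studied in Part III can be sketched numerically. The following Python sketch is illustrative only (the function name, the large-sample 5% critical value 1.96, and the simulated data are our assumptions, not material from the thesis): it keeps the restricted value when a t-test fails to reject the non-sample prior information, and falls back to the unrestricted OLS slope otherwise.

```python
import numpy as np

def pte_slope(x, y, beta0, z_crit=1.96):
    """Preliminary test estimator (PTE) of a simple-regression slope.

    Returns the restricted value beta0 when the test fails to reject
    H0: beta = beta0, and the unrestricted OLS slope otherwise.
    z_crit ~ 1.96 approximates a 5% two-sided test for large n.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    xc = x - x.mean()
    beta_hat = xc @ (y - y.mean()) / (xc @ xc)      # unrestricted estimator (UE)
    resid = (y - y.mean()) - beta_hat * xc
    s2 = resid @ resid / (n - 2)                    # residual variance
    se = np.sqrt(s2 / (xc @ xc))                    # standard error of the slope
    t_stat = (beta_hat - beta0) / se
    return beta0 if abs(t_stat) < z_crit else beta_hat

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)
est_null_true = pte_slope(x, y, beta0=2.0)    # prior close to the truth
est_null_false = pte_slope(x, y, beta0=0.0)   # prior far from the truth
```

With `beta0` near the true slope the PTE typically returns the prior value; with `beta0` far away the test rejects and the estimator reverts to the OLS slope.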
2

Mahdi, Tahir Naweed. "Shrinkage estimation in prediction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq30515.pdf.

3

Kim, Tae-Hwan. "The shrinkage least absolute deviation estimator in large samples and its application to the Treynor-Black model /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9901433.

4

Chan, Tsz-hin, and 陳子軒. "Hybrid bootstrap procedures for shrinkage-type estimators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48521826.

Abstract:
In statistical inference, one is often interested in estimating the distribution of a root, which is a function of the data and the parameters only. Knowledge of the distribution of a root is useful for inference problems such as hypothesis testing and the construction of a confidence set. Shrinkage-type estimators have become popular in statistical inference due to their smaller mean squared errors. In this thesis, the performance of different bootstrap methods is investigated for estimating the distributions of roots which are constructed based on shrinkage estimators. Focus is on two shrinkage estimation problems, namely the James-Stein estimation and the model selection problem in simple linear regression. A hybrid bootstrap procedure and a bootstrap test method are proposed to estimate the distributions of the roots of interest. In the two shrinkage problems, the asymptotic errors of the traditional n-out-of-n bootstrap, m-out-of-n bootstrap and the proposed methods are derived under a moving parameter framework. The problem of the lack of uniform consistency of the n-out-of-n and the m-out-of-n bootstraps is exposed. It is shown that the proposed methods have better overall performance, in the sense that they yield improved convergence rates over almost the whole range of possible values of the underlying parameters. Simulation studies are carried out to illustrate the theoretical findings.
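Two of the ingredients above, the James-Stein estimator and the m-out-of-n bootstrap, can be sketched as follows. This is a minimal Python illustration under simplifying assumptions (known unit error variance, shrinkage toward the origin); the roots and the moving-parameter analysis of the thesis are beyond this sketch.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    # positive-part James-Stein estimate of a p-dimensional mean (p >= 3),
    # shrinking the single observation x toward the origin
    p = x.size
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / float(x @ x))
    return factor * x

def m_out_of_n_bootstrap(data, stat, m, B=500, seed=0):
    # resample m <= n observations with replacement, B times, and return
    # the B bootstrap replicates of the statistic
    rng = np.random.default_rng(seed)
    n = len(data)
    return np.array([stat(data[rng.integers(0, n, size=m)]) for _ in range(B)])

reps = m_out_of_n_bootstrap(np.arange(100.0), np.mean, m=30)
```

Observations far from the origin are shrunk only slightly, while observations near the origin are shrunk all the way to zero by the positive-part rule.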
5

Remenyi, Norbert. "Contributions to Bayesian wavelet shrinkage." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45898.

Abstract:
This thesis provides contributions to research in Bayesian modeling and shrinkage in the wavelet domain. Wavelets are a powerful tool to describe phenomena rapidly changing in time, and wavelet-based modeling has become a standard technique in many areas of statistics, and more broadly, in sciences and engineering. Bayesian modeling and estimation in the wavelet domain have found useful applications in nonparametric regression, image denoising, and many other areas. In this thesis, we build on the existing techniques and propose new methods for applications in nonparametric regression, image denoising, and partially linear models. The thesis consists of an overview chapter and four main topics. In Chapter 1, we provide an overview of recent developments and the current status of Bayesian wavelet shrinkage research. The chapter contains an extensive literature review consisting of almost 100 references. The main focus of the overview chapter is on nonparametric regression, where the observations come from an unknown function contaminated with Gaussian noise. We present many methods which employ model-based and adaptive shrinkage of the wavelet coefficients through Bayes rules. These include new developments such as dependence models, complex wavelets, and Markov chain Monte Carlo (MCMC) strategies. Some applications of Bayesian wavelet shrinkage, such as curve classification, are discussed. In Chapter 2, we propose the Gibbs Sampling Wavelet Smoother (GSWS), an adaptive wavelet denoising methodology. We use the traditional mixture prior on the wavelet coefficients, but also formulate a fully Bayesian hierarchical model in the wavelet domain accounting for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution to the Bayes estimator does not exist, the procedure is computational, in which the posterior mean is computed via MCMC simulations.
We show how to efficiently develop a Gibbs sampling algorithm for the proposed model. The developed procedure is fully Bayesian, is adaptive to the underlying signal, and provides good denoising performance compared to state-of-the-art methods. Application of the method is illustrated on a real data set arising from the analysis of metabolic pathways, where an iterative shrinkage procedure is developed to preserve the mass balance of the metabolites in the system. We also show how the methodology can be extended to complex wavelet bases. In Chapter 3, we propose a wavelet-based denoising methodology based on a Bayesian hierarchical model using a double Weibull prior. The interesting feature is that in contrast to the mixture priors traditionally used by some state-of-the-art methods, the wavelet coefficients are modeled by a single density. Two estimators are developed, one based on the posterior mean and the other based on the larger posterior mode, and we show how to calculate these estimators efficiently. The methodology provides good denoising performance, comparable even to state-of-the-art methods that use a mixture prior and an empirical Bayes setting of hyperparameters; this is demonstrated by simulations on standard test functions. An application to a real-world data set is also considered. In Chapter 4, we propose a wavelet shrinkage method based on a neighborhood of wavelet coefficients, which includes two neighboring coefficients and a parental coefficient. The methodology is called Lambda-neighborhood wavelet shrinkage, motivated by the shape of the considered neighborhood. We propose a Bayesian hierarchical model using a contaminated exponential prior on the total mean energy in the Lambda-neighborhood. The hyperparameters in the model are estimated by the empirical Bayes method, and the posterior mean, median, and Bayes factor are obtained and used in the estimation of the total mean energy.
Shrinkage of the neighboring coefficients is based on the ratio of the estimated and observed energy. The proposed methodology is comparable and often superior to several established wavelet denoising methods that utilize neighboring information, which is demonstrated by extensive simulations. An application to a real-world data set from inductance plethysmography is considered, and an extension to image denoising is discussed. In Chapter 5, we propose a wavelet-based methodology for estimation and variable selection in partially linear models. The inference is conducted in the wavelet domain, which provides a sparse and localized decomposition appropriate for nonparametric components with various degrees of smoothness. A hierarchical Bayes model is formulated on the parameters of this representation, where the estimation and variable selection is performed by a Gibbs sampling procedure. For both the parametric and nonparametric part of the model we are using point-mass-at-zero contamination priors with a double exponential spread distribution. In this sense we extend the model of Chapter 2 to partially linear models. Only a few papers in the area of partially linear wavelet models exist, and we show that the proposed methodology is often superior to the existing methods with respect to the task of estimating model parameters. Moreover, the method is able to perform Bayesian variable selection by a stochastic search for the parametric part of the model.
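As a point of reference for the wavelet-domain shrinkage discussed in this abstract, here is a minimal, non-Bayesian Python sketch: a one-level Haar transform with soft thresholding of the detail coefficients. The hierarchical priors, MCMC, and Lambda-neighborhood structure of the thesis are not represented; the threshold value is an arbitrary assumption.

```python
import numpy as np

def haar(signal):
    # one-level Haar transform: (approximation, detail) coefficients
    s = np.asarray(signal, float)
    return (s[0::2] + s[1::2]) / np.sqrt(2), (s[0::2] - s[1::2]) / np.sqrt(2)

def inv_haar(approx, detail):
    # invert the one-level Haar transform
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def denoise(signal, threshold):
    # shrink the detail coefficients toward zero (soft thresholding),
    # then reconstruct the signal
    a, d = haar(signal)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return inv_haar(a, d)

sig = np.array([1.0, 3.0, 5.0, 7.0])
clean = denoise(sig, threshold=100.0)   # extreme shrinkage averages each pair
```

With threshold zero the transform round-trips exactly; with a very large threshold every detail coefficient is zeroed and each sample pair collapses to its average.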
6

Vumbukani, Bokang C. "Comparison of ridge and other shrinkage estimation techniques." Master's thesis, University of Cape Town, 2006. http://hdl.handle.net/11427/4364.

Abstract:
Shrinkage estimation is an increasingly popular class of biased parameter estimation techniques, vital when the columns of the matrix of independent variables X exhibit dependencies or near dependencies. These dependencies often lead to serious problems in least squares estimation: inflated variances and mean squared errors of estimates, unstable coefficients, imprecision and improper estimation. Shrinkage methods allow for a little bias and at the same time introduce smaller mean squared errors and variances for the biased estimators, compared to those of unbiased estimators. However, shrinkage methods are based on the shrinkage factor, whose estimation depends on unknown values, often computed from the OLS solution. We argue that the instability of OLS estimates may have an adverse effect on the performance of shrinkage estimators. Hence a new method for estimating the shrinkage factors is proposed and applied to ridge and generalized ridge regression. We propose that the new shrinkage factors should be based on the principal components instead of the unstable OLS estimates.
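The ridge estimator that this thesis builds on shrinks the OLS solution by penalizing the coefficient norm. A minimal Python sketch follows; the simulated near-collinear data and the penalty value are illustrative assumptions, not material from the thesis.

```python
import numpy as np

def ridge(X, y, lam):
    # ridge estimate (X'X + lam * I)^{-1} X'y; lam = 0 recovers OLS
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=60)   # near-collinear columns
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=60)

b_ols = ridge(X, y, 0.0)      # unstable under near-collinearity
b_ridge = ridge(X, y, 5.0)    # shrunk, more stable
```

As the penalty `lam` grows, the coefficient norm shrinks monotonically toward zero, which is what stabilizes the estimates when the columns of X are nearly dependent.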
7

Mergel, Victor. "Divergence loss for shrinkage estimation, prediction and prior selection." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015678.

8

Ullah, Bashir. "Some contributions to positive part shrinkage estimation in various models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ30263.pdf.

9

Tang, Tianyuan, and 唐田园. "On uniform consistency of confidence regions based on shrinkage-type estimators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47152035.

10

Serra, Puertas Jorge. "Shrinkage corrections of sample linear estimators in the small sample size regime." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/404386.

Abstract:
We are living in a data deluge era where the dimensionality of the data gathered by inexpensive sensors is growing at a fast pace, whereas the availability of independent samples of the observed data is limited. Thus, classical statistical inference methods relying on the assumption that the sample size is large compared to the observation dimension suffer a severe performance degradation. Within this context, this thesis focuses on a popular problem in signal processing: the estimation of a parameter observed through a linear model. This inference is commonly based on a linear filtering of the data. For instance, in beamforming in array signal processing, a spatial filter steers the beampattern of the antenna array towards a direction to obtain the signal of interest (SOI). In signal processing the design of the optimal filters relies on the optimization of performance measures such as the Mean Square Error (MSE) and the Signal to Interference plus Noise Ratio (SINR). When the first two moments of the SOI are known, the optimization of the MSE leads to the Linear Minimum Mean Square Error (LMMSE) filter. When such statistical information is not available one may force a no-distortion constraint towards the SOI in the optimization of the MSE, which is equivalent to maximizing the SINR. This leads to the Minimum Variance Distortionless Response (MVDR) method. The LMMSE and MVDR are optimal, though unrealizable in general, since they depend on the inverse of the data correlation matrix, which is not known. The common approach to circumvent this problem is to replace it with the inverse of the sample correlation matrix (SCM), leading to the sample LMMSE and sample MVDR. This approach is optimal when the number of available statistical samples tends to infinity for a fixed observation dimension.
This large sample size scenario hardly holds in practice, and the sample methods undergo large performance degradations in the small sample size regime, which may be due to short stationarity constraints or to a system with a high observation dimension. The aim of this thesis is to propose corrections of sample estimators, such as the sample LMMSE and MVDR, to circumvent their performance degradation in the small sample size regime. To this end, two powerful tools are used: shrinkage estimation and random matrix theory (RMT). Shrinkage estimation introduces a structure on the filters that forces some corrections in small sample size situations. It improves sample-based estimators by optimizing a bias-variance tradeoff. As direct optimization of these shrinkage methods leads to unrealizable estimators, a consistent estimate of these optimal shrinkage estimators is obtained within the general asymptotics where both the observation dimension and the sample size tend to infinity, but at a fixed rate. That is, RMT is used to obtain consistent estimates within an asymptotic regime that deals naturally with the small sample size. This RMT approach does not require any assumptions about the distribution of the observations. The proposed filters deal directly with the estimation of the SOI, which leads to performance gains compared to related methods based on optimizing a metric related to the data covariance estimate or on rather ad hoc regularizations of the SCM. Compared to related methods which also treat directly the estimation of the SOI and which are based on a shrinkage of the SCM, the proposed filter structure is more general. It contemplates corrections of the inverse of the SCM and contains the related methods as particular cases. This leads to performance gains which are notable when there is a mismatch in the signature vector of the SOI.
This mismatch and the small sample size are the main sources of degradation of the sample LMMSE and MVDR. Thus, in the last part of this thesis, unlike the previously proposed filters and the related work, we propose a filter which treats both sources of degradation directly.
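A simple relative of the corrections discussed in this abstract is linear shrinkage of the SCM toward a scaled identity before forming the sample MVDR filter. The Python sketch below is illustrative only: the fixed shrinkage weight `rho`, the simulated snapshots, and the all-ones signature vector stand in for the optimized, RMT-based corrections of the thesis.

```python
import numpy as np

def shrunk_scm(X, rho):
    # linear shrinkage of the sample covariance toward a scaled identity:
    # (1 - rho) * SCM + rho * (tr(SCM)/p) * I, keeping it invertible even
    # when the number of snapshots n is smaller than the dimension p
    p, n = X.shape
    scm = X @ X.conj().T / n
    return (1 - rho) * scm + rho * (np.trace(scm).real / p) * np.eye(p)

def mvdr_weights(R, a):
    # sample MVDR filter: w = R^{-1} a / (a^H R^{-1} a), distortionless
    # toward the signature vector a of the SOI
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 4))      # p = 6 sensors, only n = 4 snapshots
R = shrunk_scm(X, rho=0.5)
a = np.ones(6)                   # assumed signature vector of the SOI
w = mvdr_weights(R, a)
```

Without the identity term the SCM would be singular here (n < p) and the sample MVDR filter undefined; the shrinkage makes the inverse well behaved while preserving the distortionless constraint.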
11

Vu, Anh Tuan Eric. "La modélisation du risque en immobilier d'entreprise." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090016.

Abstract:
The real estate asset class is tangible, heterogeneous and illiquid. It gives a specific investment universe that needs to be understood by investors, because the uncertainties created by this universe compose the risk of real estate investment. We suggest modelling risk as a sum of unit risk appraisals: on the one hand through portfolio analysis, and on the other hand through modelling of the office market risk premium. Our doctoral study proposes to adapt financial theorems to risk modelling in the main European office markets. The thesis is written in English and its body is articulated around three axes, each illustrated in the form of an article.
12

Ali, Abdunnabi M. Carleton University Dissertation Mathematics. "Interface of preliminary test approach and empirical Bayes approach to shrinkage estimation." Ottawa, 1990.

13

Hu, Qilin. "Autocorrelation-based factor analysis and nonlinear shrinkage estimation of large integrated covariance matrix." Thesis, London School of Economics and Political Science (University of London), 2016. http://etheses.lse.ac.uk/3551/.

Abstract:
The first part of my thesis deals with factor modeling for high-dimensional time series based on a dimension-reduction viewpoint. We allow the dimension of the time series N to be as large as, or even larger than, the sample size of the time series. The estimation of the factor loading matrix, and subsequently the factors, is done via an eigenanalysis on a non-negative definite matrix constructed from the autocorrelation matrix. The method is dubbed AFA. We give an explicit comparison of the convergence rates of AFA and PCA. We show that AFA possesses an advantage over PCA when dealing with small-dimension time series for both one-step and two-step estimations, while at large dimension the performance is still comparable. The second part of my thesis considers large integrated covariance matrix estimation. While the use of intra-day price data increases the sample size substantially for asset allocation, the usual realized covariance matrix still suffers from bias contributed by the extreme eigenvalues when the number of assets is large. We introduce a novel nonlinear shrinkage estimator for the integrated volatility matrix which shrinks the extreme eigenvalues of a realized covariance matrix back to an acceptable level, and enjoys a certain asymptotic efficiency at the same time, all in a high-dimensional setting where the number of assets can have the same order as the number of data points. Compared to a time-variation adjusted realized covariance estimator and the usual realized covariance matrix, our estimator demonstrates favorable performance in both simulations and a real data analysis in portfolio allocation. This includes a novel maximum exposure bound and an actual risk bound when our estimator is used in constructing the minimum variance portfolio.
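The eigenvalue-shrinkage idea in the second part can be caricatured in a few lines of Python. This crude sketch merely clips small eigenvalues of a rank-deficient sample covariance to a positive floor rather than applying the thesis's nonlinear shrinkage function; the floor fraction and the simulated data are arbitrary assumptions.

```python
import numpy as np

def shrink_eigenvalues(S, floor_frac=0.1):
    # raise each eigenvalue of S to at least floor_frac times the average
    # eigenvalue, then reassemble; a crude stand-in for nonlinear shrinkage
    vals, vecs = np.linalg.eigh(S)
    floor = floor_frac * vals.mean()
    return (vecs * np.maximum(vals, floor)) @ vecs.T

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 5))       # p = 8 assets, only n = 5 observations
S = X @ X.T / 5                   # rank-deficient sample covariance
S_shrunk = shrink_eigenvalues(S)  # now positive definite
```

When n < p the raw sample covariance has zero eigenvalues and cannot be inverted for portfolio weights; pulling the extreme eigenvalues back toward the bulk restores invertibility.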
14

KWON, YEIL. "NONPARAMETRIC EMPIRICAL BAYES SIMULTANEOUS ESTIMATION FOR MULTIPLE VARIANCES." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/495491.

Abstract:
Shrinkage estimation has proven to be very useful when dealing with a large number of mean parameters. In this dissertation, we consider the problem of simultaneous estimation of multiple variances and construct a shrinkage-type, non-parametric estimator. We take the non-parametric empirical Bayes approach by starting with an arbitrary prior on the variances. Under an invariant loss function, the resultant Bayes estimator relies on the marginal cumulative distribution function of the sample variances. Replacing the marginal cdf by the empirical distribution function, we obtain a Non-parametric Empirical Bayes estimator for multiple Variances (NEBV). The proposed estimator converges to the corresponding Bayes version uniformly over a large set. Consequently, the NEBV works well in a post-selection setting. We then apply the NEBV to construct confidence intervals for mean parameters in a post-selection setting. It is shown that the intervals based on the NEBV are shortest among all the intervals which guarantee a desired coverage probability. Through real data analysis, we have further shown that the NEBV based intervals lead to the smallest number of discordances, a desirable property when we are faced with the current "replication crisis".
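For contrast with the nonparametric NEBV, a parametric cousin (limma-style moderation of sample variances toward their common mean) can be sketched in Python. The prior degrees of freedom `d0`, the use of the grand mean as the prior value, and the data are illustrative assumptions, not the estimator of this dissertation.

```python
import numpy as np

def moderated_variances(s2, d, d0=4.0):
    # shrink each sample variance (d degrees of freedom) toward the common
    # mean, with the prior acting like d0 extra degrees of freedom
    s2 = np.asarray(s2, float)
    prior = s2.mean()
    return (d * s2 + d0 * prior) / (d + d0)

s2 = np.array([0.2, 0.9, 1.1, 4.0])       # noisy per-feature sample variances
mod = moderated_variances(s2, d=5.0)      # pulled toward their common mean
```

The moderated values keep the same overall level but are less spread out, which is exactly the stabilization that simultaneous variance estimation is after when each variance is based on few observations.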
15

Dickinson, Charles R. "Refinement and extension of shrinkage techniques in loss rate estimation of Marine Corps officer manpower models/." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23375.

Abstract:
This thesis is a continuation of previous work to apply modern multiparameter estimation techniques to the problem of estimating attrition rates for a large number of small inventory cells in manpower planning models used by the U.S. Marine Corps. The main advances involve the promising introduction of empirical Bayes (non-constant shrinkage) techniques, recognition of the non-symmetric nature of the errors with a response to this, and some insight into aggregation plans that should help provide greater stability for the estimation methods. In addition, the roles of some middle-level methodological choices are explored.
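The core shrinkage idea here, pulling the raw attrition rates of small inventory cells toward the overall rate, can be sketched as follows. The constant-strength form shown is simpler than the thesis's non-constant empirical Bayes shrinkage, and all names and numbers are illustrative assumptions.

```python
def shrink_rates(events, exposures, strength=20.0):
    # pull each cell's raw rate toward the overall rate; cells with small
    # exposure are pulled hardest, large cells keep most of their own data
    overall = sum(events) / sum(exposures)
    return [(e + strength * overall) / (n + strength)
            for e, n in zip(events, exposures)]

# a tiny cell (1 loss out of 4) and a large cell (50 losses out of 500)
rates = shrink_rates([1, 50], [4, 500])
```

The tiny cell's noisy raw rate of 0.25 is pulled strongly toward the overall rate, while the large cell's rate barely moves: the amount of shrinkage adapts to how much data each cell carries.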
16

Misiewicz, John M. "Extension of aggregation and shrinkage techniques used in the estimation of Marine Corps Officer attrition rates." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25936.

17

Dick, Artur [Verfasser], Udo [Akademischer Betreuer] Kamps, and Maria [Akademischer Betreuer] Kateri. "Shrinkage estimation in parametric families of distributions based on divergence measures / Artur Dick ; Udo Kamps, Maria Kateri." Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1181193192/34.

18

Som, Agniva. "Paradoxes and Priors in Bayesian Regression." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406197897.

19

Huang, Zhengyan. "Differential Abundance and Clustering Analysis with Empirical Bayes Shrinkage Estimation of Variance (DASEV) for Proteomics and Metabolomics Data." UKnowledge, 2019. https://uknowledge.uky.edu/epb_etds/24.

Abstract:
Mass spectrometry (MS) is widely used for proteomic and metabolomic profiling of biological samples. Data obtained by MS are often zero-inflated. Those zero values are called point mass values (PMVs). Zero values can be further grouped into biological PMVs and technical PMVs. The former type is caused by the absence of components and the latter type is caused by the detection limit. There is no simple solution to separate those two types of PMVs. Mixture models were developed to separate the two types of zeros and to perform differential abundance analysis. However, we notice that the mixture model can be unstable when the number of non-zero values is small. In this dissertation, we propose a new differential abundance (DA) analysis method, DASEV, which applies an empirical Bayes shrinkage estimation on variance. We hypothesized that this could make variance estimation more robust and thus enhance the accuracy of differential abundance analysis. Despite its instability, the mixture model offers promising strategies to separate the two types of PMVs. We adapted the mixture distribution proposed in the original mixture model design and assumed that the variances of all components follow a certain distribution. We proposed to calculate the estimated variances by borrowing information from other components via the assumed distribution of variance, and then to re-estimate the other parameters using the estimated variances. We obtained better and more stable estimates of variances, mean abundances, and proportions of biological PMVs, especially where the proportion of zeros is large. Therefore, the proposed method achieved obvious improvements in DA analysis. We also propose to extend the method to clustering analysis. To our knowledge, the commonly used clustering methods for MS omics data are only K-means and hierarchical clustering. Both methods have their own limitations when applied to zero-inflated data.
Model-based clustering methods are widely used by researchers for various data types including zero-inflated data. We propose to use the extension (DASEV.C) as a model-based cluster method. We compared the clustering performance of DASEV.C with K-means and Hierarchical. Under certain scenarios, the proposed method returned more accurate clusters than the standard methods. We also develop an R package dasev for the proposed methods presented in this dissertation. The major functions DASEV.DA and DASEV.C in this R package aim to implement the Bayes shrinkage estimation on variance then conduct the differential abundance and cluster analysis. We designed the functions to allow the flexibility for researchers to specify certain input options.
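The abstract does not give DASEV's estimator in closed form, but the core idea of borrowing information across components to stabilize variance estimates can be sketched as a simple moderated-variance calculation. Everything below (the simulated zero-inflated data, the prior weight `d0`, and the inverse-gamma-style pooling toward a common prior variance) is an illustrative assumption, not the DASEV method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated zero-inflated abundances: 200 features, 6 samples (hypothetical data).
n_features, n_samples = 200, 6
x = rng.lognormal(mean=5.0, sigma=1.0, size=(n_features, n_samples))
x[rng.random(x.shape) < 0.3] = 0.0  # inject zeros (biological + technical PMVs)

def shrink_variances(data, d0=4.0):
    """Moderate per-feature log-scale variances toward their pooled mean.

    A minimal empirical Bayes sketch: each feature's sample variance s2
    (computed on the non-zero, log-transformed values) is shrunk toward
    the prior variance s0 with prior weight d0, in the spirit of
    inverse-gamma (limma-style) moderation. Not the DASEV estimator itself.
    """
    s2, df = [], []
    for row in data:
        nz = np.log(row[row > 0])
        if nz.size >= 2:
            s2.append(nz.var(ddof=1))
            df.append(nz.size - 1)
        else:
            s2.append(np.nan)
            df.append(0)
    s2, df = np.array(s2), np.array(df)
    s0 = np.nanmean(s2)                                  # pooled prior variance
    post = (d0 * s0 + df * np.nan_to_num(s2)) / (d0 + df)
    return s2, post

raw, moderated = shrink_variances(x)
# The moderated variances are less dispersed than the raw per-feature ones.
print(np.nanstd(raw) > np.nanstd(moderated))
```

Features with fewer than two non-zero observations fall back entirely on the pooled prior, which is exactly the regime (a large proportion of zeros) where the abstract reports the biggest gains in stability.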
APA, Harvard, Vancouver, ISO, and other styles
20

Delaney, James Dillon. "Contributions to the Analysis of Experiments Using Empirical Bayes Techniques." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11590.

Full text
Abstract:
Specifying a prior distribution for the large number of parameters in the linear statistical model is a difficult step in the Bayesian approach to the design and analysis of experiments. Here we address this difficulty by proposing the use of functional priors and then by working out important details for three and higher level experiments. One of the challenges presented by higher level experiments is that a factor can be either qualitative or quantitative. We propose appropriate correlation functions and coding schemes so that the prior distribution is simple and the results easily interpretable. The prior incorporates well known experimental design principles such as effect hierarchy and effect heredity, which helps to automatically resolve the aliasing problems experienced in fractional designs. The second part of the thesis focuses on the analysis of optimization experiments. Not uncommon are designed experiments with their primary purpose being to determine optimal settings for all of the factors in some predetermined set. Here we distinguish between the two concepts of statistical significance and practical significance. We perform estimation via an empirical Bayes data analysis methodology that has been detailed in the recent literature. But then propose an alternative to the usual next step in determining optimal factor level settings. Instead of implementing variable or model selection techniques, we propose an objective function that assists in our goal of finding the ideal settings for all factors over which we experimented. The usefulness of the new approach is illustrated through the analysis of some real experiments as well as simulation.
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Wenjie. "Estimation and bias correction of the magnitude of an abrupt level shift." Thesis, Linköpings universitet, Statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84618.

Full text
Abstract:
Consider a time series model which is stationary apart from a single shift in mean. If the time of a level shift is known, the least squares estimator of the magnitude of this level shift is a minimum variance unbiased estimator. If the time is unknown, however, this estimator is biased. Here, we first carry out extensive simulation studies to determine the relationship between the bias and three parameters of our time series model: the true magnitude of the level shift, the true time point and the autocorrelation of adjacent observations. Thereafter, we use two generalized additive models to generalize the simulation results. Finally, we examine to what extent the bias can be reduced by multiplying the least squares estimator with a shrinkage factor. Our results showed that the bias of the estimated magnitude of the level shift can be reduced when the level shift does not occur close to the beginning or end of the time series. However, it was not possible to simultaneously reduce the bias for all possible time points and magnitudes of the level shift.
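The bias-correction idea in the abstract can be illustrated with a small simulation: estimate the shift magnitude by least squares at the best-fitting time point, then multiply by a shrinkage factor. The search procedure and the oracle shrinkage factor below are illustrative assumptions, not the thesis's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_shift(y):
    """Least squares estimate of a single level shift at an unknown time.

    For each candidate change point t, the LS estimate of the magnitude is
    mean(y[t:]) - mean(y[:t]); the t minimizing the residual sum of squares
    is chosen. A textbook sketch, not the thesis's exact procedure.
    """
    n = len(y)
    best_t, best_rss, best_delta = None, np.inf, 0.0
    for t in range(2, n - 1):
        m1, m2 = y[:t].mean(), y[t:].mean()
        rss = ((y[:t] - m1) ** 2).sum() + ((y[t:] - m2) ** 2).sum()
        if rss < best_rss:
            best_t, best_rss, best_delta = t, rss, m2 - m1
    return best_t, best_delta

# Simulate: true shift of +1.0 at t=60 in a white-noise series of length 120.
n, t0, delta = 120, 60, 1.0
estimates = []
for _ in range(500):
    y = rng.normal(0.0, 1.0, n)
    y[t0:] += delta
    estimates.append(estimate_shift(y)[1])
bias = np.mean(estimates) - delta   # searching over t biases the estimate

# Multiplying by a shrinkage factor c pulls the estimate back toward zero;
# here c is the oracle factor, purely to illustrate the mechanism.
c = delta / np.mean(estimates)
shrunk_bias = np.mean(c * np.array(estimates)) - delta
print(abs(shrunk_bias) < abs(bias))
```

In practice the shrinkage factor must be chosen without knowing the true magnitude, which is why the thesis finds that the bias cannot be reduced simultaneously for all time points and magnitudes.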
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Yafei. "Comparative Analysis of Ledoit's Covariance Matrix and Comparative Adjustment Liability Management (CALM) Model Within the Markowitz Framework." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/790.

Full text
Abstract:
Estimation of the covariance matrix of asset returns is a key component of portfolio optimization. Inherent in any estimation technique is the capacity to inaccurately reflect current market conditions. Typical of Markowitz portfolio optimization theory, which we use as the basis for our analysis, is the assumption that asset returns are stationary. This assumption inevitably causes an optimized portfolio to fail during a market crash, since estimates of covariance matrices of asset returns no longer reflect current conditions. We use the market crash of 2008 to exemplify this fact. A current industry-standard benchmark for estimation is the Ledoit covariance matrix, which attempts to adjust a portfolio's aggressiveness during varying market conditions. We test this technique against the CALM (Covariance Adjustment for Liability Management) method, which incorporates forward-looking signals for market volatility to reduce portfolio variance, and assess under certain criteria how well each model performs during the recent market crash. We show that CALM should be preferred over the sample covariance matrix and the Ledoit covariance matrix under some reasonable weight constraints.
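The Ledoit covariance estimator referenced above is a convex combination of the noisy sample covariance S and a structured target F, Sigma = delta * F + (1 - delta) * S. A minimal sketch, using a scaled-identity target and a fixed shrinkage intensity rather than the analytically optimal delta that Ledoit and Wolf derive:

```python
import numpy as np

rng = np.random.default_rng(2)

# 60 days of returns for 20 assets: p is large relative to n, so the
# sample covariance S is noisy and poorly conditioned.
n, p = 60, 20
returns = rng.normal(0.0, 0.02, size=(n, p))

S = np.cov(returns, rowvar=False)
F = np.eye(p) * np.trace(S) / p          # structured target: scaled identity

# Ledoit-style estimator: a convex combination of target and sample.
# Ledoit and Wolf derive the optimal delta analytically; the fixed
# delta = 0.3 here is purely illustrative.
delta = 0.3
Sigma = delta * F + (1.0 - delta) * S

# Shrinkage raises the smallest eigenvalues and lowers the largest ones,
# so the condition number of the estimate drops.
print(np.linalg.cond(Sigma) < np.linalg.cond(S))
```

Because F is a multiple of the identity, every eigenvalue of Sigma is delta * mean(eig) + (1 - delta) * eig, which guarantees the improvement in conditioning that makes the shrunk matrix safe to invert inside a Markowitz optimizer.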
APA, Harvard, Vancouver, ISO, and other styles
23

Jin, Shaobo. "Essays on Estimation Methods for Factor Models and Structural Equation Models." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247292.

Full text
Abstract:
This thesis, which consists of four papers, is concerned with estimation methods in factor analysis and structural equation models; new estimation methods are proposed and investigated. In Paper I an approximation of penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It naturally factorizes a covariance matrix or a correlation matrix, and it is applicable to both orthogonal and oblique structures. Paper II, a simulation study, investigates the properties of approximated penalized ML for an orthogonal factor model. Different combinations of penalty terms and tuning-parameter selection methods are examined, and differences between factorizing a covariance matrix and factorizing a correlation matrix are explored. It is shown that approximated penalized ML frequently improves on the traditional estimation-rotation procedure. Paper III focuses on pseudo ML for multi-group data: data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo ML produces consistent estimators of the factor loadings and is numerically easier than multi-group ML. However, normal theory cannot be used to estimate the standard errors, so a sandwich-type estimator of the standard errors is derived. Paper IV examines properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study. PIV is compared with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in the correctly specified confirmatory factor analysis model, and accurate estimates of loadings and coefficient matrices in the correctly specified structural equation model.
If the model is misspecified, the robustness of PIV depends on model complexity, the underlying distribution, and the choice of instrumental variables.
APA, Harvard, Vancouver, ISO, and other styles
24

Kastner, Gregor. "Sparse Bayesian Time-Varying Covariance Estimation in Many Dimensions." WU Vienna University of Economics and Business, 2016. http://epub.wu.ac.at/5172/1/resreport129.pdf.

Full text
Abstract:
Dynamic covariance estimation for multivariate time series suffers from the curse of dimensionality. This renders parsimonious estimation methods essential for conducting reliable statistical inference. In this paper, the issue is addressed by modeling the underlying co-volatility dynamics of a time series vector through a lower dimensional collection of latent time-varying stochastic factors. Furthermore, we apply a Normal-Gamma prior to the elements of the factor loadings matrix. This hierarchical shrinkage prior effectively pulls the factor loadings of unimportant factors towards zero, thereby increasing parsimony even more. We apply the model to simulated data as well as daily log-returns of 300 S&P 500 stocks and demonstrate the effectiveness of the shrinkage prior in obtaining sparse loadings matrices and more precise correlation estimates. Moreover, we investigate predictive performance and discuss different choices for the number of latent factors. In addition to being a stand-alone tool, the algorithm is designed to act as a "plug and play" extension for other MCMC samplers; it is implemented in the R package factorstochvol. (author's abstract)
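The Normal-Gamma shrinkage prior places a Gamma mixing distribution on the prior variance of each factor loading; a small Gamma shape parameter concentrates mass near zero while the heavy tail leaves genuinely large loadings relatively unshrunk. A sketch under one common parametrization (the exact hyperparameter convention of this paper and of factorstochvol may differ):

```python
import numpy as np

rng = np.random.default_rng(4)

# Normal-Gamma hierarchical prior on a loading beta:
#   psi ~ Gamma(shape=a, scale=2*lambda2/a),  beta | psi ~ N(0, psi).
# The scale is chosen so E[psi] = 2*lambda2 regardless of a, isolating
# the effect of the shape parameter a on the amount of shrinkage.
def normal_gamma_draws(a, lambda2, size, rng):
    psi = rng.gamma(shape=a, scale=2.0 * lambda2 / a, size=size)
    return rng.normal(0.0, np.sqrt(psi))

heavy = normal_gamma_draws(a=0.1, lambda2=1.0, size=100_000, rng=rng)
mild = normal_gamma_draws(a=2.0, lambda2=1.0, size=100_000, rng=rng)

# Both priors share the same marginal variance (2 * lambda2), but the
# small-a prior puts far more mass very close to zero.
print(np.mean(np.abs(heavy) < 0.05) > np.mean(np.abs(mild) < 0.05))
```

This spike-near-zero behaviour is what pulls the loadings of unimportant factors toward zero in the posterior while leaving informative loadings largely intact.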
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
25

Hösthagen, Anders. "Thermal Crack Risk Estimation and Material Properties of Young Concrete." Licentiate thesis, Luleå tekniska universitet, Byggkonstruktion och brand, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65495.

Full text
Abstract:
This thesis presents how to establish a theoretical model to predict the risk of thermal cracking in young concrete cast on the ground or on an arbitrary construction. The crack risk in young concrete is determined in two steps: 1) calculation of the temperature distribution within the newly cast concrete and the adjacent structure; 2) calculation of the stresses caused by thermal changes and by moisture changes (due to self-desiccation, when drying shrinkage is not included) in the analyzed structure. If the stress reaches the tensile strength of the young concrete, one or several cracks will occur. The main focus of this work is how to establish a theoretical model, denoted the Equivalent Restraint Method (ERM) model, and the correlation between ERM models and empirical experience. A key factor in these kinds of calculations is how to model the restraint from any adjacent construction part or adjoining restraining block. The building of a road tunnel and a railway tunnel was studied to collect temperature measurements and crack patterns from the first object, and temperature and thermal dilation measurements from the second object, respectively. These measurements and observed cracks were compared to the theoretical calculations to determine the level of agreement between empirical and theoretical results. Furthermore, this work describes how to obtain, at CompLAB (the test laboratory at Luleå University of Technology, LTU), a set of fully tested material parameters suitable for incorporation into the calculation software used. It is of great importance that the obtained material parameters describe the thermal and mechanical properties of the young concrete accurately, in order to perform reliable crack risk calculations. Therefore, analyses were performed showing how variation in the evaluated laboratory tests affects the obtained parameters and what effect this has on the calculated thermal stresses.
APA, Harvard, Vancouver, ISO, and other styles
26

Canteri, Laurence. "Transferts et déformations en surface au cours du séchage : estimation de la qualité du matériau bois." Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL077N.

Full text
Abstract:
The objective of this work is to synthesize the interpretation of the thermomechanical phenomena occurring during high-temperature convective drying of solid wood, taking inter- and intra-tree variability into account. A more objective notion of quality than the one previously used in the laboratory is introduced. A reliable device for local, simultaneous measurements of pressure and temperature is coupled with a non-destructive method for measuring surface deformations. During drying of softwood heartwood (Scots pine), the absence of a first drying phase is demonstrated, with the development of internal overpressures higher than those recorded for sapwood. These overpressures promote the evacuation of liquid or gaseous water in the absence of longitudinal migration, owing to the low permeability of heartwood. Drying sapwood under harsh conditions resembles drying heartwood under milder conditions. For beech, the repeated observation of sudden pressure drops, occurring all the earlier as the drying conditions become more severe, is attributed to the appearance of cracks. The study of the material's behaviour shows that the shrinkage phases coincide with the drying phases. The measured shrinkage corresponds to the shrinkage of the cross-section plus local effects due to heterogeneities, drying conditions, and anisotropy. Hypotheses on surface cracking are put forward according to whether the medium remains continuous or not; they are confirmed by comparison with a simple analytical model. Two criteria characterizing drying quality as a function of the aerothermal conditions and the wood species are proposed and then compared. The results confirm that high-temperature drying is acceptable for softwoods. The conclusions are more nuanced, however, for its application to beech. This fast drying process is currently not feasible for oak.
APA, Harvard, Vancouver, ISO, and other styles
27

Jonsson, Robin. "Optimal Linear Combinations of Portfolios Subject to Estimation Risk." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28524.

Full text
Abstract:
The combination of two or more portfolio rules is theoretically convex in return-risk space, which provides for a new class of portfolio rules that gives the mean-variance framework a purpose out-of-sample. The author investigates the performance loss from estimation risk between the unconstrained mean-variance portfolio and the out-of-sample global minimum variance portfolio. A new two-fund rule is developed within a specific class of combined rules, between the equally weighted portfolio and a mean-variance portfolio whose covariance matrix is estimated by linear shrinkage. The study shows that this rule performs well out-of-sample when covariance estimation error and bias are balanced, and that it performs at least as well as its peer group in this class of combined rules.
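The two-fund rule described above can be sketched as a convex combination of the 1/N portfolio and a mean-variance portfolio computed from a linearly shrunk covariance matrix. The shrinkage intensity and combination weight below are illustrative choices, not the thesis's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated monthly returns: 10 assets, 120 observations (hypothetical data).
p, n = 10, 120
returns = rng.normal(0.01, 0.04, size=(n, p))

mu = returns.mean(axis=0)
S = np.cov(returns, rowvar=False)

# Linear shrinkage of the covariance toward a scaled identity
# (a stand-in for the thesis's estimator; intensity 0.2 is illustrative).
delta = 0.2
Sigma = delta * np.eye(p) * np.trace(S) / p + (1 - delta) * S

# Mean-variance weights (normalized to sum to one) and the 1/N portfolio.
w_mv = np.linalg.solve(Sigma, mu)
w_mv = w_mv / w_mv.sum()
w_ew = np.full(p, 1.0 / p)

# Two-fund combination rule: any c in [0, 1] lies on the line between the
# two rules; c would be chosen to balance estimation error in w_mv against
# its in-sample optimality.
c = 0.5
w = c * w_ew + (1 - c) * w_mv
print(round(w.sum(), 10))  # → 1.0
```

The convexity noted in the abstract is visible here: since both component portfolios are fully invested, every combination along the line is fully invested as well, and its return-risk profile lies between the two endpoints.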
APA, Harvard, Vancouver, ISO, and other styles
28

Moradi, Rekabdarkolaee Hossein. "Dimension Reduction and Variable Selection." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4633.

Full text
Abstract:
High-dimensional data are becoming increasingly available as data collection technology advances. Over the last decade, significant developments have taken place in high-dimensional data analysis, driven primarily by a wide range of applications in fields such as genomics, signal processing, and environmental studies. Statistical techniques such as dimension reduction and variable selection play important roles in high-dimensional data analysis. Sufficient dimension reduction provides a way to find the reduced space of the original space without a parametric model, and in recent years it has been widely applied in scientific fields such as genetics, brain imaging analysis, econometrics, and environmental sciences. In this dissertation, we worked on three projects. The first combines local modal regression and Minimum Average Variance Estimation (MAVE) to introduce a robust dimension reduction approach. In addition to being robust to outliers or heavy-tailed distributions, our proposed method has the same convergence rate as the original MAVE. Furthermore, we combine local modal-based MAVE with an $L_1$ penalty to select informative covariates in a regression setting. This new approach can exhaustively estimate directions in the regression mean function and select informative covariates simultaneously, while being robust to possible outliers in the dependent variable. The second project develops sparse adaptive MAVE (saMAVE). SaMAVE has advantages over adaptive LASSO because it extends adaptive LASSO to multi-dimensional and nonlinear settings without any model assumption, and it has advantages over sparse inverse dimension reduction methods in that it does not require any particular probability distribution on \textbf{X}. In addition, saMAVE can exhaustively estimate the dimensions in the conditional mean function. The third project extends the envelope method to multivariate spatial data.
The envelope technique is a recent extension of the classical multivariate linear model, and the envelope estimator asymptotically has less variation compared to the maximum likelihood estimator (MLE). The current envelope methodology is for independent observations. While the assumption of independence is convenient, it does not address the additional complications associated with spatial correlation. This work extends the envelope method to cases where independence is an unreasonable assumption, specifically to multivariate data from spatially correlated processes. This novel approach provides estimates of the parameters of interest with smaller variance than the maximum likelihood estimator while still capturing the spatial structure in the data.
APA, Harvard, Vancouver, ISO, and other styles
29

WANG, QI. "Shrinkage amid Growth." Thesis, KTH, Arkitektur, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298829.

Full text
Abstract:
In general, the project I am going to present has two parts: the first part was finished in my first master year (2020.05) and further improved afterwards, while the second part is what I have focused on during the thesis. Together, they form a complete circle within the project. From a practical perspective, what caught my attention were the two opposite trends in urban development, growing and shrinking.      -The cities in growth nowadays, especially megacities, have faced huge challenges with housing shortages, transportation pressures, overpopulation issues, etc. for centuries. Why are these problems still unsolved?     -The cities in shrinkage are in a totally different scenario: there are no housing shortages, no transportation pressures, no overpopulation issues, but why are people still leaving? My answer to these questions is that some parts of our societal system, such as the economic system, have remained 'unevolved' for too long. In this sci-fi project, I imagined that the world after a catastrophe would be given a chance to reform……
APA, Harvard, Vancouver, ISO, and other styles
30

Zheng, Yong Chu. "Shrinkage behaviour of geopolymers /." Connect to thesis, 2010. http://repository.unimelb.edu.au/10187/7157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kanellopoulos, Antonios. "Autogenous shrinkage of CARDIFRCRTM." Thesis, Cardiff University, 2004. http://orca.cf.ac.uk/55928/.

Full text
Abstract:
Durability requirements have become a major issue in the design of concrete structures today. The hardening process plays a key role in the quality of the concrete, and autogenous shrinkage is a factor that may damage the concrete structure during hardening. The concept of autogenous shrinkage is relatively new; in conventional concrete with fairly high water-to-cement ratios these self-induced volume changes are relatively small and are therefore neglected. Self-desiccation and autogenous shrinkage are pronounced phenomena in low water-to-cement ratio concretes. The current study deals with the development of autogenous shrinkage strains in a new class of High Performance Fibre Reinforced Cementitious Composites (HPFRCCs) designated CARDIFRC, recently developed at Cardiff University. The scope of the study was to investigate how the self-induced shrinkage strains develop in the CARDIFRC matrix without fibres, and what effect the inclusion of a large amount of fibres has on the autogenous shrinkage. Both experimental and theoretical studies were undertaken as part of this investigation. Autogenous shrinkage strains were measured on large and small prisms of CARDIFRC under isothermal conditions. The experiments revealed a relatively large scatter in the measured values for the large beams with fibres, whereas beams of the same size without fibres gave consistent results. This large scatter was confirmed to result from the uneven distribution of fibres in the large prisms. Small prisms with and without fibres gave very consistent results, with autogenous shrinkage taking place up to 75 days. The autogenous shrinkage strains were modelled using a thermodynamic approach which follows the continuous change in the moisture content, pore volume, and stiffness of the mix with the degree of hydration.
The predictions of the model are in good agreement with the measured strains in all specimens, with and without fibres.
APA, Harvard, Vancouver, ISO, and other styles
32

Mokarem, David W. "Development of Concrete Shrinkage Performance Specifications." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/27605.

Full text
Abstract:
During its service life, concrete experiences volume changes. One of the types of deformation experienced by concrete is shrinkage. The four main types of shrinkage associated with concrete are plastic, autogenous, carbonation, and drying shrinkage. The volume changes in concrete due to shrinkage can lead to cracking of the concrete. In the case of reinforced concrete, the cracking may produce a direct path for chloride ions to reach the reinforcing steel. Once chloride ions reach the steel surface, the steel will corrode, which itself can cause cracking, spalling, and delamination of the concrete. The development of concrete shrinkage performance specifications that limit the amount of drying shrinkage for concrete mixtures typically used by the Virginia Department of Transportation (VDOT) was assessed. Five existing shrinkage prediction models were also assessed to determine the accuracy and precision of each model as it pertains to the VDOT mixtures used in this study. The five models assessed were the ACI 209 Code Model, Bazant B3 Model, CEB90 Code Model, Gardner/Lockman Model, and the Sakata Model. The percentage length change limits for the portland cement concrete mixtures were 0.0300 at 28 days and 0.0400 at 90 days. For the supplemental cementitious material mixtures, the percentage length change limits were 0.0400 at 28 days and 0.0500 at 90 days. The CEB90 Code model performed best for the portland cement concrete mixtures, while the Gardner/Lockman Model performed best for the supplemental cementitious material mixtures.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
33

Mokarem, David Wayne. "Development of Concrete Shrinkage Performance Specifications." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/27605.

Full text
Abstract:
During its service life, concrete experiences volume changes. One of the types of deformation experienced by concrete is shrinkage. The four main types of shrinkage associated with concrete are plastic, autogenous, carbonation, and drying shrinkage. The volume changes in concrete due to shrinkage can lead to cracking of the concrete. In the case of reinforced concrete, the cracking may produce a direct path for chloride ions to reach the reinforcing steel. Once chloride ions reach the steel surface, the steel will corrode, which itself can cause cracking, spalling, and delamination of the concrete. The development of concrete shrinkage performance specifications that limit the amount of drying shrinkage for concrete mixtures typically used by the Virginia Department of Transportation (VDOT) was assessed. Five existing shrinkage prediction models were also assessed to determine the accuracy and precision of each model as it pertains to the VDOT mixtures used in this study. The five models assessed were the ACI 209 Code Model, Bazant B3 Model, CEB90 Code Model, Gardner/Lockman Model, and the Sakata Model. The percentage length change limits for the portland cement concrete mixtures were 0.0300 at 28 days and 0.0400 at 90 days. For the supplemental cementitious material mixtures, the percentage length change limits were 0.0400 at 28 days and 0.0500 at 90 days. The CEB90 Code model performed best for the portland cement concrete mixtures, while the Gardner/Lockman Model performed best for the supplemental cementitious material mixtures.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
34

Alvaredo, Alejandra Mónica. "Drying shrinkage and crack formation /." Zürich, 1994. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Rajayogan, Vinod Engineering & Information Technology Australian Defence Force Academy UNSW. "Autogenous shrinkage in cementitious systems." Awarded by: University of New South Wales - Australian Defence Force Academy. Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/44250.

Full text
Abstract:
Autogenous shrinkage is of concern in high performance concrete mixtures, in which specific properties like strength and durability are enhanced. Factors such as low water-cement ratio, low porosity, and increased hydration kinetics, which are associated with high performance concrete mixtures, are also responsible for the development of autogenous shrinkage. After about two decades of research into autogenous shrinkage, uncertainties still exist regarding testing procedures, the effect of supplementary cementitious materials, and the modelling and prediction of autogenous shrinkage. The primary focus of this study is to understand the mechanisms postulated to cause autogenous shrinkage, namely chemical shrinkage and self-desiccation. In addition, this study considers properties like porosity and internal empty voids in the analysis of the causes of bulk volume deformations of cementitious paste systems with and without mineral admixtures. The study begins with an experimental investigation of chemical shrinkage in hydrating cementitious paste systems with the addition of fly ash, slag, and silica fume, using the test method recently accepted by the ASTM. This was followed by an experimental investigation of autogenous shrinkage in cementitious paste. The autogenous shrinkage in paste mixtures is studied from an early age (~1.5 hours after addition of water) for cementitious systems at a water-cementitious ratio of 0.32 (w/c 0.25 for limited mixture proportions). A non-contact measurement method using eddy current sensors was adopted. The hydration mechanism of the cementitious paste systems was then modelled using CEMHYD3D, a three-dimensional numerical modelling method successfully used to study, simulate, and present hydration developments in cementitious systems.
Properties like chemical shrinkage, degree of hydration, total porosity, and free water content, all obtained from the CEMHYD3D simulation, were cross-correlated with the experimental results in order to understand more comprehensively the mechanisms contributing to bulk volume change under sealed conditions. The experimental investigations were extended to study the development in concrete with and without mineral admixtures (i.e., silica fume, fly ash, and slag). Self-desiccation has been used extensively across the literature as the driver of the development of autogenous shrinkage, but as an alternative the author proposes using an internal drying factor in modelling autogenous shrinkage. The "internal drying factor" is defined as the ratio of the empty voids (due to chemical shrinkage) to the total porosity at any point during hydration. Independent of the mixture proportions, a linear trend was observed between the autogenous shrinkage strain and the increase in the internal drying factor. Thus the internal drying factor could be incorporated into semi-empirical models for predicting autogenous shrinkage. An increase in the compressive strength of matured concrete at 1 year had a strong correlation with the observed autogenous shrinkage strains irrespective of the cementitious system. It is believed this could be because of the increase in gel-space ratio, which is in turn linked to the degree of hydration and the porosity of the microstructure. The author has obtained strong evidence that the microstructural changes associated with high strength and durable concrete have a direct impact on the autogenous shrinkage of concrete. Hence, the author suggests that autogenous shrinkage should be investigated and allowable values be stipulated as a design criterion in structures that use high-strength, high-performance concrete.
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, Kuo-Chin. "Some topics in wavelet shrinkage /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9720543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Messin, Liam J. "Spatial control of microtubule shrinkage." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/94871/.

Full text
Abstract:
Microtubules are long linear polymers that switch randomly between periods of growth and shrinkage, in a process known as dynamic instability. In vivo, dynamic instability is regulated by microtubule associated proteins (MAPs). One class of MAPs, the kinesins, move actively along microtubules, and some regulate microtubule dynamics. Kinesin-8 regulates microtubule dynamics in a wide range of eukaryotic cells. Schizosaccharomyces pombe (S. pombe) provides a well-characterised system in which to study microtubule regulation by MAPs. During interphase, microtubules grow from the centre of the rod-shaped cell until their plus ends reach and pause at the cell end, before undergoing catastrophe and shrinking. Shrinkage occurs predominantly at cell ends, even as the cell grows longer. I have studied the cell biology of kinesin-8-dependent interphase microtubule dynamics in S. pombe. I have identified an interphase-specific binding partner of S. pombe kinesin-8 (Klp5/Klp6): Mcp1. Mcp1 was required for Klp5/Klp6 accumulation at interphase microtubule plus ends and for Klp5/Klp6-induced interphase microtubule shrinkage. Tea2 (a kinesin) and Tip1 (CLIP170 orthologue) were found to stabilise interphase microtubules. Cells lacking Tea2 or Tip1 displayed interphase microtubules which, after reaching cell ends, underwent shrinkage sooner than in wild-type cells. Cells lacking Klp5/Klp6 or Mcp1 showed the opposite phenotype: microtubules that dwelt at cell ends longer than in control cells before shrinking. Klp5/Klp6 accumulation on interphase microtubule plus ends steadily increased, peaking just before microtubule shrinkage. In contrast, Tea2 accumulated rapidly at newly nucleated interphase microtubule plus ends and was lost before microtubule shrinkage. I propose a model in which Tea2 prevents Klp5/Klp6-induced microtubule shrinkage until the interphase microtubule has grown to the cell end, where Tea2 is lost. At the cell end, Klp5/Klp6 then induce shrinkage.
APA, Harvard, Vancouver, ISO, and other styles
38

Wiedenbeck, Janice K. "Shrinkage characteristics of lodgepole pine." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53198.

Full text
Abstract:
This study examined shrinkage and related characteristics of two North American varieties of lodgepole pine, Pinus contorta var. latifolia and Pinus contorta var. murrayana, sampled at 10% of tree height. For var. murrayana, size was the only factor that had a significant effect on specific gravity; specific gravity decreased with increasing tree diameter. For var. latifolia, latitude was the only factor that had a significant effect on specific gravity; in general, specific gravity increased with increasing latitude. Conversely, specific gravity had a significant effect on radial shrinkage, the radial-to-tangential shrinkage ratio, and volumetric shrinkage for both varieties. The analysis of variance procedure indicated that the factors size, latitude, and elevation had no effect on the shrinkage of var. latifolia. However, for var. murrayana, radial shrinkage was affected by both tree size and latitude. Tangential shrinkage was also affected by latitude (increasing with increasing latitude). Linear correlations between radial shrinkage and growth rate, longitudinal shrinkage and distance from the pith (a negative relationship), and specific gravity and growth rate were highly significant for both varieties. For var. latifolia, the linear association between specific gravity and heartwood percentage was also significant. For var. murrayana, no difference in shrinkage or specific gravity was detected between the heartwood and sapwood. For var. latifolia, heartwood shrank less radially and had a lower specific gravity than sapwood. A comparison of the two varieties at their common latitudes indicated that murrayana trees have both higher specific gravity and higher shrinkage than do latifolia trees of the same size.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
39

Huber, Florian, and Thomas Zörner. "Threshold cointegration and adaptive shrinkage." WU Vienna University of Economics and Business, 2017. http://epub.wu.ac.at/5577/1/wp250.pdf.

Full text
Abstract:
This paper considers Bayesian estimation of the threshold vector error correction model (TVECM) in moderate to large dimensions. Using the lagged cointegrating error as a threshold variable gives rise to additional difficulties that are typically solved by relying on large-sample approximations. Relying on Markov chain Monte Carlo methods, we circumvent these issues by avoiding computationally prohibitive estimation strategies such as the grid search. Due to the proliferation of parameters, we use novel global-local shrinkage priors in the spirit of Griffin and Brown (2010). We illustrate the merits of our approach in an application to five exchange rates vis-à-vis the US dollar and assess whether a given currency is over- or undervalued. Moreover, we perform a forecasting comparison to investigate whether it pays off to adopt a non-linear modelling approach relative to a set of simpler benchmark models.
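For orientation, a two-regime TVECM of the kind estimated in such papers is commonly written as follows; the notation here is a generic textbook form, not necessarily the authors':

```latex
\Delta y_t =
\begin{cases}
\alpha_1 \beta' y_{t-1} + \sum_{j=1}^{p} \Gamma_{1j}\, \Delta y_{t-j} + \varepsilon_t, & w_{t-1} \le \gamma, \\
\alpha_2 \beta' y_{t-1} + \sum_{j=1}^{p} \Gamma_{2j}\, \Delta y_{t-j} + \varepsilon_t, & w_{t-1} > \gamma,
\end{cases}
\qquad w_{t-1} = \beta' y_{t-1}
```

where w_{t-1} is the lagged cointegrating error acting as the threshold variable and γ is the unknown threshold; global-local shrinkage priors are then placed on the regime-specific coefficient matrices.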
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
40

El-Baden, Ali Said Ahmed. "Shrinkage of high strength concrete." Thesis, Cardiff University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Sayahi, Faez. "Plastic Shrinkage Cracking in Concrete." Licentiate thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-133.

Full text
Abstract:
Early-age (up to 24 hours after casting) cracking may become problematic in any concrete structure. It can damage the aesthetics of the concrete member and decrease durability and serviceability by facilitating the ingress of harmful material. Moreover, these cracks may expand gradually during the member’s service life due to long-term shrinkage and/or loading. Early-age cracking has two driving forces: 1) plastic shrinkage, a physical phenomenon caused by rapid and excessive loss of moisture, mainly in the form of evaporation; and 2) chemical reactions between cement and water, which cause autogenous shrinkage. In this PhD project only the former is investigated. Rapid evaporation from the surface of fresh concrete causes negative pressure in the pore system. This pressure, known as capillary pressure, pulls the solid particles together and decreases the inter-particle distances, causing the whole concrete element to shrink. If this shrinkage is hindered in any way, cracking may commence. The phenomenon occurs shortly after casting, while the concrete is still in the plastic stage (up to around 8 hours after placement), and is mainly observed in concrete elements with a high surface-to-volume ratio such as slabs and pavements. Many parameters may affect the probability of plastic shrinkage cracking; among others, the effects of water/cement ratio, fines, admixtures, element geometry and ambient conditions (i.e. temperature, relative humidity, wind velocity and solar radiation) have been investigated in previous studies. In this PhD project at Luleå University of Technology (LTU), in addition to studying the influence of various parameters, an effort is made to reach a better and more comprehensive understanding of the mechanism governing cracking. Evaporation, capillary pressure development and hydration rate are particularly investigated in order to define their relationship.
This project started with an intensive literature study, which is summarized in Papers I and II. The main objective was then set, upon which a series of experiments was defined. The methods, materials, investigated parameters and results are presented in Papers III and IV. It has so far been observed that evaporation is not the only driving force behind plastic shrinkage cracking. Instead, a correlation between evaporation, the rate of capillary pressure development and the duration of the dormant period governs the phenomenon. According to the results, if rapid evaporation is accompanied by faster capillary pressure development in the pore system and slower hydration, the risk of plastic shrinkage cracking increases significantly.
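The capillary pressure mechanism sketched above is commonly described by the Young–Laplace relation (standard capillarity physics, not a formula quoted from this thesis):

```latex
p_{\mathrm{cap}} = \frac{2\,\sigma \cos\theta}{r}
```

where σ is the surface tension of the pore water, θ the contact angle and r the radius of the water menisci. As evaporation empties progressively finer pores, r decreases, so the negative pore pressure grows and pulls the solid particles together.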
APA, Harvard, Vancouver, ISO, and other styles
42

Kong, Fanhui. "Asymptotic distributions of Buckley-James estimator." Online access via UMI:, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
43

Schaap, Willem Egbert. "DTFE the Delaunay Tessellation Field Estimator /." [S.l. : [Groningen : s.n.] ; University Library Groningen] [Host], 2006. http://irs.ub.rug.nl/ppn/298831376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Yu, Tianyi, and Jenny Edèn. "Traffic Situation Estimator for Adaptive CruiseControl." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-30075.

Full text
Abstract:
The Traffic Situation Estimator is a method that analyses vehicle behaviour by monitoring and counting the surrounding traffic. This is done with image analysis that keeps track of several vehicles through consecutive frames, under good lighting conditions on a straight one-way road. The behaviour of the detected vehicles is then analysed in a state-machine-driven counter to estimate the traffic rhythm and determine whether the detected vehicles are approaching, getting away, have been overtaken or have overtaken the ego-vehicle. Depending on the result, the Traffic Situation Estimator suggests different reactions, helping the driver to follow the traffic rhythm, which improves safety and energy efficiency. If the user is not following the traffic rhythm, the application advises the user on how to adapt to it, by driving faster or slower, or optionally suggests overtaking vehicles ahead.
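As a caricature of the per-vehicle classification step described above, a decision over one track might look like the following sketch; the function name, the signed-offset input and the rules are illustrative assumptions, not the authors' implementation:

```python
def classify_track(offsets):
    """Classify another vehicle's behaviour from its longitudinal
    offsets relative to the ego-vehicle (positive = ahead of ego),
    sampled over consecutive frames. Illustrative sketch only."""
    first, last = offsets[0], offsets[-1]
    if first > 0 and last < 0:
        return "overtaken"      # was ahead, now behind: ego passed it
    if first < 0 and last > 0:
        return "overtook_ego"   # was behind, now ahead: it passed ego
    if abs(last) < abs(first):
        return "approaching"    # the gap is closing
    return "getting_away"       # the gap is widening
```

A state-machine counter could then tally these labels over many tracks to estimate the overall traffic rhythm.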
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Jianfeng. "Applications of a Robust Dispersion Estimator." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/776.

Full text
Abstract:
Robust estimators for multivariate location and dispersion should be √n consistent and highly outlier resistant, but estimators that have been shown to have these properties are impractical to compute. The RMVN estimator is an easily computed, outlier-resistant, √n consistent estimator of multivariate location and dispersion; the estimator is obtained by scaling the classical estimator applied to the “RMVN subset” that contains at least half of the cases. Several robust estimators will be presented, discussed and compared in detail. The applications of the RMVN estimator are numerous, and a simple method for performing robust principal component analysis (PCA), canonical correlation analysis (CCA) and factor analysis is to apply the classical method to the “RMVN subset.” Two approaches to robust PCA and CCA will be introduced and compared by simulation studies.
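To illustrate the general recipe — apply the classical estimator to a half-set of least-outlying cases, then proceed classically — here is a crude sketch that trims by Mahalanobis distance. It is a stand-in for, not an implementation of, the actual RMVN algorithm:

```python
import numpy as np

def trimmed_pca(X, keep=0.5):
    """Crude robust PCA sketch: keep the fraction of cases with the
    smallest Mahalanobis distances (a rough stand-in for the RMVN
    subset), then run classical PCA on that subset."""
    n, _ = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    centred = X - mu
    # squared Mahalanobis distance of each case
    d2 = np.einsum('ij,jk,ik->i', centred, np.linalg.inv(cov), centred)
    subset = X[np.argsort(d2)[: int(np.ceil(keep * n))]]
    # classical PCA on the trimmed subset
    vals, vecs = np.linalg.eigh(np.cov(subset, rowvar=False))
    order = np.argsort(vals)[::-1]          # descending eigenvalues
    return vals[order], vecs[:, order]
```

A single trimming pass is shown for brevity; RMVN-type estimators iterate concentration steps and rescale the resulting covariance, which this sketch omits.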
APA, Harvard, Vancouver, ISO, and other styles
46

Linehan, Kelly A. "A study of shrinkage crack patterns." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq25860.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Booker, David Richard. "Volumetric shrinkage of spiked crude oils." Thesis, University of Exeter, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Krebs-Brown, Axel Johannes. "Shrinkage and calibration in multiple regression." Thesis, University of Warwick, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Holt, Erika E. "Early age autogenous shrinkage of concrete /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/10113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Couceiro, José. "Wood shrinkage in CT-scanning analysis." Licentiate thesis, Luleå tekniska universitet, Träteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-129.

Full text
Abstract:
Computed tomography (CT) can be used to study wood-water interactions in different ways, such as by determining wood moisture content (MC). The determination of MC requires two CT images: one at the unknown moisture distribution and a second one at a known reference MC level, usually oven-dry MC. The two scans are then compared. If the goal is to determine the MC in local regions, when studying moisture gradients for instance, wood shrinkage must be taken into account during the data processing of the images. The anisotropy of wood shrinkage creates an obstacle, however, since the shrinkage is not uniform throughout the wood specimen. The objective of this thesis was to determine the shrinkage in wood in each pixel of a CT image. The work explores two different methods that estimate, from CT images, the local shrinkage of a wood specimen between two different MC levels. The first method determines shrinkage for each pixel using digital image correlation (DIC) and is embedded in a wider method to estimate the MC, which is the parameter verified against a reference. It involves several steps in different pieces of software, making it time-consuming and creating many sources of possible experimental error. The MC determined by this method showed a strong correlation with the gravimetrically measured MC, with an R² of 0.93, and the linear regression model predicted MC with an RMSE of 1.4 MC percentage points. The second method uses the displacement information generated from the spatial alignment of the CT images to compute wood shrinkage in the radial and tangential directions. All the required steps are combined into a single computer algorithm, which reduces the sources of error and facilitates the process. The RMSE between this method and the determination of shrinkage measured in the CT images using CAD showed acceptably small differences. Both methods have proved to be useful tools for dealing with shrinkage in different ways using CT images.
In one case MC was successfully estimated, with the shrinkage calculation being a necessary step in the process, and in the other case the radial and tangential shrinkages were successfully estimated for each pixel. Nevertheless, the difficulty in comparing the shrinkage coefficient calculated for local regions with a reference value suggests that more research must be carried out before reliable conclusions can be drawn.
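The two-scan MC idea can be sketched as a density ratio with a volumetric shrinkage correction; the formula below is our simplified reading (images assumed co-registered to the dry geometry), not the thesis' algorithm:

```python
import numpy as np

def moisture_content(rho_moist, rho_dry, vol_shrinkage):
    """Estimate per-pixel moisture content (water mass / dry mass)
    from two co-registered CT density images. Illustrative sketch;
    the formula and parameter names are our assumptions.

    rho_moist, rho_dry : CT densities (kg/m^3) at the unknown and
                         the oven-dry MC level
    vol_shrinkage      : local volumetric shrinkage from moist to
                         dry, as a fraction (V_dry = (1 - s) * V_moist)
    """
    rho_moist = np.asarray(rho_moist, dtype=float)
    rho_dry = np.asarray(rho_dry, dtype=float)
    s = np.asarray(vol_shrinkage, dtype=float)
    # moist mass per voxel: rho_moist * V_moist; the same material's
    # dry mass: rho_dry * V_dry = rho_dry * (1 - s) * V_moist
    return rho_moist / (rho_dry * (1.0 - s)) - 1.0
```

Without the (1 − s) correction, the shrinkage of the dry specimen would be misread as a local change in mass, which is exactly the obstacle both methods in the thesis address.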
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography