Dissertations / Theses on the topic 'Unbiased Estimation of Estimator Variance'
Kannappa, Sandeep Mavuduru. "Reduced Complexity Viterbi Decoders for SOQPSK Signals over Multipath Channels." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604300.
High data rate communication between airborne vehicles and ground stations over the bandwidth-constrained aeronautical telemetry channel has been enabled by the development of bandwidth-efficient Advanced Range Telemetry (ARTM) waveforms. This communication takes place over a multipath channel consisting of two components, a line-of-sight path and one or more ground-reflected paths, which result in frequency-selective fading. We concentrate on the ARTM SOQPSK-TG transmit waveform suite and decode information bits using the reduced complexity Viterbi algorithm. Two different methodologies are proposed to implement reduced complexity Viterbi decoders in multipath channels. The first method jointly equalizes the channel and decodes the information bits using the reduced complexity Viterbi algorithm, while the second applies a minimum mean square error equalizer before the Viterbi decoder. An extensive numerical study compares the performance of these methodologies, and we demonstrate the performance gain offered by our reduced complexity Viterbi decoders over the existing linear receiver. Both perfect and estimated channel state information are considered in the numerical study.
Du, Jichang. "Covariate-matched estimator of the error variance in nonparametric regression." Diss., online access via UMI, 2007.
Carlsson, Martin. "Variance Estimation of the Calibration Estimator with Measurement Errors in the Auxiliary Information." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-68928.
Cardoso, João Nuno Martins. "Robust mean variance." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/10706.
This empirical study's objective is to evaluate the impact of robust estimation on mean variance portfolios. This was accomplished by simulating the behavior of 15 S&P 500 stocks. The simulation includes two scenarios, one with normally distributed samples and another with contaminated non-normal samples, each with 200 resamples. The performance of maximum likelihood (classical) estimated portfolios and robustly estimated portfolios is compared, yielding the following conclusions: on normally distributed samples, robust portfolios are marginally less efficient than classical portfolios; on non-normal samples, however, robust portfolios perform much better than classical portfolios, and this performance gain is positively correlated with the level of contamination in the sample. In summary, assuming that financial returns are not normally distributed, robust estimators result in more stable mean variance portfolios.
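As a hedged illustration of the contrast this abstract describes (a generic sketch, not the thesis' code or data): on a contaminated sample, the maximum likelihood location estimate under normality, the sample mean, is dragged toward the outliers, while a robust estimate such as the median is barely affected. The return distributions and the contamination scheme below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
clean = rng.normal(0.001, 0.02, size=n)           # "normal" daily returns
outliers = rng.normal(-0.15, 0.05, size=n // 10)  # 10% gross contamination
contaminated = np.concatenate([clean[: n - n // 10], outliers])

for name, sample in [("normal", clean), ("contaminated", contaminated)]:
    mle = sample.mean()          # classical (maximum likelihood) location estimate
    robust = np.median(sample)   # a simple robust alternative
    print(f"{name}: mean={mle:.4f}  median={robust:.4f}")
```

On the contaminated sample the mean is pulled far below the true location 0.001, while the median stays close to it, which is the qualitative effect the thesis quantifies for full mean variance portfolios.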
Sadeghkhani, Abdolnasser. "Estimation d'une densité prédictive avec information additionnelle." Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/11238.
Abstract: In the context of Bayesian and decision theory, estimating the predictive density of a random variable is an important and challenging problem. Typically, in a parametric framework, there exists additional information that can be interpreted as constraints. This thesis deals with strategies and improvements that take this additional information into account in order to obtain predictive densities that are effective, and sometimes better performing, than others in the literature. The results apply to normal models with known or unknown variance. We describe Bayesian predictive densities for Kullback-Leibler, Hellinger, reverse Kullback-Leibler, and alpha-divergence losses, and establish links with skew-normal densities. We obtain dominance results using several techniques, including expansion of variance, dual loss functions in point estimation, restricted parameter space estimation, and Stein estimation. Finally, we obtain a general result for the Bayesian estimator of a ratio of two exponential family densities.
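For orientation, a standard background fact from Bayesian decision theory that this line of work builds on (general theory, not a result specific to the thesis): under Kullback-Leibler loss, the Bayes-optimal predictive density is the posterior predictive density,

```latex
% Bayes predictive density under Kullback-Leibler loss:
% integrate the sampling density against the posterior
\hat{q}(y \mid x) = \int_{\Theta} p(y \mid \theta)\, \pi(\theta \mid x)\, d\theta
```

The thesis studies how such densities behave, and can be improved, under the other losses listed and under constraints from additional information.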
Baba, Harra M'hammed. "Estimation de densités spectrales d'ordre élevé." Rouen, 1996. http://www.theses.fr/1996ROUES023.
Naftali, Eran, 1971. "First order bias and second order variance of the Maximum Likelihood Estimator with application to multivariate Gaussian data and time delay and Doppler shift estimation." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88334.
Harti, Mostafa. "Estimation robuste sous un modèle de contamination non symétrique et M-estimateur multidimensionnel." Nancy 1, 1986. http://www.theses.fr/1986NAN10063.
Krishnan, Rajet. "Problems in distributed signal processing in wireless sensor networks." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1351.
Teixeira, Marcos Vinícius. "Estudos sobre a implementação online de uma técnica de estimação de energia no calorímetro hadrônico do atlas em cenários de alta luminosidade." Universidade Federal de Juiz de Fora (UFJF), 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/4169.
This work studies techniques for online energy estimation in the ATLAS hadronic calorimeter (TileCal) at the LHC. During future periods of LHC operation, signals from adjacent collisions will be observed within the same window, producing signal superposition. In this environment, the energy reconstruction method COF (Constrained Optimal Filter) outperforms the algorithm currently implemented in the system. However, the COF method requires matrix inversion, which makes its online implementation infeasible. To avoid such inversions, this work presents iterative methods for implementing the COF that reduce to simple mathematical operations. Based on gradient descent, the results demonstrate that the algorithms are capable of estimating the amplitudes of the superimposed signals with efficiency similar to COF. In addition, a processing architecture for FPGA implementation is proposed. The analysis shows that the algorithms can be implemented in the new TileCal electronics while meeting its processing time requirements.
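A minimal numpy sketch of the idea this abstract describes: estimating superimposed pulse amplitudes by gradient descent on a least-squares cost instead of inverting the convolution matrix. The pulse shape, window sizes, amplitudes, and step size below are invented placeholders, not TileCal or COF values.

```python
import numpy as np

pulse = np.array([0.5, 1.0, 0.3])        # assumed (not the real TileCal) pulse shape
n_amps = 5                               # in-time plus adjacent-collision amplitudes
n_samples = n_amps + len(pulse) - 1
H = np.zeros((n_samples, n_amps))
for j in range(n_amps):                  # column j: the pulse delayed by j samples
    H[j:j + len(pulse), j] = pulse

true_a = np.array([0.0, 3.0, 0.0, 1.5, 0.0])  # one in-time and one pile-up amplitude
s = H @ true_a                                # noiseless superimposed signal

a = np.zeros(n_amps)
step = 1.0 / np.linalg.norm(H, 2) ** 2        # step below 2/L guarantees convergence
for _ in range(2000):
    a -= step * (H.T @ (H @ a - s))           # a <- a - step * grad of 0.5*||Ha - s||^2

print(np.round(a, 4))                         # approaches true_a without inverting H^T H
```

Each iteration uses only matrix-vector products and a vector update, which is the kind of simple arithmetic that maps naturally onto a sequential fixed-point FPGA pipeline.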
Lardin, Pauline. "Estimation de synchrones de consommation électrique par sondage et prise en compte d'information auxiliaire." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00842199.
Full textÖhman, Marie-Louise. "Aspects of analysis of small-sample right censored data using generalized Wilcoxon rank tests." Doctoral thesis, Umeå universitet, Statistiska institutionen, 1994. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-7313.
Xu, Fu-Min, and 許富閔. "Assessment of the minimum variance unbiased estimator for evaluation of average bioequivalence." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/79829770556256571089.
Full text國立成功大學
統計學系碩博士班
92
The research and development of an innovative drug product takes, on average, 10 to 12 years and US$800 million. It is therefore a costly, time-consuming, and highly risky endeavor. One way to reduce drug costs is to introduce generic drugs after the patent of the innovative drug expires. Currently, most regulatory agencies in the world only require evidence of average bioequivalence from in vivo bioequivalence trials to approve generic drugs, and the maximum likelihood estimator (MLE) is recommended for its evaluation. We instead consider the minimum variance unbiased estimator (MVUE) to assess average bioequivalence. We performed a simulation study to compare the bias, mean square error, empirical size, empirical power, and 90% confidence coefficient of the MLE and the MVUE over various combinations of parameters and sample sizes under 2×2 crossover and higher-order crossover designs.
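The MLE-versus-MVUE trade-off this thesis studies can be illustrated in the simplest textbook case (a generic sketch, not the bioequivalence setting): for a normal variance, the MLE divides the sum of squares by n and is biased low, while the unbiased estimator divides by n - 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, true_var = 10, 20000, 4.0
samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))

mle = samples.var(axis=1, ddof=0)    # maximum likelihood estimator, divides by n
mvue = samples.var(axis=1, ddof=1)   # unbiased estimator, divides by n - 1

print(f"true variance: {true_var}")
print(f"mean of MLE : {mle.mean():.3f}   (theoretical bias: {-true_var / n:.2f})")
print(f"mean of MVUE: {mvue.mean():.3f}")
```

Averaged over many replicates, the MLE undershoots the true variance by roughly sigma^2/n while the unbiased estimator is centered on it; the thesis runs the analogous comparison for the bioequivalence criteria under crossover designs.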
Gatarayiha, Jean Philippe. "Méthode de simulation avec les variables antithétiques." Thèse, 2007. http://hdl.handle.net/1866/9923.
In this master's thesis, we consider simulation methods based on antithetic variates for estimating the integral of f(x) over the interval (0,1], where f is monotonic, non-monotonic, or otherwise difficult to integrate. The main idea consists in subdividing (0,1] into m sections, each of which is subdivided into l subintervals. The method is applied recursively, and at each step the variance decreases, i.e. the variance obtained at the kth step is smaller than that found at the (k-1)th step. Since the estimator of the integral is unbiased, this directly reduces the estimation error. The objective is to optimize m.
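A minimal sketch of the antithetic-variates idea behind the thesis (the integrand exp(x) and the sample sizes are illustrative assumptions, not taken from the thesis): pairing each uniform draw U with 1 - U induces negative correlation for a monotonic f, reducing the variance of the estimate at the same evaluation budget.

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.exp            # monotonic integrand; the true integral over (0, 1) is e - 1
n = 100_000

u = rng.random(n)
plain = f(u).mean()                           # crude Monte Carlo, n evaluations
u2 = u[: n // 2]
anti = (0.5 * (f(u2) + f(1.0 - u2))).mean()   # antithetic pairs, also n evaluations

var_plain = f(u).var()                        # per-draw variance, crude estimator
var_anti = (0.5 * (f(u) + f(1.0 - u))).var()  # per-pair variance, antithetic estimator
print(f"plain {plain:.4f}  antithetic {anti:.4f}  (true {np.e - 1:.4f})")
print(f"variance: plain {var_plain:.4f}  antithetic {var_anti:.4f}")
```

For this integrand the negative covariance between f(U) and f(1 - U) cuts the variance by well over an order of magnitude; the thesis' recursive subdivision scheme pushes the same mechanism further within each subinterval.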
The files accompanying this document were produced with LaTeX, and the simulations were carried out in S-Plus.
Henderson, Tamie, and Tamie Anakotta. "Estimating the variance of the Horvitz-Thompson estimator." Thesis, 2006. http://hdl.handle.net/1885/10608.
Boulanger, Laurence. "Comparaison d'estimateurs de la variance du TMLE." Thèse, 2018. http://hdl.handle.net/1866/22542.
Krishnan, Sunder Ram. "Optimum Savitzky-Golay Filtering for Signal Estimation." Thesis, 2013. http://etd.iisc.ac.in/handle/2005/3293.
Béliveau, Audrey. "Estimation simplifiée de la variance dans le cas de l'échantillonnage à deux phases." Thèse, 2011. http://hdl.handle.net/1866/6279.
In this thesis we study variance estimation for the double expansion estimator and for calibration estimators under two-phase designs. We suggest using a variance decomposition different from the one usually used in two-phase sampling, which leads to a simplified variance estimator. We then look for the conditions under which the simplified variance estimators are appropriate, considering the following particular cases: (1) Poisson design at the second phase, (2) two-stage design, (3) simple random sampling at each phase, (4) simple random sampling at the second phase. We show that a crucial condition for the simplified variance estimator to be valid in cases (1) and (2) is that the first-phase sampling fraction be negligible (or small). We also show in cases (3) and (4) that the simplified variance estimator can be used with some calibration estimators when the first-phase sampling fraction is negligible and the population size is large enough. Furthermore, we show that the simplified estimators can be obtained in an alternative way using the reversed approach (Fay, 1991; Shao and Steel, 1999). Finally, we conduct simulation studies to validate the theoretical results.
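A toy illustration of the double expansion estimator in case (3), simple random sampling at each phase (the population, sample sizes, and study variable below are invented, not the thesis' setting): each second-phase observation is expanded by the inverse of its overall inclusion probability pi1 * pi2, giving an approximately unbiased estimator of the population total.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
y = rng.gamma(2.0, 10.0, size=N)   # hypothetical study variable; target is y.sum()

n1, n2 = 1_000, 200                # simple random sampling at each phase
pi1, pi2 = n1 / N, n2 / n1         # first- and second-phase inclusion probabilities

reps = 400
est = np.empty(reps)
for r in range(reps):
    phase1 = rng.choice(N, size=n1, replace=False)       # phase 1: sample the population
    phase2 = rng.choice(phase1, size=n2, replace=False)  # phase 2: subsample phase 1
    est[r] = y[phase2].sum() / (pi1 * pi2)               # double expansion estimator

print(f"true total {y.sum():.0f}  mean of {reps} estimates {est.mean():.0f}")
```

Averaged over replications the estimator centers on the true total; the thesis' contribution concerns how to estimate its variance, and when the simplified variance estimator is valid (here the first-phase fraction n1/N = 0.1 is small, as the thesis requires).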
Daley, Joseph W. "Mixed model methods for quantitative trait loci estimation in crosses between outbred lines." Thesis, 2003. https://figshare.com/articles/thesis/Mixed_model_methods_for_quantitative_trait_loci_estimation_in_crosses_between_outbred_lines/21376767.
Methodology is developed for Quantitative Trait Loci (QTL) analysis in F2 and backcross designed experiments between outbred lines using a mixed model framework through the modification of segment mapping techniques. Alleles are modelled in the F1 and parental generations, allowing the estimation of individual additive allele effects while accounting for QTL segregation within lines as well as differences in mean QTL effects between lines.
Initially the theory, called F1 origin mapping, is developed for a single-trait scenario involving possibly multiple QTL and polygenic variation. Additive genetic variances are estimated via Restricted Maximum Likelihood (REML) and allele effects are modelled using Best Linear Unbiased Prediction (BLUP). Simulation studies are carried out comparing F1 origin mapping with existing segment mapping methods in a number of genetic scenarios. While there was no significant difference in the estimation of effects between the two methods, the average CPU time over one hundred replicates was 0.26 seconds for F1 origin mapping and 3.77 seconds for the segment mapping method. This improvement in computational efficiency is due to the restructuring of IBD matrices, which results in inversion and REML iteration over much smaller matrices.
Further theory is developed which extends F1 origin mapping from single to multiple trait scenarios for F2 crosses between outbred lines. A bivariate trait is simulated using a single QTL with and without a polygenic component, and a single-trait and a bivariate-trait analysis were performed to compare the two approaches. There was no significant difference in the estimation of QTL effects between the two approaches; however, there was a slight improvement in the accuracy of QTL position estimates in the multiple trait approach. The advantage of F1 origin mapping with regard to computational efficiency becomes even more important with multiple trait analysis and allows the investigation of interesting biological models of gene expression.
F1 origin mapping is developed further to model the correlation structure inherent in repeated measures data collected on F2 crosses between outbred lines. A study was conducted showing that repeated measures F1 origin mapping and multiple trait F1 origin mapping give similar results in certain circumstances. Another simulation study was conducted in which five regular repeated measures were simulated with allele breed-difference effects and allele variances increasing linearly over time. Various polynomial orders of fit were investigated, with the linear order of fit modelling the data most parsimoniously; it correctly identified the increasing trend in both the additive allele difference and the allele variance. Repeated measures F1 origin mapping retains the benefits of exploiting the correlated nature of repeated measures while increasing the efficiency of QTL parameter estimation. Hence, it would be useful for QTL studies on measurements such as milk yield or live weights collected at irregular intervals.
Theory is developed to combine the data from QTL studies involving F2 and backcross designed experiments. Genetic covariance matrices are developed for random QTL effects by modelling allele variation in the parental generation instead of the offspring generation for an F2 and backcross between outbred lines. The result is a general QTL estimation method called parental origin mapping. Phenotypes and genotypes from such a study involving Romney and Merino sheep are analysed providing evidence for a QTL affecting adult and hogget fibre diameter.
By coupling these new methods with computer software programs such as ASREML, F1 origin mapping and parental origin mapping provide powerful and flexible tools for QTL studies with the ability to efficiently handle single traits, multiple traits and repeated measures.