Theses on the topic "Estimation de normales"

Consult the top 50 theses (master's and doctoral dissertations) for your research on the topic "Estimation de normales".

You can also download the full text of a publication as a .pdf file and read its abstract online whenever it is available in the metadata.

Theses from many different research areas are included to help you compile an accurate bibliography.

1

Criticou, Doukissa. "Estimateurs à rétrécisseurs (cas de distributions normales) : une classe d'estimateurs bayesiens". Rouen, 1986. http://www.theses.fr/1986ROUES050.

Abstract:
We are interested in the Bayesian treatment of estimating the mean in the Gaussian linear model, when: 1) the variance is known up to a multiplicative factor; 2) the loss used is quadratic (weighted by the inverse of the variance); 3) the prior probability is a mixture of Gaussian distributions.
2

Rieux, Frédéric. "Processus de diffusion discret : opérateur laplacien appliqué à l'étude de surfaces". Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20201/document.

Abstract:
Le contexte est la géométrie discrète dans Zn. Il s'agit de décrire les courbes et surfaces discrètes composées de voxels : les définitions usuelles de droites et plans discrets épais se comportent mal quand on passe à des ensembles courbes. Comment garantir un bon comportement topologique, les connexités requises, dans une situation qui généralise les droites et plans discrets ? Le calcul de données sur ces courbes, normales, tangentes, courbure, ou des fonctions plus générales, fait appel à des moyennes utilisant des masques. Une question est la pertinence théorique et pratique de ces masques. Une voie explorée est le calcul de masques fondés sur la marche aléatoire. Une marche aléatoire partant d'un centre donné sur une courbe ou une surface discrète permet d'affecter à chaque autre voxel un poids, le temps moyen de visite. Ce noyau permet de calculer des moyennes et par là, des dérivées. L'étude du comportement de ce processus de diffusion a permis de retrouver des outils classiques de géométrie sur des surfaces maillées, et de fournir des estimateurs de tangente et de courbure performants. La diversité du champ d'applications de ce processus de diffusion a été mise en avant, retrouvant ainsi des méthodes classiques mais avec une base théorique identique. Mots-clés : processus markovien, géométrie discrète, estimateurs de tangentes, normales, courbure, noyau de diffusion, analyse d'images.
The context of this work is discrete geometry in Zn. We describe discrete curves and surfaces composed of voxels: how can classical notions of analysis, such as tangents and normals, be computed? Computations on discrete curves rely on averaging masks, and a large body of work has studied the pertinence of such masks. We propose an averaging mask based on random walks: a random walk starting from a point of a curve or a surface assigns to each point a weight, the mean time spent there. This kernel allows us to compute averages and derivatives. The study of this diffusion process lets us recover classical notions of geometry on meshed surfaces and yields accurate tangent and curvature estimators. We present a broad field of applications of this approach, recovering classical tools used across the discrete geometry community from the same theoretical basis.
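As a rough illustration of the idea described in this abstract, the following Python sketch builds a random-walk "visit" kernel on a small point neighbourhood and uses it as weights for a normal estimate. The function names, the k-nearest-neighbour graph and all parameters are illustrative choices, not the thesis's actual diffusion operator.

```python
import numpy as np

def diffusion_weights(points, start, k=6, n_walks=500, n_steps=20, rng=None):
    """Weight each point of a small neighbourhood by how often short random
    walks started at `start` visit it (a toy stand-in for a mean-visit-time
    diffusion kernel). points: (n, d) array."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(rng)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]      # k nearest neighbours of each point
    visits = np.zeros(len(points))
    for _ in range(n_walks):
        cur = start
        for _ in range(n_steps):
            cur = rng.choice(nbrs[cur])           # hop to a random neighbour
            visits[cur] += 1
    return visits / visits.sum()

def diffusion_normal(points, start, **kw):
    """Normal direction at points[start]: eigenvector of the weighted
    covariance with the smallest eigenvalue."""
    points = np.asarray(points, dtype=float)
    w = diffusion_weights(points, start, **kw)
    centered = points - (w[:, None] * points).sum(axis=0)
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', centered, centered)).sum(axis=0)
    return np.linalg.eigh(cov)[1][:, 0]
```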
3

Charton, Jerome. "Etude de caractéristiques saillantes sur des maillages 3D par estimation des normales et des courbures discrètes". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0333/document.

Abstract:
Dans l'objectif d'améliorer et d'automatiser la chaîne de reproduction d'objet qui va de l'acquisition à l'impression 3D, nous avons cherché à caractériser de la saillance sur les objets 3D modélisés par la structure d'un maillage 3D. Pour cela, nous avons fait un état de l'art des méthodes d'estimation des propriétés différentielles, à savoir la normale et la courbure, sur des surfaces discrètes sous la forme de maillage 3D. Pour comparer le comportement des différentes méthodes, nous avons repris un ensemble de critères de comparaison classiques dans le domaine, qui sont : la précision, la convergence et la robustesse par rapport aux variations du voisinage. Pour cela, nous avons établi un protocole de tests mettant en avant ces qualités. De cette première comparaison, il est ressorti que l'ensemble des méthodes existantes présentent des défauts selon ces différents critères. Afin d'avoir une estimation des propriétés différentielles plus fiable et précise, nous avons élaboré deux nouveaux estimateurs.
With the aim of improving and automating the object reproduction chain from acquisition to 3D printing, we sought to characterize salience on 3D objects modeled by a 3D mesh structure. For this, we present a state of the art of methods for estimating differential properties, namely the normal and the curvature, on discrete surfaces in the form of a 3D mesh. To compare the behavior of the different methods, we took a set of classic criteria in the domain: accuracy, convergence and robustness with respect to variations of the neighbourhood. For this, we established a test protocol emphasizing these qualities. From this first comparison it was found that all the existing methods have shortcomings with respect to these criteria. In order to obtain a more reliable and accurate estimation of the differential properties, we developed two new estimators.
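For reference, one of the classical estimators this kind of survey typically starts from, per-vertex normals obtained by area-weighted averaging of incident triangle normals, can be sketched in a few lines of Python; this is a textbook baseline, not one of the two new estimators proposed in the thesis.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted average of incident face normals for a triangle mesh.
    vertices: (nv, 3) float array, faces: (nf, 3) int array of vertex indices."""
    V = np.asarray(vertices, dtype=float)
    F = np.asarray(faces, dtype=int)
    # Cross product of two edges: direction = face normal, length = 2 * face area,
    # so summing these vectors weights each face normal by its area.
    fn = np.cross(V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]])
    N = np.zeros_like(V)
    for i in range(3):
        np.add.at(N, F[:, i], fn)                # scatter each face normal to its vertices
    return N / np.maximum(np.linalg.norm(N, axis=1, keepdims=True), 1e-12)
```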
4

Caracotte, Jordan. "Reconstruction 3D par stéréophotométrie pour la vision omnidirectionnelle". Electronic Thesis or Diss., Amiens, 2021. http://www.theses.fr/2021AMIE0031.

Abstract:
Cette thèse s'intéresse à la reconstruction 3D par stéréophotométrie et à la vision omnidirectionnelle. La stéréophotométrie repose sur l'exploitation de plusieurs photographies d'une scène, capturées sous différents éclairages par un même appareil immobile. La vision omnidirectionnelle rassemble les capteurs ainsi que les assemblages de caméras qui permettent de faire l'acquisition en une image d'une très grande portion de l'environnement autour de l'appareil. En s'appuyant sur quatre décennies de travaux en stéréophotométrie et en utilisant le modèle unifié pour la projection centrale, nous tentons de réunir ces deux domaines de recherche. Dans un premier temps, nous nous intéressons aux techniques pour l'estimation des normales à la surface ainsi qu'à l'intégration des gradients de profondeur, dans le but de retrouver la géométrie de la scène étudiée. Nous poursuivons en introduisant une nouvelle équation de l'irradiance ainsi qu'une chaîne de traitements modulaire, toutes deux adaptées aux capteurs à projection centrale. Le fonctionnement de la méthode est vérifié en utilisant une caméra perspective et un capteur catadioptrique. Nous détaillons ensuite une extension de la méthode afin de permettre de réaliser des reconstructions 3D à partir d'images issues de caméras twin-fisheye. Finalement, nous nous intéresserons aux limites de l'approche proposée ainsi qu'aux pistes d'amélioration envisagées
This thesis focuses on the photometric stereo problem and omnidirectional vision. Photometric stereo is a 3D-reconstruction technique which requires several pictures of a surface under different lighting conditions from a single point of view. Omnidirectional vision encompasses the devices and camera rigs that capture a large part of the environment around them in a single image. Following four decades of research in the photometric stereo literature and using the unified model for central projection cameras, we try to merge these research fields. We first focus on techniques for estimating the normals to the surface, and for integrating the depth gradients to retrieve the shape. Then, we introduce a new spherical irradiance equation that we use to solve the photometric stereo problem using two central projection cameras. The approach is validated using synthetic and real images from a perspective camera and a catadioptric imaging device. We later extend the approach to perform 3D reconstruction by photometric stereo using twin-fisheye cameras. Finally, we study some limitations of the approach and discuss ways to overcome these limits.
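The first step mentioned in the abstract, estimating normals from several images taken under different lights, reduces in the classical Lambertian setting to a per-pixel least-squares problem. The sketch below shows that textbook version only (it ignores shadows and the spherical irradiance model developed in the thesis); names and array shapes are illustrative.

```python
import numpy as np

def photometric_stereo_normals(images, lights):
    """Classical Lambertian photometric stereo.
    images: (m, h, w) intensities under m known directional lights.
    lights: (m, 3) unit light directions.
    Returns unit normals (h, w, 3) and albedo (h, w)."""
    m, h, w = images.shape
    I = images.reshape(m, -1)                        # one column per pixel
    # Lambertian model: I = lights @ (albedo * n), solved in least squares per pixel.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # G is (3, n_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```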
5

Terzakis, Demètre. "Estimateurs à rétrécisseurs (cas de distributions normales à variance connue à un facteur près) : contrôle de l'emploi de la différence entre l'observation et l'estimation des moindres carrés". Rouen, 1987. http://www.theses.fr/1987ROUES015.

Abstract:
In the general setting of estimating the mean (assumed to lie in v, a proper linear subspace of the observation space) of a multivariate normal distribution y whose variance is known up to a multiplicative factor, we seek conditions under which the least squares estimator t is dominated by shrinkage (that is, James-Stein type) estimators of the form t(y) - h(t(y), y - t(y))c(t(y)), where c is an endomorphism of v. Many "classical" domination conditions appear as corollaries of ours in the case where t(y) and y - t(y) enter the expression of h only through the values of quadratic forms defined on v and on w respectively (w being the orthogonal complement of v for the symmetric bilinear form associated with the inverse of the variance). We find that restrictive algebraic assumptions on c make it possible to avoid differentiability assumptions on h; when the latter are introduced, we distinguish according to whether, with respect to the second variable in the expression of h, one uses the partial differential (a map from v x w into the dual of w) or the partial derivative along a privileged vector of w.
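For readers unfamiliar with shrinkage (James-Stein) estimators, the simplest member of this family, the positive-part James-Stein estimator of a p-dimensional normal mean with known variance shrinking toward the origin, can be sketched as follows; this is only the textbook special case, not the general form t(y) - h(t(y), y - t(y))c(t(y)) studied in the thesis.

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein estimate of the mean of y ~ N(theta, sigma2 * I),
    valid for dimension p >= 3; shrinks the observation toward the origin."""
    y = np.asarray(y, dtype=float)
    p = y.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(y @ y))
    return shrink * y
```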
6

Lemaire, Jacques. "Étude de propriétés asymptotiques en classification". Nice, 1990. http://www.theses.fr/1990NICE4379.

Abstract:
The aim of this thesis is to study convergence properties of the estimators constructed in a problem of classifying a finite number of data points, as this number tends to infinity. A probabilistic framework for this study is first specified. The underlying model is a decomposition, as a mixture, of the probability distribution of the observations. We give some convergence results showing to what extent consistent estimates of the components of the mixture under study can produce asymptotically optimal classification rules. This result led us to carry out a synthetic literature review of methods for estimating the components of a mixture of probability density functions. We then turn to centroid methods, which are simpler to implement. In a set-based formulation, we have attempted to generalize several earlier asymptotic results, using the notion of normal integrand introduced by Berliochi and Lassry and then, in a stochastic approximation setting, the Robbins-Siegmund lemma from martingale theory. The results obtained are illustrated by a numerical simulation. Four appendices complete this document; they mainly concern the mathematical tools used.
7

Valdivia, Paola Tatiana Llerena. "Correção de normais para suavização de nuvens de pontos". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-19032014-145046/.

Abstract:
Nos anos recentes, suavização de superfícies é um assunto de intensa pesquisa em processamento geométrico. Muitas das abordagens para suavização de malhas usam um esquema de duas etapas: filtragem de normais seguido de um passo de atualização de vértices para corresponder com as normais filtradas. Neste trabalho, propomos uma adaptação de tais esquemas de duas etapas para superfícies representadas por nuvens de pontos. Para isso, exploramos esquemas de pesos para filtrar as normais. Além disso, investigamos três métodos para estimar normais, analisando o impacto de cada método para estimar normais em todo o processo de suavização da superfície. Para uma análise quantitativa, além da comparação visual convencional, avaliamos a eficácia de diferentes opções de implementação usando duas medidas, comparando nossos resultados com métodos de suavização de nuvens de pontos encontrados a literatura
In recent years, surface denoising has been a subject of intensive research in geometry processing. Most recent approaches for mesh denoising use a two-step scheme: normal filtering followed by a point updating step to match the corrected normals. In this work, we propose an adaptation of such two-step approaches for point-based surfaces, exploring three different weight schemes for filtering normals. Moreover, we also investigate three techniques for normal estimation, analyzing the impact of each normal estimation method on the whole point-set smoothing process. Towards a quantitative analysis, in addition to conventional visual comparison, we evaluate the effectiveness of different implementation choices using two measures, comparing our results against state-of-the-art point-based denoising techniques. Keywords: surface smoothing; point-based surface; normal estimation; normal filtering.
8

Wage, Kathleen E. "Adaptive estimation of acoustic normal modes". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12096.

9

Grip, Marcus. "Tyre Performance Estimation during Normal Driving". Thesis, Linköpings universitet, Fordonssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176558.

Abstract:
Driving with tyres not appropriate for the actual conditions can not only lead to accidents related to the tyres, but also cause detrimental effects on the environment via emission of rubber particles if the driving conditions are causing an unexpectedly high amount of tread wear. Estimating tyre performance in an online setting is therefore of interest, and the feasibility of estimating friction performance, velocity performance, and tread wear utilizing available information from the automotive grade sensors is investigated in this thesis. For the friction performance, a trend analysis is performed to investigate the correlation between tyre stiffness and friction potential. Given that there is a correlation, a model is derived based on the trend, having a stiffness parameter as an input in order to predict the friction performance. Tendencies for a linear trend are shown, and a linear regression model is fitted to data and is evaluated by calculating a model fit and studying the residuals. Having a model fit of 80%, the precision of the expected values stemming from the proposed model is concluded to be fairly low, but still enough to roughly indicate the friction performance in winter conditions. A tread wear model that can estimate the amount of abrasive wear is also derived, and the proposed model only utilizes available information from the automotive grade sensors. Due to the model having a parameter that is assumed to be highly tyre specific, only a relative wear difference can be calculated. The model is evaluated in a simulation environment by its ability to indicate if a tyre is under the influence of a higher wear caused by a higher ambient temperature. The results indicate that the model is insufficient in an online setting and cannot accurately describe the phenomenon of softer tyres having a larger amount of wear caused by a high ambient temperature compared to stiffer tyres. Lastly, a double lane change test (ISO 3888-2) is conducted to determine the critical velocity for cornering manoeuvres, which defines the velocity performance. The test was executed for six different sets of tyres, two of each type (winter, all-season, and summer). The approach to estimating the velocity performance in an online setting is analogous to that of the friction performance, and a trend analysis is performed to investigate the correlation between longitudinal tyre stiffness and the critical velocity. The results are rather unexpected and show no substantial differences in velocity performance, even though the tyre-road grip felt distinctively worse for the softer tyres according to the driver. It is concluded that the bias stemming from the professional driver's skills might have distorted the results, and that another approach might need to be considered in order to estimate this performance.
10

Fernandez-Abrevaya, Victoria. "Apprentissage à grande échelle de modèles de formes et de mouvements pour le visage 3D". Electronic Thesis or Diss., Université Grenoble Alpes, 2020. https://theses.hal.science/tel-03151303.

Abstract:
Les modèles du visage 3D fondés sur des données sont une direction prometteuse pour capturer les subtilités complexes du visage humain, et une composante centrale de nombreuses applications grâce à leur capacité à simplifier des tâches complexes. La plupart des approches basées sur les données à ce jour ont été construites à partir d’un nombre limité d’échantillons ou par une augmentation par données synthétiques, principalement en raison de la difficulté à obtenir des scans 3D à grande échelle. Pourtant, il existe une quantité substantielle d’informations qui peuvent être recueillies lorsque l’on considère les sources publiquement accessibles qui ont été capturées au cours de la dernière décennie, dont la combinaison peut potentiellement apporter des modèles plus puissants.Cette thèse propose de nouvelles méthodes pour construire des modèles de la géométrie du visage 3D fondés sur des données, et examine si des performances améliorées peuvent être obtenues en apprenant à partir d’ensembles de données vastes et variés. Afin d’utiliser efficacement un grand nombre d’échantillons d’apprentissage, nous développons de nouvelles techniques d’apprentissage profond conçues pour gérer efficacement les données faciales tri-dimensionnelles. Nous nous concentrons sur plusieurs aspects qui influencent la géométrie du visage : ses composantes de forme, y compris les détails, ses composants de mouvement telles que l’expression, et l’interaction entre ces deux sous-espaces.Nous développons notamment deux approches pour construire des modèles génératifs qui découplent l’espace latent en fonction des sources naturelles de variation, e.g.identité et expression. La première approche considère une nouvelle architecture d’auto-encodeur profond qui permet d’apprendre un modèle multilinéaire sans nécessiter l’assemblage des données comme un tenseur complet. Nous proposons ensuite un nouveau modèle non linéaire basé sur l’apprentissage antagoniste qui davantage améliore la capacité de découplage. Ceci est rendu possible par une nouvelle architecture 3D-2D qui combine un générateur 3D avec un discriminateur 2D, où les deux domaines sont connectés par une couche de projection géométrique.En tant que besoin préalable à la construction de modèles basés sur les données, nous abordons également le problème de mise en correspondance d’un grand nombre de scans 3D de visages en mouvement. Nous proposons une approche qui peut gérer automatiquement une variété de séquences avec des hypothèses minimales sur les données d’entrée. Ceci est réalisé par l’utilisation d’un modèle spatio-temporel ainsi qu’une initialisation basée sur la régression, et nous montrons que nous pouvons obtenir des correspondances précises d’une manière efficace et évolutive.Finalement, nous abordons le problème de la récupération des normales de surface à partir d’images naturelles, dans le but d’enrichir les reconstructions 3D grossières existantes. Nous proposons une méthode qui peut exploiter toutes les images disponibles ainsi que les données normales, qu’elles soient couplées ou non, grâce à une nouvelle architecture d’apprentissage cross-modale. Notre approche repose sur un nouveau module qui permet de transférer les détails locaux de l’image vers la surface de sortie sans nuire aux performances lors de l’auto-encodage des modalités, en obtenant des résultats de pointe pour la tâche
Data-driven models of the 3D face are a promising direction for capturing the subtle complexities of the human face, and a central component to numerous applications thanks to their ability to simplify complex tasks. Most data-driven approaches to date were built from either a relatively limited number of samples or by synthetic data augmentation, mainly because of the difficulty in obtaining large-scale and accurate 3D scans of the face. Yet, there is a substantial amount of information that can be gathered when considering publicly available sources that have been captured over the last decade, whose combination can potentially bring forward more powerful models.This thesis proposes novel methods for building data-driven models of the 3D face geometry, and investigates whether improved performances can be obtained by learning from large and varied datasets of 3D facial scans. In order to make efficient use of a large number of training samples we develop novel deep learning techniques designed to effectively handle three-dimensional face data. We focus on several aspects that influence the geometry of the face: its shape components including fine details, its motion components such as expression, and the interaction between these two subspaces.We develop in particular two approaches for building generative models that decouple the latent space according to natural sources of variation, e.g.identity and expression. The first approach considers a novel deep autoencoder architecture that allows to learn a multilinear model without requiring the training data to be assembled as a complete tensor. We next propose a novel non-linear model based on adversarial training that further improves the decoupling capacity. This is enabled by a new 3D-2D architecture combining a 3D generator with a 2D discriminator, where both domains are bridged by a geometry mapping layer.As a necessary prerequisite for building data-driven models, we also address the problem of registering a large number of 3D facial scans in motion. We propose an approach that can efficiently and automatically handle a variety of sequences while making minimal assumptions on the input data. This is achieved by the use of a spatiotemporal model as well as a regression-based initialization, and we show that we can obtain accurate registrations in an efficient and scalable manner.Finally, we address the problem of recovering surface normals from natural images, with the goal of enriching existing coarse 3D reconstructions. We propose a method that can leverage all available image and normal data, whether paired or not, thanks to a new cross-modal learning architecture. Core to our approach is a novel module that we call deactivable skip connections, which allows to transfer the local details from the image to the output surface without hurting the performance when autoencoding modalities, achieving state-of-the-art results for the task
11

Roux, Hannaline. "Construction and parameter estimation of wrapped normal models". Diss., University of Pretoria, 2019. http://hdl.handle.net/2263/77880.

Abstract:
If a known distribution on the real line is given, it can be wrapped on the circumference of a unit circle. This research entails the study of a univariate skew-normal distribution where the skew-normal distribution is generalised for the case of bimodality. Both the skew-normal and flexible generalised skew-normal distributions are wrapped onto a unit circle, consequently referred to as a wrapped skew-normal and a wrapped flexible generalised skew-normal distribution respectively. For each of these distributions a simulation study is conducted, where the performance of maximum likelihood estimation is evaluated. Skew scale mixtures of normal distributions with the wrapped versions of these distributions are proposed and graphical representations are provided. These distributions are also compared in an application to wind direction data.
Dissertation (MSc)--University of Pretoria, 2019.
Statistics
MSc
Unrestricted
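The "wrapping" construction described in the abstract of this record is easy to illustrate by simulation: draw from a distribution on the real line and reduce the draws modulo 2π. The sketch below uses SciPy's skew-normal generator; it only illustrates the wrapping step, not the maximum likelihood estimation studied in the dissertation.

```python
import numpy as np
from scipy import stats

def wrapped_skewnorm_sample(alpha, loc=0.0, scale=1.0, size=1000, rng=None):
    """Draw from a skew-normal on the real line and wrap onto [0, 2*pi)."""
    x = stats.skewnorm.rvs(alpha, loc=loc, scale=scale, size=size, random_state=rng)
    return np.mod(x, 2 * np.pi)   # angles on the unit circle
```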
12

Stephens, Matthew. "Bayesian methods for mixtures of normal distributions". Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242056.

13

Qumsiyeh, Sahar Botros. "Non-normal Bivariate Distributions: Estimation And Hypothesis Testing". Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608941/index.pdf.

Abstract:
When using data for estimating the parameters in a bivariate distribution, the tradition is to assume that data comes from a bivariate normal distribution. If the distribution is not bivariate normal, which often is the case, the maximum likelihood (ML) estimators are intractable and the least square (LS) estimators are inefficient. Here, we consider two independent sets of bivariate data which come from non-normal populations. We consider two distinctive distributions: the marginal and the conditional distributions are both Generalized Logistic, and the marginal and conditional distributions both belong to the Student's t family. We use the method of modified maximum likelihood (MML) to find estimators of various parameters in each distribution. We perform a simulation study to show that our estimators are more efficient and robust than the LS estimators even for small sample sizes. We develop hypothesis testing procedures using the LS and the MML estimators. We show that the latter are more powerful and robust. Moreover, we give a comparison of our tests with another well known robust test due to Tiku and Singh (1982) and show that our test is more powerful. The latter is based on censored normal samples and is quite prominent (Lehmann, 1986). We also use our MML estimators to find a more efficient estimator of Mahalanobis distance. We give real life examples.
14

Rasoafaraniaina, Rondrotiana J. "Preliminary test estimation in uniformly locally and asymptotically normal models". Doctoral thesis, Universite Libre de Bruxelles, 2020. https://dipot.ulb.ac.be/dspace/bitstream/2013/312253/4/Contents.pdf.

Abstract:
The present thesis provides a general asymptotic theory for preliminary test estimators (PTEs). PTEs are typically used when one needs to estimate a parameter while having some uncertain prior information about it. In the literature, preliminary test estimation has been applied to some specific models, but to the best of our knowledge no general asymptotic theory was previously available. After a study of PTEs in a multisample principal component context, we first provide a general asymptotic theory for PTEs in uniformly locally and asymptotically normal (ULAN) models. An extensive list of statistical and econometric models are ULAN, making our results quite general. Our main results are obtained using the Le Cam asymptotic theory under the assumption that the estimators involved in the PTEs admit Bahadur-type asymptotic representations. Then, we propose PTEs involving multiple tests and therefore multiple constrained estimators; we call them preliminary multiple test estimators. For the latter, we also derive a very general asymptotic theory in ULAN models. Our theoretical results are illustrated on problems involving the estimation of covariance matrices, both via simulations and a real data example.
Doctorat en Sciences
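A preliminary test estimator in its most elementary form can be sketched as follows: keep a restricted (prior-information) value unless a pretest rejects it, otherwise use the unrestricted estimator. This toy one-sample normal-mean version is only meant to convey the definition; the thesis works with general ULAN models and Bahadur-type representations.

```python
import numpy as np
from scipy import stats

def pte_normal_mean(x, mu0=0.0, alpha=0.05):
    """Preliminary test estimator of a normal mean: return the restricted
    value mu0 unless a one-sample t-test rejects H0: mu = mu0 at level alpha,
    in which case return the unrestricted sample mean."""
    x = np.asarray(x, dtype=float)
    _, p_value = stats.ttest_1samp(x, mu0)
    return x.mean() if p_value < alpha else mu0
```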
15

Nogueira, José Ivan Mota. "Uma estimativa interior do gradiente para a equação da curvatura média em variedades riemannianas". Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=8636.

Abstract:
Coordenação de Aperfeiçoamento de Nível Superior
Deduzimos uma estimativa interior do gradiente para a equação da curvatura média para gráficos de Killing em variedades riemannianas inspirado na técnica de perturbações normais devido a N. Korevaar.
We deduce an interior gradient estimate for the mean curvature equation for Killing graphs in Riemannian manifolds inspired by the normal perturbation technique due to N. Korevaar.
16

Eiesland, Ole Wostryck. "Estimating seabed velocities from normal modes". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for petroleumsteknologi og anvendt geofysikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18514.

Abstract:
In this Master's thesis a method for estimating seabed P-wave velocities from normal mode seismic data is developed. This is done through forward modeling, using two-dimensional finite difference modeling to generate synthetic data based on a given parallel two-layered laterally varying seabed velocity model and a constant two-layered density model, with a common fixed water depth. A semblance inversion technique is developed in MATLAB using the period equation, and the resulting velocity profiles are plotted against the exact velocity model to check the validity of the estimates. The same method is extended to estimation of seabed densities. To analyse the robustness of the method, tests with added pseudo-random noise are performed. The results show a good performance of the semblance method in reproducing the model velocity parameters. The introduction of noise is handled well and decent results are obtained for significantly low signal-to-noise ratios. This suggests that the semblance method is applicable for determining other parameters influencing the normal mode response signal.
17

Cheevatanarak, Suchittra. "A Comparison of Multivariate Normal and Elliptical Estimation Methods in Structural Equation Models". Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc278401/.

Abstract:
In the present study, parameter estimates, standard errors and chi-square statistics were compared using normal and elliptical estimation methods given three research conditions: population data contamination (10%, 20%, and 30%), sample size (100, 400, and 1000), and kurtosis (kappa = 1, 10, 20).
18

Al, Hassan Ahmad. "Estimation des lois extremes multivariees". Paris 6, 1988. http://www.theses.fr/1988PA066014.

Abstract:
Let (x(1), y(1)), ..., (x(n), y(n)) be a sample of an extreme random vector (x, y), i.i.d. according to one of the following models: logistic, Gumbel, mixed, natural. By reducing the information through new procedures, we present original results on the problem of estimating the dependence parameters of (x, y), and we construct tests based on these estimators. Finally, we establish some results on the problem of non-parametric estimation of a dependence function.
19

Matos, Larissa Avila 1987. "Modelos lineares e não lineares de efeitos mistos para respostas censuradas usando as distribuições normal e t-Student multivariadas". [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306684.

Abstract:
Orientador: Víctor Hugo Lachos Dávila
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Resumo: Modelos mistos são geralmente usados para representar dados longitudinais ou de medidas repetidas. Uma complicação adicional surge quando a resposta é censurada, por exemplo, devido aos limites de quantificação do ensaio utilizado. Distribuições normais para os efeitos aleatórios e os erros residuais são geralmente assumidas, mas tais pressupostos fazem as inferências vulneráveis à presença de outliers. Motivados por uma preocupação da sensibilidade para potenciais outliers ou dados com caudas mais pesadas do que a normal, pretendemos desenvolver nessa dissertação inferência para modelos lineares e não lineares de efeito misto censurados (NLMEC/LMEC) com base na distribuição t-Student multivariada, sendo uma alternativa flexível ao uso da distribuição normal correspondente. Propomos um algoritmo ECM para computar as estimativas de máxima verossimilhança para os NLMEC/LMEC. Este algoritmo utiliza expressões fechadas no passo-E, que se baseia em fórmulas para a média e a variância de uma distribuição t-multivariada truncada. O algoritmo proposto é implementado no pacote tlmec do R. Também propomos aqui um algoritmo ECM exato para os modelos lineares e não lineares de efeito misto censurados, com base na distribuição normal multivariada, que nos permite desenvolver análise de influência local para modelos de efeito misto com base na esperança condicional da função log-verossimilhança dos dados completos. Os procedimentos desenvolvidos são ilustrados com a análise longitudinal da carga viral do HIV, apresentada em dois estudos recentes sobre a AIDS.
Abstract: Mixed models are commonly used to represent longitudinal or repeated measures data. An additional complication arises when the response is censored, for example, due to limits of quantification of the assay used. Normal distributions for random effects and residual errors are usually assumed, but such assumptions make inferences vulnerable to the presence of outliers. Motivated by a concern of sensitivity to potential outliers or data with tails longer than normal, we aim to develop in this dissertation inference for linear and nonlinear mixed effects models with censored response (NLMEC/LMEC) based on the multivariate Student-t distribution, a flexible alternative to the use of the corresponding normal distribution. We propose an ECM algorithm for computing the maximum likelihood estimates for NLMEC/LMEC. This algorithm uses closed-form expressions at the E-step, which relies on formulas for the mean and variance of a truncated multivariate-t distribution. The proposed algorithm is implemented in the R package tlmec. We also propose here an exact ECM algorithm for linear and nonlinear mixed effects models with censored response based on the multivariate normal distribution, which enables us to develop local influence analysis for mixed effects models on the basis of the conditional expectation of the complete-data log-likelihood function. The developed procedures are illustrated with two case studies, involving the analysis of longitudinal HIV viral load in two recent AIDS studies.
Mestrado
Estatistica
Mestre em Estatística
20

Fourdrinier, Dominique. "Deux extensions des résultats classiques sur les estimateurs à rétrécisseur : cas de rétrécisseurs non différentiables ; cas de lois à symétrie sphérique". Rouen, 1986. http://www.theses.fr/1986ROUES043.

Abstract:
We consider a class of shrinkage estimators of the mean of a multivariate normal distribution (documents 1 and 2) and, more generally, of the location parameter of a spherically symmetric distribution (document 3). We establish sufficient conditions for uniform domination of the least squares estimator with respect to an arbitrary quadratic loss (documents 1 and 2) and, more generally, with respect to a class of spherically symmetric distributions and a family of quadratic losses associated with this class (document 3).
21

Eren, Emrah. "Effect Of Estimation In Goodness-of-fit Tests". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/2/12611046/index.pdf.

Abstract:
In statistical analysis, distributional assumptions are needed to apply parametric procedures. Assumptions about the underlying distribution should be true for accurate statistical inferences. Goodness-of-fit tests are used for checking the validity of the distributional assumptions. To apply some of the goodness-of-fit tests, the unknown population parameters are estimated. The null distributions of test statistics become complicated or depend on the unknown parameters if population parameters are replaced by their estimators, which restricts the use of the test. Goodness-of-fit statistics which are invariant to the parameters can be used if the distribution under the null hypothesis is a location-scale distribution. For location and scale invariant goodness-of-fit tests, there is no need to estimate the unknown population parameters. However, approximations are used in some of those tests. Different types of estimation and approximation techniques are used in this study to compute goodness-of-fit statistics for complete and censored samples from univariate distributions as well as complete samples from the bivariate normal distribution. Simulated power properties of the goodness-of-fit tests against a broad range of skew and symmetric alternative distributions are examined to identify the estimation effects in goodness-of-fit tests. The main aim of this thesis is to modify goodness-of-fit tests by using different estimators or approximation techniques, and finally see the effect of estimation on the power of these tests.
22

Markström, Ingemar. "Comparing normal estimation methods for the rendering of unorganized point clouds". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261598.

Abstract:
Surface normals are fundamental in computer graphics applications such as computer vision, object recognition, and lighting calculations. When working with unorganized point clouds of surfaces, there is a need for fast and accurate normal estimation methods. This thesis presents the investigation and implementation of two different methods of normal estimation on fixed-size local neighborhoods in unorganized point clouds. Two main categories of tests were conducted. The first type was visual inspection and the second consisted of numeric analysis of the normal estimation process and results. Point cloud data used in the study included numerically exact representations of spheres, cubes and cones, as well as uniformly sampled and laser-scanned real-world point clouds with millions of points. Complete triangle averaging was found to be the method of choice for small neighborhoods, justified by faster running time while still estimating high-quality normals. When larger neighborhood sizes were needed, a size breakpoint was found above which principal component analysis should be used instead; it estimates normals of similar quality to complete triangle averaging but with the added benefit of near-constant running time independent of neighborhood size.
Ytnormaler är fundamentalt viktiga i datorgrafiktillämpningar som exempelvis datorseende, objektigenkänning och belysningsberäkningar. Det finns ett behov av snabba och precisa beräkningsmetoder för uppskattade normaler vid hanteringen av o-organiserade punktmoln. I denna uppsats presenteras undersökningen och implementationen av två olika sätt att beräkna normaler för punkter i o-organiserade punktmoln från grannskap av förbestämt antal. Två huvudkategorier av tester utfördes. Den första typen var visuell inspektion och den andra bestod av numerisk analys av både beräkning och de beräknade normalerna. Punktmolnen som användes för undersökningarna inkluderade matematiskt korrekta sfärer, kuber och koner, samt både regelbundet samplade och laserskannade verkliga punktmoln med miljontals punkter. Komplett Triangulering av grannar visade sig vara den föredragna metoden för små grannskap, motiverad av kortare beräkningstid och högkvalitativt resultat. När antalet använda grannar steg kunde en brytpunkt ses, där ett byte till principalkomponentanalys kunde motiveras, då resultatet var normaler av likvärdig kvalitet, men med fördelen av nära konstant körtid oberoende av antalet använda grannar.
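One of the two methods compared in this thesis, principal component analysis on a fixed-size neighbourhood, amounts to taking the eigenvector of the local covariance with the smallest eigenvalue. A minimal sketch (illustrative only; it leaves out neighbour search and normal orientation):

```python
import numpy as np

def pca_normal(neighbourhood):
    """Estimate a surface normal from a local neighbourhood of an unorganized
    point cloud: the direction of least variance of the points.
    neighbourhood: (k, 3) array of the query point's k nearest neighbours."""
    pts = np.asarray(neighbourhood, dtype=float)
    cov = np.cov(pts.T)                        # 3x3 covariance of the patch
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return eigvecs[:, 0]                       # smallest-eigenvalue eigenvector
```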
23

Tang, Qi. "Comparison of Different Methods for Estimating Log-normal Means". Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etd/2338.

Abstract:
The log-normal distribution is a popular model in many areas, especially in biostatistics and survival analysis where the data tend to be right skewed. In our research, a total of ten different estimators of log-normal means are compared theoretically. Simulations are done using different values of parameters and sample size. As a result of the comparison, the "degree of freedom adjusted" maximum likelihood estimator and the Bayesian estimator under quadratic loss are the best when using the mean square error (MSE) as a criterion. The ten estimators are applied to a real dataset, an environmental study from the Naval Construction Battalion Center (NCBC) Superfund site in Rhode Island.
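Two of the simpler estimators that typically enter such a comparison can be written down directly: the plain sample mean, and the maximum-likelihood plug-in exp(mu + sigma^2/2) computed on the log scale. The sketch below shows only these two; the other estimators compared in the thesis (bias-corrected, Bayesian, and so on) are not reproduced here.

```python
import numpy as np

def lognormal_mean_estimates(x):
    """Estimate E[X] for log-normal data x in two ways: the sample mean and
    the ML plug-in exp(mu_hat + s2_hat / 2) based on the log-transformed data."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    mu_hat, s2_hat = logx.mean(), logx.var()   # MLEs of mu and sigma^2
    return {"sample_mean": float(x.mean()),
            "ml_plugin": float(np.exp(mu_hat + 0.5 * s2_hat))}
```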
24

Akhter, A. S. "Estimating the parameters of the truncated normal distribution". Thesis, University of Essex, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378366.

25

Malsiner-Walli, Gertraud, Sylvia Frühwirth-Schnatter e Bettina Grün. "Identifying mixtures of mixtures using Bayesian estimation". Taylor & Francis, 2017. http://dx.doi.org/10.1080/10618600.2016.1200472.

Abstract:
The use of a finite mixture of normal distributions in model-based clustering allows non-Gaussian data clusters to be captured. However, identifying the clusters from the normal components is challenging and in general either achieved by imposing constraints on the model or by using post-processing procedures. Within the Bayesian framework we propose a different approach based on sparse finite mixtures to achieve identifiability. We specify a hierarchical prior where the hyperparameters are carefully selected such that they are reflective of the cluster structure aimed at. In addition, this prior allows the model to be estimated using standard MCMC sampling methods. In combination with a post-processing approach which resolves the label switching issue and results in an identified model, our approach allows us to simultaneously (1) determine the number of clusters, (2) flexibly approximate the cluster distributions in a semi-parametric way using finite mixtures of normals and (3) identify cluster-specific parameters and classify observations. The proposed approach is illustrated in two simulation studies and on benchmark data sets.
26

Sadeghkhani, Abdolnasser. "Estimation d'une densité prédictive avec information additionnelle". Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/11238.

Abstract:
Dans le contexte de la théorie bayésienne et de la théorie de la décision, l'estimation d'une densité prédictive d'une variable aléatoire occupe une place importante. Typiquement, dans un cadre paramétrique, il y a présence d'information additionnelle pouvant être interprétée sous forme d'une contrainte. Cette thèse porte sur des stratégies et des améliorations, tenant compte de l'information additionnelle, pour obtenir des densités prédictives efficaces et parfois plus performantes que d'autres données dans la littérature. Les résultats s'appliquent pour des modèles avec données gaussiennes avec ou sans une variance connue. Nous décrivons des densités prédictives bayésiennes pour les coûts Kullback-Leibler, Hellinger, Kullback-Leibler inversé, ainsi que pour des coûts du type α-divergence, et établissons des liens avec les familles de lois de probabilité du type skew-normal. Nous obtenons des résultats de dominance faisant intervenir plusieurs techniques, dont l'expansion de la variance, les fonctions de coût duaux en estimation ponctuelle, l'estimation sous contraintes et l'estimation de Stein. Enfin, nous obtenons un résultat général pour l'estimation bayésienne d'un rapport de deux densités provenant de familles exponentielles.
Abstract: In the context of Bayesian theory and decision theory, the estimation of a predictive density of a random variable represents an important and challenging problem. Typically, in a parametric framework, there exists some additional information that can be interpreted as constraints. This thesis deals with strategies and improvements that take into account the additional information, in order to obtain effective and sometimes better performing predictive densities than others in the literature. The results apply to normal models with a known or unknown variance. We describe Bayesian predictive densities for Kullback-Leibler, Hellinger and reverse Kullback-Leibler losses as well as for α-divergence losses, and establish links with skew-normal densities. We obtain dominance results using several techniques, including expansion of variance, dual loss functions in point estimation, restricted parameter space estimation, and Stein estimation. Finally, we obtain a general result for the Bayesian estimator of a ratio of two exponential family densities.
27

Liang, Yuli. "Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models". Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-115347.

Abstract:
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data. In particular, estimation in the balanced random effects with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. Existence of the explicit maximum likelihood estimators is discussed and sufficient conditions for obtaining explicit and unique estimators for the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. In the external tests, various hypotheses about testing block covariance structures, as well as mean structures, are considered, and the internal tests are concerned with testing specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
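To make the covariance structure concrete, the sketch below assembles a block circular (block circulant) matrix from a list of blocks, placing S_{(j-i) mod k} at block position (i, j); this only illustrates the pattern, not the spectral decomposition or the maximum likelihood estimators derived in the thesis.

```python
import numpy as np

def block_circular(blocks):
    """Assemble a block circulant matrix from p x p blocks [S0, ..., S_{k-1}],
    placing S_{(j - i) mod k} at block position (i, j). For the result to be a
    valid symmetric covariance matrix, the blocks must satisfy S_j == S_{k-j}.T."""
    k = len(blocks)
    p = blocks[0].shape[0]
    out = np.zeros((k * p, k * p))
    for i in range(k):
        for j in range(k):
            out[i * p:(i + 1) * p, j * p:(j + 1) * p] = blocks[(j - i) % k]
    return out
```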
28

Fahey, Sean O'Flaherty. "Parameter Estimation of Structural Systems Possessing One or Two Nonlinear Normal Modes". Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/29477.

Abstract:
In this Dissertation, we develop, and provide proof of principle for, parameter identification techniques for structural systems that can be described in terms of one or two nonlinear normal modes. We model the dynamics of these modes by second-order ordinary-differential equations based on the principles of mechanics, past experience, and engineering judgment. We perform a number of separate experiments on a two-mass structure using several different types of excitation. For the linear tests, the theoretical system response is known in closed-form. For the nonlinear test, we use the method of multiple scales to determine second-order uniform expansions of the model equations and hence determine the approximations to responses of the structure. Then, we estimate the linear and nonlinear parameters by regressive fits between the theoretically and experimentally obtained response relations. We report deviations and agreements between model and experiment.
Ph. D.
29

Bosman, Riëtte. "Threshold estimation in normal and impaired ears using Auditory Steady State Responses". Pretoria : [s.n.], 2003. http://upetd.up.ac.za/thesis/available/etd-10282004-080444.

30

Wang, Sheng. "Regularized skewness parameter estimation for multivariate skew normal and skew t distributions". Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6875.

Abstract:
The skew-normal (SN) distribution introduced by Azzalini has opened a new era for analyzing skewed data. The idea behind it is that it incorporates a new parameter regulating shape and skewness on the symmetric Gaussian distribution. This idea was soon extended to other symmetric distributions such as the Student's t distribution, resulting in the invention of the skew t (ST) distribution. The multivariate versions of the two distributions, i.e. the multivariate skew normal (MSN) and multivariate skew t (MST) distributions, have received considerable attention because of their ability to fit skewed data, together with some other properties such as mathematical tractability. While many researchers focus on fitting the MSN and MST distributions to data, in this thesis we address another important aspect of statistical modeling using those two distributions, i.e. skewness selection and estimation. Skewness selection, as we discuss it here, means identifying which components of the skewness parameter in the MSN and MST distributions are zero. In this thesis, we begin by reviewing some important properties of the two distributions and then we describe the obstacles that block us from doing skewness selection in the direct parameterizations of the two distributions. Then, to circumvent those obstacles, we introduce a new parameterization to use for skewness selection. The nice properties of this new parameterization are also summarized. After introduction of the new parameterization, we discuss our proposed methods to reach the goal of skewness selection. Particularly, we consider adding appropriate penalties to the loss functions of the MSN and MST distributions, represented in the new parameterization of the two distributions. Technical details such as initial value selection and tuning parameter selection are also discussed. Asymptotic consistency and the oracle property of some of our methods are established. In the later part of the thesis, we include results from some simulation studies in order to assess the performance of our proposed methods. Also, we apply our methods to three data sets. Lastly, some drawbacks and potential future work are discussed.
31

Kelbick, Nicole DePriest. "Detecting underlying emotional sensitivity in bereaved children via a multivariate normal mixture distribution". Columbus, Ohio : Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc%5fnum=osu1064331329.

Thesis (Ph. D.)--Ohio State University, 2003.
32

Castillo, Madeleine Rocio Medrano. "Estimador de estado e parâmetros de linha de transmissão, baseado nas equações normais". Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-12122006-144603/.

Abstract:
O processo de estimação de estado em sistemas elétricos de potência está sujeito a três tipos de erros: erros nas medidas analógicas (erros grosseiros); erros devido a informações erradas quanto aos estados de chaves e/ou disjuntores (erros topológicos) e erros causados por informações erradas de algum parâmetro do sistema (erros de parâmetros). É drástico o efeito de erros de parâmetros, para o processo de estimação de estado, normalmente intolerável, sendo, entretanto, menos evidente que os erros grosseiros e topológicos. Aproveitando o fato de que certas medidas não sofrem mudanças significativas de valor, durante um determinado intervalo de tempo, propõe-se uma metodologia para estimação de estado e parâmetros de linhas de transmissão. Na metodologia proposta, que se baseia nas equações normais, o vetor de estado convencional é aumentado para a inclusão dos parâmetros a serem estimados. Este vetor de estado aumentado é então estimado através de uma grande quantidade de medidas, obtidas em diversas amostras, durante um intervalo de tempo em que as variáveis de estado do sistema não tenham sofrido alterações significativas de valor. Esta situação ocorre tipicamente à noite, fora dos horários de pico. Propõe-se também uma metodologia para análise de observabilidade para o estimador proposto. Para comprovar a eficiência das metodologias propostas, vários testes foram realizados, utilizando os sistemas de 6, 14 e 30 barras do IEEE.
The process of power system state estimation is subject to three types of errors: errors in analog measurements (gross errors), incorrect information about the status of switching devices (topology errors) and incorrect information about the model of the system's equipment (parameter errors). The effects of parameter errors on the process of power system state estimation are drastic, yet less evident to detect than gross and topology errors. Taking advantage of the fact that a certain fraction of the measurements varies over a small range in a certain period of time, a methodology to estimate transmission line parameters and the system state based on the normal equations is proposed. In this methodology, the traditional state vector is expanded to include the parameters to be estimated. This augmented state vector is estimated through a large collection of measurements, recorded within several snapshots of the power system, during which the actual system state varies over a small range. This situation typically occurs during the night off-peak periods. An observability analysis methodology is also proposed for the presented estimator. To prove the efficiency of the methodologies, several tests were performed using the IEEE 6-, 14- and 30-bus systems.
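The computational core of a normal-equations state estimator is the weighted least-squares solve x_hat = (H^T W H)^{-1} H^T W z; in the approach described above, H additionally carries columns for the line parameters appended to the state vector. The snippet below shows only that linearized core, not the iterative nonlinear estimator of the thesis; all names are illustrative.

```python
import numpy as np

def wls_normal_equations(H, z, weights):
    """One weighted least-squares solve via the normal equations.
    H: (m, n) measurement Jacobian (possibly augmented with parameter columns),
    z: (m,) measurement (or residual) vector, weights: (m,) inverse variances."""
    W = np.diag(np.asarray(weights, dtype=float))
    gain = H.T @ W @ H                       # gain matrix of the normal equations
    return np.linalg.solve(gain, H.T @ W @ z)
```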
33

Kelly, John Kip. "Estimation of Behavioral Thresholds in Normal Hearing Listeners Using Auditory Steady State Responses". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1237559225.

34

Holmqvist, Niclas. "HANDHELD LIDAR ODOMETRY ESTIMATION AND MAPPING SYSTEM". Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41137.

Abstract:
Ego-motion sensors are commonly used for pose estimation in Simultaneous Localization And Mapping (SLAM) algorithms. Inertial Measurement Units (IMUs) are popular sensors but suffer from integration drift over longer time scales. To remedy the drift they are often used in combination with additional sensors, such as a LiDAR. Pose estimation is used when scans produced by these additional sensors are being matched. The matching of scans can be computationally heavy, as one scan can contain millions of data points. Methods exist to simplify the problem of finding the relative pose between sensor data, such as the Normal Distributions Transform (NDT) SLAM algorithm. The algorithm separates the point cloud data into a voxel grid and represents each voxel as a normal distribution, effectively decreasing the amount of data points. Registration is based on an objective function which converges to a minimum; sub-optimal conditions can cause it to converge at a local minimum. To remedy this problem, this thesis explores the benefits of combining IMU sensor data to estimate the pose used in the NDT SLAM algorithm.
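For orientation, a minimal Python sketch of the NDT representation step described above (binning a point cloud into voxels and summarizing each voxel by a mean and covariance); the voxel size, the minimum point count and the synthetic cloud are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from collections import defaultdict

def ndt_voxels(points: np.ndarray, voxel_size: float = 1.0):
    """Return {voxel index: (mean, covariance)} for voxels with enough points."""
    cells = defaultdict(list)
    for p in points:
        idx = tuple(np.floor(p / voxel_size).astype(int))
        cells[idx].append(p)
    voxels = {}
    for idx, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:                          # need a few points for a stable covariance
            mean = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-6 * np.eye(3)  # regularize near-singular cells
            voxels[idx] = (mean, cov)
    return voxels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.normal(scale=3.0, size=(5000, 3))   # stand-in for a LiDAR scan
    grid = ndt_voxels(cloud, voxel_size=1.0)
    print(f"{len(grid)} voxels summarized as normal distributions")
```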
35

Segerfors, Ted. "Spurious Heavy Tails". Thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168199.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Since the financial crisis that started in 2007, risk awareness in the financial sector is greater than ever. Financial institutions such as banks and insurance companies are heavily regulated in order to create a harmonious and resilient global economic environment. Sufficiently large capital buffers may protect institutions from bankruptcy due to adverse financial events leading to an undesirable outcome for the company. In many regulatory frameworks, the institutions are obliged to estimate high quantiles of their loss distributions. This is relatively unproblematic when large samples of relevant historical data are available. Serious statistical problems appear when only small samples of relevant data are available. One possible solution is to pool two or more samples that appear to have the same distribution, in order to create a larger sample. This thesis identifies the advantages and risks of pooling small samples. For some mixtures of normally distributed samples that appear to have the same variance, the pooled data may indicate heavy tails. Since a finite mixture of normally distributed samples has light tails, this is an example of spurious heavy tails. Even though two samples may appear to have the same distribution function, it is not necessarily better to pool them in order to obtain a larger sample size with the aim of more accurate quantile estimation. For two normally distributed samples of sizes m and n and standard deviations s and v, we find that when v/s is approximately 2, n+m is less than 100 and m/(m+n) is approximately 0.75, there is a considerable risk of believing that the two samples have equal variance and that the pooled sample has heavy tails.
Efter den finansiella krisen som hade sin start 2007 har riskmedvetenheten inom den finansiella sektorn ökat. Finansiella institutioner så som banker och försäkringsbolag är noga reglerade och kontrollerade för att skapa en stark och stabil världsekonomi. Genom att banker och försäkringsbolag enligt regelverken måste ha kapitalbuffertar som ska skydda mot konkurser vid oväntade och oönskade händelser skapas en mer harmonisk finansiell marknad. Dessa regelverk som institutionerna måste följa innebär ofta att de ansvariga måste skatta höga kvantiler av institutionens förväntade förlustfunktion. Att skapa en pålitlig modell och sedan skatta höga kvantiler är lätt när det finns mycket relevant data tillgänglig. När det inte finns tillräckligt med historisk data uppkommer statistiska problem. En lösning på problemet är att poola två eller flera grupper av data som ser ut att komma från samma fördelningsfunktion för att på så sätt skapa en större grupp med historisk data tillgänglig. Detta arbete går igenom fördelar och risker med att poola data när det inte finns tillräckligt med relevant historisk data för att skapa en pålitlig modell. En viss mix av normalfördelade datagrupper som ser ut att ha samma varians kan uppfattas att komma från tungsvansade fördelningar. Eftersom normalfördelningen inte är en tungsvansad fördelning kan denna missuppfattning skapa problem; detta är ett exempel på falska tunga svansar. Även fast två datagrupper ser ut att komma från samma fördelningsfunktion så är det inte nödvändigtvis bättre att poola dessa grupper för att skapa ett större urval. För två normalfördelade datagrupper med storlekarna m och n och standardavvikelserna s och v är det farligaste scenariot när v/s är ungefär 2, n+m är mindre än 100 och m/(m+n) är ungefär 0.75. När detta inträffar finns det en signifikant risk att de två datagrupperna ser ut att komma från samma fördelningsfunktion och att den poolade datan innehar tungsvansade egenskaper.
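A small Python illustration of the mechanism discussed above, with sample sizes and a standard-deviation ratio in the range highlighted by the abstract; it is a sketch of the phenomenon, not a reproduction of the thesis study.

```python
import numpy as np
from scipy import stats

# Illustrative simulation: pool two normal samples whose standard deviations differ by a
# factor of roughly 2, then inspect the equal-variance test and the pooled-sample kurtosis.
rng = np.random.default_rng(2)
n_total, frac = 80, 0.75                 # n + m < 100, m/(m+n) approximately 0.75
m = int(frac * n_total)
n = n_total - m
s, v = 1.0, 2.0                          # v/s approximately 2

sample_a = rng.normal(scale=s, size=m)
sample_b = rng.normal(scale=v, size=n)
pooled = np.concatenate([sample_a, sample_b])

# Two-sided F-test p-value for equal variances (outcome depends on the draw).
f_stat = sample_b.var(ddof=1) / sample_a.var(ddof=1)
p_equal_var = 2 * min(stats.f.sf(f_stat, n - 1, m - 1), stats.f.cdf(f_stat, n - 1, m - 1))
# Positive excess kurtosis of the pooled sample suggests heavier-than-normal tails.
print("two-sided p-value for equal variances:", round(p_equal_var, 3))
print("excess kurtosis of pooled sample:", round(stats.kurtosis(pooled), 2))
```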
36

Kelly, J. Kip. "Estimation of behavioral hearing thresholds in normal hearing listeners using auditory steady state responses". Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1237559225.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Rivasseau-Jonveaux, Marie-Thérèse. "Appropriation de la temporalité au cours du vieillissement normal et pathologique". Thesis, Nancy 2, 2010. http://www.theses.fr/2010NAN21024/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Pour approcher les altérations de la temporalité dans la maladie d'Alzheimer, nous étudions une population de référence et 22 patients en phase légère à modérément sévère, avec une échelle sémantique de temporalité et une tâche d'estimation de durée d'actions de la vie quotidienne clinique et pour concevoir un paradigme d'exploration en IRMf. Une analyse des questions-réponses d'entretiens à propos du récit d'une journée, a été effectuée chez 6 de ces patients avec un suivi évolutif. Les résultats en sémantique de la temporalité sont influencés par âge et niveau culturel, indépendants du genre et significativement altérés chez les patients. Les connaissances sur la segmentation temporelle, indépendantes du niveau culturel, sont les moins atteintes. Nous conjecturons que l'estimation de durée de tâches de la vie quotidienne préservée ferait appel à la mémoire procédurale. L'analyse de contenu des entretiens montre l'atteinte des repères chronologiques et de manière plus marquée celle de l'estimation des durées. Elle éclaire les conduites interlocutoires de fuite mises en oeuvre pour "masquer" le handicap mnémonique et les représentations défaillantes. L'analyse de contenu des entretiens se révèle constituer une méthode adaptée pour comprendre en profondeur la représentation vécue du temps par le patient et analyser son rapport dans l'interaction avec ses difficultés. Il en découle des perspectives pour le diagnostic et le suivi des altérations de la temporalité. Une démarche raisonnée de réhabilitation cognitive, l'élaboration de stratégies d'aide aux patients spécifiques, le tutoring des interventions des proches et des professionnels, peuvent s'appuyer sur ce travail.
To approach the representation of time and perturbations in time perception for daily activities in mild and moderate Alzheimer's disease patients, we studied a temporal semantic knowledge scale and a duration-estimation test for daily activities, in a control group and in a group of 22 patients. The patients also had a general neuropsychological assessment. We conducted a semi-structured interview with 6 of these patients, who were followed up with another similar evaluation. We created a paradigm with the same material to explore the brain areas involved in duration estimation of daily activities. Age and cultural level, but not gender, influence the semantic knowledge scale, which is significantly impaired in patients. Temporal segmentation, independent of cultural level, appears to be the best preserved acquisition. There is no difference in the duration-estimation test for daily activities between controls and patients, who seem to use preserved procedural memory processes connected to knowledge of action scripts. The analysis of the interviews using interlocutory logic gave us a deep understanding of how the patients experience time. Their perception of time is impaired, and duration estimation is significantly more impaired than chronology. The alterations of temporality in Alzheimer's disease have a serious impact on the daily life of patients and their caregivers. The results will be helpful for diagnosis, for elaborating strategies to help the patients, and for tutoring interventions for caregivers and professionals.
38

Anastasiou, Andreas. "Bounds for the normal approximation of the maximum likelihood estimator". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:c078fc46-7ed7-4e02-9a68-4608acba4bd2.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The asymptotic normality of the maximum likelihood estimator (MLE) under regularity conditions is a long established and famous result. This is a qualitative result and the assessment of such a normal approximation is our main interest. For this task we partly use Stein's method, which is a probabilistic technique that can be used to explicitly measure the distributional distance between two distributions. Since its first appearance in 1972, the method has been developed for various distributions; here we use the results related to Stein's method for normal approximation. In this thesis, we derive explicit upper bounds on the distributional distance between the distribution of the MLE and the normal distribution. First, the focus is on independent and identically distributed random variables from both discrete and continuous single-parameter distributions with particular attention to exponential families. For discrete distributions, the case where the MLE can be on the boundary of the parameter space is treated through a perturbation approach, which allows us to obtain bounds on the distributional distance of interest. The bounds are of order n^(-0.5), where n is the number of observations. Simulation-based results are given to illustrate the power of the bound. Furthermore, often the MLE can not be obtained analytically and optimisation methods (such as the Newton-Raphson algorithm) are used. Even in such cases, order n^(-0.5) bounds are given for the distributional distance related to the MLE. The case of multi-parameter distributions follows smoothly after the detailed discussion related to a scalar parameter. Apart from extending our approach to a multi-parameter setting, we also cover the case of independent but not necessarily identically distributed (i.n.i.d.) random vectors with specific focus on the widely applicable linear regression models. Going back to the single-parameter setting a different approach to get an upper bound on the distributional distance between the distribution of the MLE and the normal distribution, based on the Delta method, is also developed. The MLE for a Generalised Gamma distribution gives an illustration of the results obtained through this Delta method approach. Finally, we relax the independence assumption and results for the case of locally dependent random variables are obtained. An example of correlated sums of normally distributed random variables illustrates the bounds. Again, results that do not require an analytic expression of the MLE to be known are given. We end this thesis with ideas currently in progress and further open research questions.
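As a purely empirical companion to the analytical bounds described above, a short Python simulation (an illustration under stated assumptions, not Stein's method) shows the standardized MLE of an exponential rate approaching the standard normal as n grows.

```python
import numpy as np
from scipy import stats

# For an exponential model the standardized MLE, sqrt(n) * (lambda_hat - lambda) / lambda,
# is approximately standard normal, and the empirical Kolmogorov distance to N(0,1)
# shrinks as n grows (consistent with an O(n^(-0.5)) bound).
rng = np.random.default_rng(3)
lam, n_rep = 2.0, 5000

for n in (10, 40, 160, 640):
    samples = rng.exponential(scale=1.0 / lam, size=(n_rep, n))
    mle = 1.0 / samples.mean(axis=1)                   # MLE of the rate parameter
    standardized = np.sqrt(n) * (mle - lam) / lam       # uses Fisher information 1/lambda^2
    ks_distance = stats.kstest(standardized, "norm").statistic
    print(f"n = {n:4d}   Kolmogorov distance to N(0,1) = {ks_distance:.3f}")
```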
39

Maniparambil-Eapen, Abraham. "The Impact of Non-Reading Language Performance on the Estimation of Premorbid IQ among Normal Elderly Individuals". Wright State University Professional Psychology Program / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wsupsych1309450983.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Uchiyama, Hiroyuki, Tomokazu Takahashi, Ichiro Ide e Hiroshi Murase. "Frame Registration of In-vehicle Normal Camera with Omni-directional Camera for Self-position Estimation". IEEE, 2008. http://hdl.handle.net/2237/12052.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Tazhibi, Mehdi. "Estimation of the parameters in the truncated normal distribution when the truncation point is known". Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242267.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Dhasarathy, Deepak. "Estimation of vertical load on a tire from contact patch length and its use in vehicle stability control". Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/33559.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The vertical load on a moving tire was estimated by using accelerometers attached to the inner liner of a tire. The acceleration signal was processed to obtain the contact patch length created by the tire on the road surface. Then an appropriate equation relating the patch length to the vertical load is used to calculate the load. In order to obtain the needed data, tests were performed on a flat-track test machine at the Goodyear Innovation Center in Akron, Ohio; tests were also conducted on the road using a trailer setup at the Intelligent Transportation Laboratory in Danville, Virginia. During the tests, a number of different loads were applied; the tire-wheel setup was run at different speeds with the tire inflated to two different pressures. Tests were also conducted with a camber applied to the wheel. An algorithm was developed to estimate load using the collected data.

It was then shown how the estimated load could be used in a control algorithm that applies a suitable control input to maintain the yaw stability of a moving vehicle. A two degree of freedom bicycle model was used for developing the control strategy. A linear quadratic regulator (LQR) was designed for the purpose of controlling the yaw rate and maintaining vehicle stability.
Master of Science
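To illustrate the control part described above, a minimal Python sketch of an LQR design for a linear two-degree-of-freedom bicycle model follows; all vehicle parameters and weights are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Minimal LQR sketch for a linear 2-DOF bicycle model; parameter values are illustrative.
m, Iz = 1500.0, 2500.0          # mass [kg], yaw inertia [kg m^2]
lf, lr = 1.2, 1.4               # distances from CG to front/rear axle [m]
Cf, Cr = 8.0e4, 9.0e4           # front/rear cornering stiffness [N/rad]
Vx = 20.0                       # longitudinal speed [m/s]

# States: [lateral velocity v_y, yaw rate r]; input: front steering angle.
A = np.array([
    [-(Cf + Cr) / (m * Vx), (Cr * lr - Cf * lf) / (m * Vx) - Vx],
    [(Cr * lr - Cf * lf) / (Iz * Vx), -(Cf * lf**2 + Cr * lr**2) / (Iz * Vx)],
])
B = np.array([[Cf / m], [Cf * lf / Iz]])

Q = np.diag([1.0, 10.0])        # penalize yaw-rate error more heavily
R = np.array([[50.0]])          # penalize steering effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal state-feedback gain: delta = -K @ x
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```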

43

MONTEIRO, Michelle Aparecida Corrêa. "Distribuição normal de Kumaraswamy bivariada". Universidade Federal de Alfenas, 2015. https://bdtd.unifal-mg.edu.br:8443/handle/tede/833.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
A distribuição normal é a mais importante distribuição de probabilidade, usada na modelagem de dados contínuos. Entretanto, há casos em que a suposição da distribuição relacionada ao modelo normal é violada e a busca por outras distribuições que modelem esses casos se faz necessária. Um dos pontos que pode justificar a ausência de normalidade é a falta de simetria. Uma distribuição que tem como principal característica modelar dados de comportamento assimétrico é a Kumaraswamy. A junção da flexibilidade de modelar dados assimétricos da distribuição de Kumaraswamy com distribuições conhecidas, tais como normal e Weibull, permitiu a criação de uma família de distribuições generalizadas. As distribuições multivariadas destacam-se pela importância de aplicações na modelagem de dados em diversas áreas do conhecimento. No entanto, observa-se a existência de poucas distribuições que modelem caudas mais pesadas e situações de assimetria. Este trabalho teve como objetivo estudar a classe de distribuições generalizadas de Kumaraswamy, deduzir a distribuição normal de Kumaraswamy bivariada, apresentar a função de verossimilhança e as expressões de seus estimadores. Implementou-se o procedimento de estimação com uso das funções escores no software R e uma abordagem de simulação. Foram avaliadas a estimação de dados simulados e também a aplicação em exemplos reais com distribuição assimétrica. Conclui-se, portanto, que a distribuição normal de Kumaraswamy bivariada foi deduzida em relação à sua função de densidade conjunta, marginais, condicionais e implementada para o estudo de simulação. Os estimadores comportaram-se de maneira precisa, consistente e não tendenciosa. A distribuição normal de Kumaraswamy bivariada se ajustou satisfatoriamente aos dados reais de temperatura média e precipitação total.
The normal distribution is the most important probability distribution, used in modeling continuous data. However, there are cases where the assumption of normality is violated, and the search for other distributions to model these cases becomes necessary. One feature that can explain the absence of normality is the lack of symmetry. A distribution whose main characteristic is the modeling of asymmetric data is the Kumaraswamy distribution. Combining the flexibility of the Kumaraswamy distribution for asymmetric data with known distributions, such as the normal and the Weibull, enabled the creation of a family of generalized distributions. Multivariate distributions stand out for the importance of their applications in data modeling in various fields of knowledge. However, few distributions model heavier tails and asymmetric situations. This study aimed to examine the class of Kumaraswamy generalized distributions, to derive the bivariate Kumaraswamy normal distribution, and to present the likelihood function and the expressions of its estimators. The estimation procedure was implemented using the score functions in the R software, together with a simulation approach. Estimation was evaluated on simulated data and also in real examples with asymmetric distributions. It can therefore be concluded that the bivariate Kumaraswamy normal distribution was derived in terms of its joint, marginal and conditional density functions and implemented for the simulation study. The estimators behaved precisely, consistently and without bias. The bivariate Kumaraswamy normal distribution fitted the real data on average temperature and total precipitation satisfactorily.
Programa Institucional de Bolsas de Pós-Graduação - PIB-PÓS
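For readers unfamiliar with the Kumaraswamy generalized family mentioned above, a small Python sketch of the univariate Kumaraswamy-normal density, the building block of the family, follows; the bivariate construction of the thesis is not reproduced and the shape parameters chosen are arbitrary.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Kumaraswamy-G family density: f(x) = a*b*g(x)*G(x)^(a-1)*(1 - G(x)^a)^(b-1), a, b > 0.
# With G the normal CDF and g the normal pdf this gives the Kumaraswamy-normal density;
# the shape parameters a and b control skewness and tail weight.
def kw_normal_pdf(x, a, b, mu=0.0, sigma=1.0):
    g = norm.pdf(x, loc=mu, scale=sigma)
    G = norm.cdf(x, loc=mu, scale=sigma)
    return a * b * g * G**(a - 1) * (1.0 - G**a)**(b - 1)

# Sanity check: the density integrates to one for any valid shape parameters.
total, _ = quad(kw_normal_pdf, -np.inf, np.inf, args=(2.0, 3.0))
print("integral of Kw-normal(a=2, b=3) density:", round(total, 4))
```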
44

Rodriguez, Joan Neylo da Cruz. "Estimação em modelos funcionais com erro normais e repetições não balanceadas". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-19022013-171605/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Esta dissertação compreende um estudo da eficiência de estimadores dos parâmetros no modelo funcional com erro nas variáveis, com repetições para contornar o problema de falta de identificação. Nela, discute-se os procedimentos baseados nos métodos de máxima verossimilhança e escore corrigido. As estimativas obtidas pelos dois métodos levam a resultados similares.
This work is concerned with a study of the efficiency of parameter estimates in the functional linear relationship with constant variances, where the lack of identification is resolved by considering replications. Estimation is dealt with by using maximum likelihood and the corrected score approach. Comparisons between the approaches are illustrated using simulated data.
45

Sharpnack, James. "Graph Structured Normal Means Inference". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/246.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis addresses statistical estimation and testing of signals over a graph when measurements are noisy and high-dimensional. Graph structured patterns appear in applications as diverse as sensor networks, virology in human networks, congestion in internet routers, and advertising in social networks. We develop asymptotic guarantees on the performance of statistical estimators and tests, by stating conditions for consistency in terms of properties of the graph (e.g. graph spectra). The goal of this thesis is to demonstrate theoretically that by exploiting the graph structure one can achieve statistical consistency in extremely noisy conditions. We begin with the study of a projection estimator called Laplacian eigenmaps, and find that eigenvalue concentration plays a central role in the ability to estimate graph structured patterns. We continue with the study of the edge lasso, a least squares procedure with a total variation penalty, and determine combinatorial conditions under which changepoints (edges across which the underlying signal changes) on the graph are recovered. We then shift focus to testing for anomalous activations in the graph, using relaxations of the scan statistic: the spectral scan statistic and the graph ellipsoid scan statistic. We also show how one can form a decomposition of the graph from a spanning tree, which leads to a test for activity in the graph. This leads to the construction of a spanning tree wavelet basis, which can be used to localize activations on the graph.
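A minimal Python sketch of a projection estimator in the spirit of the Laplacian eigenmaps approach described above: a noisy signal on a ring graph is projected onto the smoothest Laplacian eigenvectors. The graph, the signal and the number of retained eigenvectors are illustrative assumptions, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60

# Ring graph: smooth signals vary slowly along the ring.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                     # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)               # eigenvectors in ascending eigenvalue order
signal = np.sin(2 * np.pi * np.arange(n) / n)      # smooth (low-frequency) pattern
noisy = signal + rng.normal(scale=0.5, size=n)

k = 6                                              # keep the k smoothest eigenvectors
U = eigvecs[:, :k]
estimate = U @ (U.T @ noisy)                       # projection estimator

print("noisy MSE:    ", round(np.mean((noisy - signal) ** 2), 3))
print("projected MSE:", round(np.mean((estimate - signal) ** 2), 3))
```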
46

Perbal-Hatif, Séverine. "Estimation du temps, vitesse de traitement de l'information et mémoire : approche neuropsychologique". Paris 6, 2002. http://www.theses.fr/2002PA066291.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Reinhammar, Ragna. "Estimation of Regression Coefficients under a Truncated Covariate with Missing Values". Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385672.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
By means of a Monte Carlo study, this paper investigates the relative performance of Listwise Deletion, the EM-algorithm and the default algorithm in the MICE-package for R (PMM) in estimating regression coefficients under a left truncated covariate with missing values. The intention is to investigate whether the three frequently used missing data techniques are robust against left truncation when missing values are MCAR or MAR. The results suggest that no technique is superior overall in all combinations of factors studied. The EM-algorithm is unaffected by left truncation under MCAR but negatively affected by strong left truncation under MAR. Compared to the default MICE-algorithm, the performance of EM is more stable across distributions and combinations of sample size and missing rate. The default MICE-algorithm is improved by left truncation but is sensitive to missingness pattern and missing rate. Compared to Listwise Deletion, the EM-algorithm is less robust against left truncation when missing values are MAR. However, the decline in performance of the EM-algorithm is not large enough for the algorithm to be completely outperformed by Listwise Deletion, especially not when the missing rate is moderate. Listwise Deletion might be robust against left truncation but is inefficient.
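A small Python sketch of the kind of Monte Carlo setup described above, restricted to Listwise Deletion under MCAR with a left-truncated covariate; the truncation point, missing rate and sample sizes are illustrative assumptions, not the study's design.

```python
import numpy as np

# Simulate a left-truncated covariate with MCAR missing values, then estimate the
# regression coefficient by OLS after Listwise Deletion (the simplest of the three
# techniques compared in the thesis).
rng = np.random.default_rng(5)
beta0, beta1, n_rep, n = 1.0, 2.0, 2000, 200
estimates = []

for _ in range(n_rep):
    x = rng.normal(size=4 * n)
    x = x[x > -0.5][:n]                      # left truncation of the covariate
    y = beta0 + beta1 * x + rng.normal(size=n)
    miss = rng.random(n) < 0.3               # 30% MCAR missingness in x
    keep = ~miss                             # listwise deletion keeps complete cases only
    X = np.column_stack([np.ones(keep.sum()), x[keep]])
    coef, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    estimates.append(coef[1])

print("mean estimate of beta1:", round(np.mean(estimates), 3), "(true value 2.0)")
```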
48

Hattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades". Diss., CLICK HERE for online access, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Praxedes, Kelvin da Cruz. "Estudo de modelos de banda larga para estimativa da irradiação direta normal". PROGRAMA DE PÓS-GRADUAÇÃO EM ENGENHARIA MECÂNICA, 2017. https://repositorio.ufrn.br/jspui/handle/123456789/24419.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
O presente estudo tem por principal objetivo estudar doze modelos de banda larga para céu limpo para estimativa de Irradiação Direta Normal (IDN), e escolher o mais adequado para as condições climáticas da cidade de Natal-RN. Para tal, fez-se necessária uma revisão acerca dos principais métodos de estimativa e medição da IDN para determinar qual o melhor entre esses, e utilizá-lo na pesquisa. Em seguida estimou-se o valor do coeficiente de turbidez de Angstrom (β), utilizando um modelo matemático de parametrização proposto por Louche et al. (1987). Para a estimativa de β e o cálculo dos valores de IDN dos modelos utilizados no trabalho, tomou-se como base dados da estação meteorológica do CTGÁS-ER, localizada em Natal-RN (altitude 84 m; latitude 5,82°S, longitude 35,23°W). A estação solar também forneceu dados de IDN mensurados a partir de um pireliômetro, onde foi possível comparar os resultados obtidos a partir dos modelos com os mensurados, e verificar a viabilidade desses modelos para as condições climáticas da região estudada. O valor de β é apresentado neste trabalho em médias mensais. O resultado obtido segue em partes o padrão esperado, que é de menores valores de β nos meses de inverno, e maiores valores de β nos meses de verão. Os meses de Dezembro e Janeiro destoam desse padrão, devido principalmente à quantidade de chuvas incidente nesses meses ser superior em mais de 40% para o mês de Janeiro, e de 200% para o mês de Dezembro, quando comparado com a média dos últimos dez anos. Os valores mínimos e máximos apresentados para β são de 0,094 em Julho e de 0,128 em Março. Essa proximidade de valores é decorrente de não haver uma variação considerável no valor de espessura de precipitação de água ao longo do ano, principalmente pelo fato da temperatura média em Natal ser praticamente a mesma ao longo de todo o ano. Com relação aos modelos de banda larga, estes foram divididos em dois grupos para análise, considerando a quantidade de parâmetros de entrada. Os que precisam de quatro ou menos parâmetros são considerados modelos simples, e os que precisam de mais de quatro parâmetros, modelos complexos. Para validação dos resultados encontrados, utilizou-se seis métodos estatísticos, o Erro Médio Bias (MBE), a Raiz Quadrática do Erro Médio (RMSE), o Teste t-estatístico (TT), a Incerteza Expandida (U95), o Coeficiente de Correlação de Pearson (r) e o Coeficiente de Determinação (R2). Os modelos foram classificados e rankeados de acordo com o desempenho de cada um dos métodos estatísticos, calculados através do Indicador de Performance Global (IPG). Os resultados obtidos mostram que, para a cidade de Natal, a quantidade de parâmetros não determina o desempenho dos modelos, já que os modelos simples obtiveram um dos três melhores resultados, e também os dois piores.
The main objective of the present study is to examine twelve clear-sky broadband models for estimating Direct Normal Irradiation (IDN) and to choose the most suitable one for the climatic conditions of the city of Natal-RN. To this end, a review of the main methods of estimating and measuring IDN was carried out to determine the best among them and use it in the research. The value of the Angstrom turbidity coefficient (β) was then estimated using a mathematical parameterization model proposed by Louche et al. (1987). For the estimation of β and the calculation of the IDN values of the models used in the study, data from the CTGÁS-ER meteorological station, located in Natal-RN (altitude 84 m, latitude 5.82°S, longitude 35.23°W), were used. The solar station also provided IDN data measured with a pyrheliometer, making it possible to compare the results obtained from the models with the measured values and to verify the viability of these models for the climatic conditions of the studied region. The value of β is presented in this work as monthly averages. The result partly follows the expected pattern of lower values of β in the winter months and higher values of β in the summer months. The months of December and January depart from this pattern, mainly because the rainfall in these months was more than 40% higher in January and 200% higher in December compared to the average of the last ten years. The minimum and maximum values found for β are 0.094 in July and 0.128 in March. This closeness of values is due to the fact that there is no considerable variation in precipitable water over the year, mainly because the average temperature in Natal is practically the same throughout the year. The broadband models were divided into two groups for analysis, according to the number of input parameters: those that need four or fewer parameters are considered simple models, and those that need more than four parameters, complex models. To validate the results, six statistical metrics were used: the Mean Bias Error (MBE), the Root Mean Square Error (RMSE), the t-statistic (TT), the Expanded Uncertainty (U95), the Pearson Correlation Coefficient (r) and the Coefficient of Determination (R2). The models were classified and ranked according to their performance on each of the statistical metrics, combined through the Global Performance Indicator (GPI). The results show that, for the city of Natal, the number of parameters does not determine the performance of the models, since simple models obtained one of the three best results and also the two worst ones.
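A short Python sketch of several of the validation metrics named above (MBE, RMSE, Stone's t-statistic, r, R2), applied to synthetic DNI data; U95 and the GPI aggregation are omitted, and the t-statistic formula is the one commonly used in radiation-model validation, assumed here rather than taken from the thesis.

```python
import numpy as np

def validation_metrics(measured, estimated):
    measured, estimated = np.asarray(measured, float), np.asarray(estimated, float)
    diff = estimated - measured
    n = diff.size
    mbe = diff.mean()                                  # Mean Bias Error
    rmse = np.sqrt(np.mean(diff**2))                   # Root Mean Square Error
    # Stone-style t-statistic, commonly used for radiation-model validation (assumption).
    tt = np.sqrt((n - 1) * mbe**2 / (rmse**2 - mbe**2))
    r = np.corrcoef(measured, estimated)[0, 1]         # Pearson correlation coefficient
    ss_res = np.sum((measured - estimated) ** 2)
    r2 = 1.0 - ss_res / np.sum((measured - measured.mean()) ** 2)  # coefficient of determination
    return {"MBE": mbe, "RMSE": rmse, "TT": tt, "r": r, "R2": r2}

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    dni_measured = rng.uniform(300.0, 900.0, size=365)             # W/m^2, synthetic values
    dni_model = dni_measured * 0.97 + rng.normal(scale=30.0, size=365)
    print(validation_metrics(dni_measured, dni_model))
```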
50

Karlsson, Emil. "Explicit Estimators for a Banded Covariance Matrix in a Multivariate Normal Distribution". Thesis, Linköpings universitet, Matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-107194.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The problem of estimating the mean and covariances of a multivariate normally distributed random vector has been studied in many forms. This thesis focuses on the estimators proposed in [15] for a banded covariance structure with m-dependence. It presents the previous results for the estimator and rewrites the estimator when m = 1, thus making it easier to analyze. This leads to an adjustment, and a proposition for an unbiased estimator can be presented. A new and easier proof of consistency is then presented. This theory is later generalized to a general linear model, where the corresponding theorems and propositions are established for unbiasedness and consistency. In the last chapter some simulations with the previous and new estimator verify that the theoretical results indeed make an impact.
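An illustrative Python sketch of a simple banding approach to covariance estimation under 1-dependence (keep the diagonal and first off-diagonals of the sample covariance); this is a generic banding estimator, not the explicit estimator from [15] studied in the thesis.

```python
import numpy as np

def banded_covariance(X: np.ndarray, m: int = 1) -> np.ndarray:
    """Sample covariance of the rows of X, banded to bandwidth m."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= m
    return S * mask

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    p, n = 6, 500
    # True covariance: tridiagonal, i.e. a 1-dependent structure.
    true_cov = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    S_banded = banded_covariance(X, m=1)
    print("max absolute error:", np.abs(S_banded - true_cov).max().round(3))
```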

Vai alla bibliografia