Dissertations on the topic "Robust fitting"

Consult the top 18 dissertations for your research on the topic "Robust fitting".


1

Xing, Yanru. "Robust mixture regression model fitting by Laplace distribution." Kansas State University, 2013. http://hdl.handle.net/2097/16534.

Full text of the source
Abstract:
Master of Science
Department of Statistics
Weixing Song
A robust estimation procedure for mixture linear regression models is proposed in this report by assuming that the error terms follow a Laplace distribution. An EM algorithm is implemented to carry out the estimation, exploiting the fact that the Laplace distribution is a scale mixture of normals with a latent mixing distribution. The finite-sample performance of the proposed algorithm is evaluated in extensive simulation studies, together with comparisons against other existing procedures in the literature. A sensitivity study based on a real data example is also conducted to illustrate the application of the proposed method.
APA, Harvard, Vancouver, ISO, and other citation styles
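As a purely illustrative sketch of the idea in this abstract (not code from the report): for a single-component regression, the Laplace maximum-likelihood fit reduces to least absolute deviations, and the EM update under the scale-mixture-of-normals view becomes iteratively reweighted least squares with weights 1/|r_i|. The data below are synthetic.

```python
import numpy as np

def laplace_regression_irls(X, y, n_iter=50, eps=1e-8):
    """Laplace (LAD) regression via iteratively reweighted least squares:
    under the scale-mixture-of-normals view of the Laplace density, the
    EM/IRLS weight for residual r_i is 1/|r_i|."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # guard near-zero residuals
        XtW = X.T * w                             # X^T W
        beta = np.linalg.solve(XtW @ X, XtW @ y)  # weighted normal equations
    return beta

# synthetic line with one gross outlier
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.standard_normal(50)
y[10] += 20.0                                     # corrupt one observation
beta_hat = laplace_regression_irls(X, y)
```

An ordinary least-squares fit would be dragged toward the corrupted point; the Laplace weights shrink its influence to almost nothing.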
2

Truong, Ha-Giang. "Robust fitting: Assisted by semantic analysis and reinforcement learning." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2022. https://ro.ecu.edu.au/theses/2567.

Full text of the source
Abstract:
Many computer vision applications require robust model estimation from a set of observed data. However, these data usually contain outliers, due to imperfect data acquisition or pre-processing steps, which can degrade the performance of conventional model-fitting methods. Robust fitting is thus critical for making model estimation robust against outliers and achieving stable performance. In robust model fitting, maximum consensus is one of the most popular criteria: it aims to estimate the model that is consistent with as many observations as possible, i.e. that obtains the highest consensus. All of the contributions made in this thesis concern maximum consensus, in two respects: non-learning-based approaches and learning-based approaches. The first motivation for our work is the remarkable progress in semantic segmentation in recent years. Semantic segmentation is a useful process and is commonly available in scene understanding, medical image analysis, and virtual reality. We propose novel methods that make use of semantic segmentation to improve the efficiency of two robust non-learning-based algorithms. Another motivation for our contributions is the advance of reinforcement learning. In the thesis, a novel unsupervised learning framework is proposed to learn (without labelled data) to solve robust estimation directly. In particular, we formulate the robust fitting problem as a special case of goal-oriented learning and adopt the reinforcement learning framework as the basis of our approach. Our approach is agnostic to the input features and can be generalized to various practical applications.
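The maximum consensus criterion described above can be illustrated with a minimal randomized hypothesize-and-verify loop for 2-D line fitting. This is a generic RANSAC-style sketch on synthetic data, not one of the methods contributed by the thesis.

```python
import numpy as np

def max_consensus_line(points, n_hyp=500, tol=0.05, rng=None):
    """Randomized hypothesize-and-verify for 2-D line fitting under the
    maximum consensus criterion: sample minimal 2-point subsets, count
    inliers within `tol`, keep the hypothesis with the largest consensus."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    n = len(points)
    for _ in range(n_hyp):
        i, j = rng.choice(n, size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm < 1e-12:
            continue
        nvec = np.array([-d[1], d[0]]) / norm     # unit normal of the line
        resid = np.abs((points - p) @ nvec)       # point-line distances
        inliers = int((resid <= tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (p, nvec)
    return best_model, best_inliers

# 60 points on a line plus 40% uniform outliers
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
inl = np.column_stack([x, 0.5 * x + 0.2 + 0.01 * rng.standard_normal(60)])
out = rng.uniform(0.0, 1.0, size=(40, 2))
pts = np.vstack([inl, out])
model, consensus = max_consensus_line(pts)
```

The returned consensus is the inlier count of the best hypothesis; with 40% outliers it should still recover essentially the whole inlier structure.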
3

Wang, Hanzi. "Robust statistics for computer vision : model fitting, image segmentation and visual motion analysis." Monash University, Dept. of Electrical and Computer Systems Engineering, 2004. http://arrow.monash.edu.au/hdl/1959.1/5345.

Full text of the source
4

Yang, Li. "Robust fitting of mixture of factor analyzers using the trimmed likelihood estimator." Kansas State University, 2014. http://hdl.handle.net/2097/18118.

Full text of the source
Abstract:
Master of Science
Department of Statistics
Weixin Yao
Mixtures of factor analyzers are widely used to cluster high-dimensional data. However, the traditional estimation method is based on normality assumptions on the random terms and is thus sensitive to outliers. In this article, we introduce a robust estimation procedure for mixtures of factor analyzers using the trimmed likelihood estimator (TLE). We use a simulation study and a real data application to demonstrate the robustness of the trimmed estimation procedure and compare it with the traditional normality-based maximum likelihood estimator.
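A minimal sketch of the trimmed-likelihood idea, reduced to a univariate normal for clarity (the thesis applies it to mixtures of factor analyzers). The concentration-step scheme below follows the common FAST-TLE pattern, and the data are synthetic.

```python
import numpy as np

def trimmed_normal_tle(x, trim=0.2, n_iter=20):
    """Concentration-style trimmed likelihood estimation for a univariate
    normal: repeatedly refit (mu, sigma) on the h observations with the
    highest likelihood under the current fit, trimming the rest."""
    h = int(np.ceil((1.0 - trim) * len(x)))
    keep = np.argsort(np.abs(x - np.median(x)))[:h]   # robust start
    for _ in range(n_iter):
        mu, sigma = x[keep].mean(), x[keep].std() + 1e-12
        loglik = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        new_keep = np.argsort(loglik)[-h:]
        if np.array_equal(np.sort(new_keep), np.sort(keep)):
            break                                     # trimmed set is stable
        keep = new_keep
    mu, sigma = x[keep].mean(), x[keep].std()
    return mu, sigma, keep

# 80 clean points plus a 20-point outlier cluster
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 80), rng.normal(10.0, 0.5, 20)])
mu, sigma, kept = trimmed_normal_tle(x, trim=0.25)
```

With a 25% trimming fraction the outlier cluster is discarded entirely, so the fit recovers the clean component, where an untrimmed MLE would be pulled far off.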
5

Mordini, Nicola. "Multicentre study for a robust protocol in single-voxel spectroscopy: quantification of MRS signals by time-domain fitting algorithms." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7579/.

Full text of the source
Abstract:
Magnetic Resonance Spectroscopy (MRS) is an advanced clinical and research application which guarantees a specific biochemical and metabolic characterization of tissues by the detection and quantification of key metabolites for diagnosis and disease staging. The "Associazione Italiana di Fisica Medica (AIFM)" has promoted the activity of the "Interconfronto di spettroscopia in RM" working group. The purpose of the study is to compare and analyze results obtained by performing MRS on scanners from different manufacturers in order to compile a robust protocol for spectroscopic examinations in clinical routine. This thesis is part of that project, using the GE Signa HDxt 1.5 T at Pavilion no. 11 of the S.Orsola-Malpighi hospital in Bologna. The spectral analyses have been performed with the jMRUI package, which includes a wide range of preprocessing and quantification algorithms for signal analysis in the time domain. After quality assurance on the scanner with standard and innovative methods, spectra both with and without suppression of the water peak were acquired on the GE test phantom. The comparison of the ratios of the metabolite amplitudes over Creatine computed by the workstation software, which works in the frequency domain, and by jMRUI shows good agreement, suggesting that quantification in either domain may lead to consistent results. The characterization of an in-house phantom provided by the working group achieved its goal of assessing the solution content and the metabolite concentrations with good accuracy. The soundness of the experimental procedure and data analysis is demonstrated by the correct estimation of the T2 of water, the observed biexponential relaxation curve of Creatine, and the correct TE value at which modulation by J coupling causes the Lactate doublet to be inverted in the spectrum.
The work of this thesis has demonstrated that it is possible to perform measurements and establish protocols for data analysis, based on the physical principles of NMR, which are able to provide robust values for the spectral parameters of clinical use.
6

Willersjö, Nyfelt Emil. "Comparison of the 1st and 2nd order Lee–Carter methods with the robust Hyndman–Ullah method for fitting and forecasting mortality rates." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48383.

Full text of the source
Abstract:
The 1st and 2nd order Lee–Carter methods were compared with the Hyndman–Ullah method with regard to goodness of fit and the ability to forecast mortality rates. Swedish population data from the Human Mortality Database were used. The robust estimation property of the Hyndman–Ullah method was also tested by including the Spanish flu and a hypothetical scenario of the COVID-19 pandemic. After presenting the three methods and making several comparisons between them, it is concluded that the Hyndman–Ullah method is overall superior among the three on the chosen dataset. Its robust estimation of mortality shocks was also confirmed.
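For reference, the first-order Lee–Carter model log m(x,t) = a_x + b_x k_t can be fitted with a rank-1 SVD of the age-centred log-mortality matrix. The sketch below uses synthetic log-mortality data; it is not the thesis code, which also covers the Hyndman–Ullah functional approach.

```python
import numpy as np

def lee_carter_fit(log_m):
    """First-order Lee-Carter fit log m(x,t) = a_x + b_x * k_t via the
    leading SVD component of the age-centred log-mortality matrix,
    with the usual identifiability constraints sum(b)=1, sum(k)=0."""
    a = log_m.mean(axis=1)                       # age pattern a_x
    Z = log_m - a[:, None]
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]
    c = b.sum()
    b, k = b / c, k * c                          # enforce sum(b) = 1
    a, k = a + b * k.mean(), k - k.mean()        # enforce sum(k) = 0
    return a, b, k

# synthetic rank-1 mortality surface plus small noise
rng = np.random.default_rng(7)
ages, years = 10, 30
a_true = np.linspace(-8.0, -2.0, ages)
b_true = np.full(ages, 1.0 / ages)
k_true = np.linspace(5.0, -5.0, years)
log_m = a_true[:, None] + np.outer(b_true, k_true)
log_m = log_m + 0.01 * rng.standard_normal((ages, years))
a_hat, b_hat, k_hat = lee_carter_fit(log_m)
```

Forecasting then amounts to extrapolating the time index k_t, typically with a random walk with drift.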
7

Relvas, Carlos Eduardo Martins. "Modelos parcialmente lineares com erros simétricos autoregressivos de primeira ordem." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-28052013-182956/.

Full text of the source
Abstract:
In this master's dissertation, we present symmetric partially linear AR(1) models, which generalize partially linear models to errors that are autocorrelated with an AR(1) structure and follow a symmetric distribution instead of the normal distribution. Among the symmetric distributions, we can consider distributions with heavier tails than the normal, controlling the kurtosis and down-weighting outlying observations in the estimation process. Parameter estimation is carried out by penalized likelihood, using score functions and the expected Fisher information, all of which are derived in this work. The effective degrees of freedom and asymptotic results are also presented, as well as diagnostic procedures, highlighting the normal curvature of local influence under different perturbation schemes and residual analysis. An application with real data is given as illustration.
8

Messina, Carl J. "Labeled sampling consensus a novel algorithm for robustly fitting multiple structures using compressed sampling." Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4983.

Full text of the source
Abstract:
The ability to robustly fit structures in datasets that contain outliers is a very important task in image processing, pattern recognition and computer vision. Random Sampling Consensus, or RANSAC, is a very popular method for this task, due to its ability to handle over 50% outliers. The problem with RANSAC is that it is only capable of finding a single structure. Therefore, if a dataset contains multiple structures, they must be found sequentially by finding the best fit, removing the points, and repeating the process. However, removing incorrect points from the dataset could prove disastrous. This thesis offers a novel approach to sampling consensus that extends its ability to discover multiple structures in a single iteration through the dataset. The process introduced is an unsupervised method, requiring no prior knowledge of the distribution of the input data. It uniquely assigns labels to different instances of similar structures; the algorithm is thus called Labeled Sampling Consensus, or L-SAC. These unique instances tend to cluster around one another, allowing the individual structures to be extracted using simple clustering techniques. Since divisions instead of modes are analyzed, only a single instance of a structure need be recovered. This ability of L-SAC allows a novel sampling procedure to be presented that "compresses" the number of samples required compared to traditional sampling schemes while ensuring all structures have been found. L-SAC is a flexible framework that can be applied to many problem domains.
ID: 030423298; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (M.S.E.E.)--University of Central Florida, 2011.; Includes bibliographical references (p. 70-72).
M.S.E.E.
Masters
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
9

Laranjeira, Moreira Matheus. "Visual servoing on deformable objects : an application to tether shape control." Electronic Thesis or Diss., Toulon, 2019. http://www.theses.fr/2019TOUL0007.

Full text of the source
Abstract:
This thesis addresses the problem of tether shape control for small remotely operated underwater vehicles (mini-ROVs), which are suitable, thanks to their small size and high maneuverability, for the exploration of shallow waters and cluttered spaces. Managing the tether is, however, a hard task, since these robots do not have enough propulsion power to counterbalance the drag forces acting on the tether cable. In order to cope with this problem, we introduced the concept of a chain of mini-ROVs, where several robots are linked to the tether cable and can, together, manage the external perturbations and control the shape of the cable. We investigated the use of the embedded cameras to regulate the shape of a portion of tether linking two successive robots, a leader and a follower. Only the follower robot deals with the tether shape regulation task; the leader is released to explore its surroundings. The tether linking both robots is assumed to be negatively buoyant and is modeled by a catenary. The tether shape parameters are estimated in real time by a nonlinear optimization procedure that fits the catenary model to the tether points detected in the image. Shape parameter regulation is then achieved through a catenary-based control scheme relating the robot motion to the tether shape variation. The proposed visual servoing control scheme has proved able to properly manage the tether shape in simulations and in real experiments in a pool.
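The catenary-fitting step described in the abstract can be sketched as a small nonlinear least-squares problem: fit y = c(cosh((x-a)/c) - 1) + b to detected tether points. Below is a damped Gauss-Newton fit on synthetic points with assumed parameter values, not the thesis implementation.

```python
import numpy as np

def fit_catenary(x, y, p0, n_iter=50):
    """Least-squares fit of a catenary y = c*(cosh((x-a)/c)-1)+b via
    Gauss-Newton with a forward-difference Jacobian and step halving.
    `p0=(a,b,c)` is the initial guess; a real pipeline would seed it
    from the previous video frame."""
    def model(p):
        a, b, c = p
        return c * (np.cosh((x - a) / c) - 1.0) + b
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y - model(p)
        J = np.empty((x.size, 3))
        for k in range(3):                        # numerical Jacobian
            dp = np.zeros(3); dp[k] = 1e-6
            J[:, k] = (model(p + dp) - model(p)) / 1e-6
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        while (np.linalg.norm(y - model(p + step)) > np.linalg.norm(r)
               and np.linalg.norm(step) > 1e-12):
            step = step / 2.0                     # damp diverging steps
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p

# synthetic sag measurements between two robots (toy values)
xs = np.linspace(-1.0, 1.0, 40)
true = (0.1, -0.3, 0.8)                           # a, b, c
ys = true[2] * (np.cosh((xs - true[0]) / true[2]) - 1.0) + true[1]
ys = ys + 0.001 * np.random.default_rng(3).standard_normal(40)
p_hat = fit_catenary(xs, ys, p0=(0.0, 0.0, 1.0))
```

The fitted parameters (sag, offset, curvature) are exactly the quantities a shape-regulation controller would then drive to their desired values.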
10

Yu, Xinming. "Robust estimation for range image segmentation and fitting." Thesis, 1993. http://spectrum.library.concordia.ca/4144/1/NN84686.pdf.

Full text of the source
Abstract:
In this dissertation a new robust estimation technique for range image segmentation and fitting is developed. The performance of the algorithm is considerably improved by incorporating a genetic algorithm. The new robust estimation method randomly samples range image points and solves the equations determined by these points for the parameters of the selected primitive type. From K samples we measure RESidual Consensus (RESC) to choose the one set of sample points that determines the equation best fitting the largest homogeneous surface patch in the current processing region. The residual consensus is measured by a compressed histogram method which can be used at various noise levels. After obtaining the surface parameters of the best fit and the residuals of each point in the current processing region, a boundary list searching method is used to extract this surface patch from the processing region and avoid further computation. Since the RESC method can tolerate more than 80% outliers, it is a substantial improvement over the least median of squares method. The method segments a range image into planar and quadratic surfaces, and works very well even in smoothly connected curved regions. A genetic algorithm (GA) is used to accelerate the random search. A large number of offline average-performance experiments are carried out to investigate different types of GAs and the influence of their control parameters; a steady-state GA works better than a generational replacement GA. The algorithms have been validated on a large set of synthetic and real range images.
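A toy version of the residual-consensus idea: instead of counting inliers under a fixed threshold, a hypothesis is scored by how much of its residual histogram concentrates in the lowest bins. The scoring below is a deliberate simplification of the thesis's compressed-histogram measure, run on synthetic residuals.

```python
import numpy as np

def resc_score(residuals, n_bins=20, head=3):
    """Toy residual-consensus score: histogram the absolute residuals and
    sum the counts in the lowest `head` bins, ranking hypotheses by how
    sharply their residuals concentrate near zero rather than by a fixed
    inlier threshold."""
    r = np.abs(residuals)
    hist, _ = np.histogram(r, bins=n_bins, range=(0.0, r.max() + 1e-12))
    return int(hist[:head].sum())

rng = np.random.default_rng(6)
# residuals of a good hypothesis: 80% near zero, 20% gross outliers
good = np.concatenate([0.01 * rng.standard_normal(80),
                       rng.uniform(1.0, 2.0, 20)])
# residuals of a poor hypothesis: spread everywhere
bad = rng.uniform(0.0, 2.0, 100)
```

The good hypothesis piles nearly all its mass into the first bins, so it wins regardless of where a hard inlier threshold would have been placed.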
11

Le, Huu Minh. "New algorithmic developments in maximum consensus robust fitting." Thesis, 2018. http://hdl.handle.net/2440/115183.

Full text of the source
Abstract:
In many computer vision applications, the task of robustly estimating the set of parameters of a geometric model is a fundamental problem. Despite the longstanding research efforts on robust model fitting, there remains significant scope for investigation. For a large number of geometric estimation tasks in computer vision, maximum consensus is the most popular robust fitting criterion. This thesis makes several contributions to algorithms for consensus maximization. Randomized hypothesize-and-verify algorithms are arguably the most widely used class of techniques for robust estimation, thanks to their simplicity. Though efficient, these randomized heuristic methods do not guarantee finding good maximum consensus estimates. To improve on randomized algorithms, guided sampling approaches have been developed. These methods take advantage of additional domain information, such as descriptor matching scores, to guide the sampling process. Subsets of the data that are more likely to result in good estimates are prioritized for consideration. However, these guided sampling approaches are ineffective when good domain information is not available. This thesis tackles this shortcoming by proposing a new guided sampling algorithm based on the class of LP-type problems and Monte Carlo Tree Search (MCTS). The proposed algorithm relies on a fundamental geometric arrangement of the data to guide the sampling process. Specifically, we take advantage of the underlying tree structure of the maximum consensus problem and apply MCTS to efficiently search the tree. Empirical results show that the new guided sampling strategy outperforms traditional randomized methods. Consensus maximization also plays a key role in robust point set registration. A special case is the registration of deformable shapes. If the surfaces have the same intrinsic shapes, their deformations can be described accurately by a conformal model.
The uniformization theorem allows the shapes to be conformally mapped onto a canonical domain, wherein the shapes can be aligned using a Möbius transformation. The problem of correspondence-free Möbius alignment of two sets of noisy and partially overlapping point sets can be tackled as a maximum consensus problem. Solving for the Möbius transformation can be approached by randomized voting-type methods, which offer no guarantee of optimality. Local methods such as Iterative Closest Point can be applied, but under the assumption that a good initialization is given; otherwise these techniques may converge to a bad local minimum. When a globally optimal solution is required, the literature has so far considered only brute-force search. This thesis contributes a new branch-and-bound algorithm that solves for the globally optimal Möbius transformation much more efficiently. So far, consensus maximization problems have been approached mainly by randomized algorithms, which are efficient but offer no analytical convergence guarantee. On the other hand, there exist exact algorithms that can solve the problem up to global optimality. The global methods, however, are intractable in general due to the NP-hardness of consensus maximization. To fill the gap between the two extremes, this thesis contributes two novel deterministic algorithms to approximately optimize the maximum consensus criterion. The first method is based on non-smooth penalization supported by a Frank-Wolfe-style optimization scheme, and the second is based on the Alternating Direction Method of Multipliers (ADMM). Both of the proposed methods are capable of handling the non-linear geometric residuals commonly used in computer vision. As demonstrated, our proposed methods consistently outperform other heuristics and approximate methods.
Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Computer Science, 2018
12

Tran, Quoc Huy. "Robust parameter estimation in computer vision: geometric fitting and deformable registration." Thesis, 2014. http://hdl.handle.net/2440/86270.

Full text of the source
Abstract:
Parameter estimation plays an important role in computer vision. Many computer vision problems can be reduced to estimating the parameters of a mathematical model of interest from the observed data. Parameter estimation in computer vision is challenging, since vision data unavoidably have small-scale measurement noise and large-scale measurement errors (outliers) due to imperfect data acquisition and preprocessing. Traditional parameter estimation methods developed in the statistics literature mainly deal with noise and are very sensitive to outliers. Robust parameter estimation techniques are thus crucial for effectively removing outliers and accurately estimating the model parameters with vision data. The research conducted in this thesis focuses on single structure parameter estimation and makes a direct contribution to two specific branches under that topic: geometric fitting and deformable registration. In geometric fitting problems, a geometric model is used to represent the information of interest, such as a homography matrix in image stitching, or a fundamental matrix in three-dimensional reconstruction. Many robust techniques for geometric fitting involve sampling and testing a number of model hypotheses, where each hypothesis consists of a minimal subset of data for yielding a model estimate. It is commonly known that, due to the noise added to the true data (inliers), drawing a single all-inlier minimal subset is not sufficient to guarantee a good model estimate that fits the data well; the inliers therein should also have a large spatial extent. This thesis investigates a theoretical reasoning behind this long-standing principle, and shows a clear correlation between the span of data points used for estimation and the quality of model estimate. 
Based on this finding, the thesis explains why naive distance-based sampling fails as a strategy to maximise the span of all-inlier minimal subsets produced, and develops a novel sampling algorithm which, unlike previous approaches, consciously targets all-inlier minimal subsets with large span for robust geometric fitting. The second major contribution of this thesis relates to another computer vision problem which also requires the knowledge of robust parameter estimation: deformable registration. The goal of deformable registration is to align regions in two or more images corresponding to a common object that can deform nonrigidly such as a bending piece of paper or a waving flag. The information of interest is the nonlinear transformation that maps points from one image to another, and is represented by a deformable model, for example, a thin plate spline warp. Most of the previous approaches to outlier rejection in deformable registration rely on optimising fully deformable models in the presence of outliers due to the assumption of the highly nonlinear correspondence manifold which contains the inliers. This thesis makes an interesting observation that, for many realistic physical deformations, the scale of errors of the outliers usually dwarfs the nonlinear effects of the correspondence manifold on which the inliers lie. The finding suggests that standard robust techniques for geometric fitting are applicable to model the approximately linear correspondence manifold for outlier rejection. Moreover, the thesis develops two novel outlier rejection methods for deformable registration, which are based entirely on fitting simple linear models and shown to be considerably faster but at least as accurate as previous approaches.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2014
13

Wong, Hoi Sim. "A preference analysis approach to robust geometric model fitting in computer vision." Thesis, 2013. http://hdl.handle.net/2440/82075.

Full text of the source
Abstract:
Robust model fitting is a crucial task in numerous computer vision applications, where the information of interest is often expressed as a mathematical model. The goal of model fitting is to estimate the model parameters that "best" explain the observed data. However, robust model fitting is a challenging problem in computer vision, since vision data are (1) unavoidably contaminated by outliers due to imperfections in data acquisition and preprocessing, and (2) often contain multiple instances (or structures) of a model. Many robust fitting methods involve generating hypotheses using random sampling, and then either (1) scoring the hypotheses using a robust criterion or (2) using a mode-seeking or clustering method to elicit the potential structures in the data. Obtaining a good set of hypotheses is crucial for success; however, this is often time-consuming, especially for heavily contaminated data. In addition, many irrelevant hypotheses are unavoidably generated during the sampling process. This frequently becomes an obstacle to accurate estimation, and has been ignored in previous methods. In particular, mode seeking-based fitting methods are very sensitive to the proportion of good and bad hypotheses. This thesis proposes several sampling methods for rapid and effective generation of good hypotheses, and hypothesis filtering methods that remove bad hypotheses for accurate estimation. The techniques developed here can be easily integrated into existing fitting methods to significantly improve fitting accuracy. We also propose a hierarchical fitting method, which recognizes that details in real-life data are organized hierarchically (i.e., large structures cascading down to finer structures). This avoids excessive parameter tuning to obtain a particular fitting result, whereas existing fitting methods often fit data with a single number of structures and permit only one interpretation of the data.
The algorithms in this thesis are motivated by preference (or ranking) analysis, which has been widely used in areas such as information retrieval, artificial intelligence and marketing. Preference analysis provides a sophisticated non-parametric approach to analyzing the data and hypotheses in model fitting problems. The algorithms developed here are shown to be more reliable than previous methods, and to perform well in various vision tasks.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2013
14

Lin, Shih-Hsiang (林士翔). "Exploring the Use of Data Fitting and Clustering Techniques for Robust Speech Recognition." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/62281383807216796339.

Full text of the source
Abstract:
Master's thesis
National Taiwan Normal University
Department of Information and Computer Education
Academic year 95
Speech is the primary and most convenient means of communication between individuals. It is also expected that automatic speech recognition (ASR) will play a more active role and serve as the major human-machine interface between people and different kinds of intelligent electronic devices in the near future. Most current state-of-the-art ASR systems achieve quite high recognition performance in controlled laboratory environments. However, as the systems are moved out of the laboratory and deployed into real-world applications, their performance often degrades dramatically, because varying environmental effects lead to a mismatch between the acoustic conditions of the training and test speech data. Therefore, robustness techniques have received considerable attention in recent years. Robustness techniques generally fall into two categories, depending on whether a method operates on the feature domain or on the corresponding probability distributions; each has its own strengths and limitations. In this thesis, several attempts were made to integrate these two complementary sources of information to improve current speech robustness methods using a novel data-fitting scheme. First, cluster-based polynomial-fit histogram equalization (CPHEQ), based on histogram equalization and polynomial regression, was proposed to directly characterize the relationship between speech feature vectors and their corresponding probability distributions by utilizing stereo speech training data. Moreover, we extended the idea of CPHEQ under more elaborate assumptions and derived two further methods, namely polynomial-fit histogram equalization (PHEQ) and selective cluster-based polynomial-fit histogram equalization (SCPHEQ).
PHEQ uses polynomial regression to efficiently approximate the inverse of the cumulative density functions of speech feature vectors for HEQ; it avoids the high computation cost and large disk storage consumption of traditional HEQ methods. SCPHEQ is based on the missing feature theory and uses polynomial regression to reconstruct unreliable feature components. All experiments were carried out on the Aurora-2 database and task. Experimental results showed that, for clean-condition training, our method achieved a considerable word error rate reduction over the baseline system and also significantly outperformed the other robustness methods.
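The PHEQ idea, approximating the inverse CDF of reference features with a polynomial so that test features can be equalized by rank, can be sketched as follows. The 1-D features are synthetic, and this is an illustration of the principle rather than the thesis implementation.

```python
import numpy as np

def fit_pheq(ref_feats, order=7):
    """PHEQ-style sketch: fit a polynomial u -> F^{-1}(u) approximating
    the inverse CDF of the reference (clean) features, replacing the
    lookup tables of classical histogram equalization."""
    ref = np.sort(np.asarray(ref_feats, float))
    u = (np.arange(ref.size) + 0.5) / ref.size    # empirical CDF levels
    return np.polyfit(u, ref, order)

def apply_pheq(coef, test_feats):
    """Equalize test features by pushing their empirical ranks through
    the fitted inverse-CDF polynomial."""
    ranks = np.argsort(np.argsort(test_feats))
    u = (ranks + 0.5) / len(test_feats)
    return np.polyval(coef, u)

rng = np.random.default_rng(4)
clean = rng.normal(0.0, 1.0, 2000)                # reference distribution
noisy = 0.5 * rng.normal(0.0, 1.0, 1000) + 2.0    # shifted, compressed test
coef = fit_pheq(clean)
equalized = apply_pheq(coef, noisy)
```

After equalization, the shifted and compressed test features are mapped back onto the reference distribution, which is exactly the mismatch-reduction effect HEQ targets.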
15

Hsiao, Yi (蕭奕). "Robust Model Fitting - Selection of Tuning Parameters in the Aspect of Gamma Clustering." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/9escv5.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Institute of Applied Mathematical Sciences
Academic year 107
In 1995, Windham introduced the idea of a weighted distribution in his work Robustifying Model Fitting, and used it to construct a mean estimator when the original data contain outliers. This estimator involves a tuning parameter, and the choice of this parameter affects the mean estimate obtained from the same data. In the same work he also suggested a criterion for selecting the tuning parameter, but we found that this criterion performs poorly in some simulations. Considering this problem, we propose another criterion, which yields a better mean estimator. We also apply the method to clustering problems.
APA, Harvard, Vancouver, ISO, and other styles
16

Yeh, Han-Chun, and 葉漢軍. "A Study of Fitting Local Geoid Model by Robust Weighted Total Least Squares Method -A Case Study of Taichung Area." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/48876762392689670067.

Full text of the source
Abstract:
Master's thesis
National Chung Hsing University
Department of Civil Engineering
105
The objective of this study was to use global navigation satellite system (GNSS) data to achieve reasonable point height accuracy. Orthometric heights of benchmarks were obtained from first-order leveling in Taichung City, and GNSS-measured ellipsoidal heights were used for fitting. In the traditional approach, the geoid height is modeled with generalized least squares combined with curved-surface fitting. However, generalized least squares does not account for the random errors present in both the coefficient matrix and the observation vector, so weighted total least squares was applied to address this. Combining weighted total least squares with quadratic curved-surface fitting improved on the traditional method by considering the covariance matrices of both the coefficient matrix and the observation vector. Analysis of the new model's solutions raised point height accuracy to ±1.401 cm. The new method satisfies the height accuracy requirements of engineering surveys and provides valuable information for regional geoid height research.
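The quadratic curved-surface model underlying the fitting step above is a six-parameter least-squares problem. The sketch below shows only the ordinary least-squares baseline under a common parametrization N(x, y) = a0 + a1·x + a2·y + a3·x² + a4·xy + a5·y²; the thesis's contribution — weighted total least squares with covariance matrices for both the coefficient matrix and the observations — is not reproduced here, and the names are hypothetical.

```python
import numpy as np

def fit_quadratic_surface(x, y, h):
    """Ordinary least-squares fit of a quadratic surface to geoid
    heights h observed at planar coordinates (x, y)."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)
    return coeffs

def eval_quadratic_surface(coeffs, x, y):
    """Predict geoid heights at new points from the six coefficients."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    return A @ coeffs
```

Given benchmark coordinates and geoid heights (ellipsoidal minus orthometric height), `fit_quadratic_surface` returns the six coefficients and `eval_quadratic_surface` interpolates the geoid height at new points.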
APA, Harvard, Vancouver, ISO, and other styles
17

Cai, Zhipeng. "Consensus Maximization: Theoretical Analysis and New Algorithms." Thesis, 2020. http://hdl.handle.net/2440/127452.

Full text of the source
Abstract:
The core of many computer vision systems is model fitting, which estimates a particular mathematical model from a set of input data. Due to imperfections in the sensors, pre-processing steps and/or model assumptions, computer vision data usually contains outliers: abnormally distributed data points that can heavily reduce the accuracy of conventional model fitting methods. Robust fitting aims to make model fitting insensitive to outliers. Consensus maximization, one of the most popular paradigms for robust fitting, is the main research subject of this thesis. Mathematically, consensus maximization is an optimization problem. To understand the theoretical hardness of this problem, a thorough analysis of its computational complexity is first conducted. Motivated by this analysis, novel techniques that improve different types of algorithms are then introduced. On one hand, an efficient and deterministic optimization approach is proposed. Unlike previous deterministic approaches, it does not rely on a relaxation of the original optimization problem, which makes it much more effective at refining an initial solution. On the other hand, several techniques are proposed to significantly accelerate consensus maximization tree search, one of the most efficient global optimization approaches for consensus maximization; these techniques greatly improve the practicality of globally optimal algorithms. Finally, a consensus-maximization-based method is proposed to register terrestrial LiDAR point clouds. It demonstrates how the general theoretical hardness can be sidestepped by exploiting special problem structure (the rotation axis returned by the sensors), which simplifies the problem and leads to application-oriented algorithms that are both efficient and globally optimal.
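The maximum consensus objective can be made concrete with a small 2D line-fitting sketch: among candidate lines, keep the one consistent with the most points within an inlier threshold. The exhaustive pair-based search below is a naive O(n³) baseline (and not globally optimal in general, since the best line need not pass exactly through two data points); the thesis develops far more efficient deterministic and tree-search algorithms, none of which are reproduced here.

```python
import numpy as np

def max_consensus_line(points, eps=0.1):
    """Naive maximum-consensus line fit in 2D: every pair of points
    proposes a line, and the line agreeing with the most points within
    distance eps (the consensus) is kept."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    best_count, best_line = 0, None
    for i in range(n):
        for j in range(i + 1, n):
            p, q = pts[i], pts[j]
            d = q - p
            norm = np.hypot(d[0], d[1])
            if norm == 0:
                continue  # coincident points define no line
            # perpendicular distance of all points to the line through p, q
            dist = np.abs((pts[:, 0] - p[0]) * d[1]
                          - (pts[:, 1] - p[1]) * d[0]) / norm
            count = int(np.sum(dist <= eps))
            if count > best_count:
                best_count, best_line = count, (p.copy(), q.copy())
    return best_count, best_line
```

With a set of collinear inliers and scattered outliers, the returned consensus counts at least all the inliers, regardless of how many outliers are present — the insensitivity that defines robust fitting.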
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2020
APA, Harvard, Vancouver, ISO, and other styles
18

Hanek, Robert [author]. "Fitting parametric curve models to images using local self-adapting separation criteria / Robert Hanek." 2004. http://d-nb.info/974166375/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
