Dissertations / Theses on the topic 'Least squares'
Jones, Caroline Erin. "Least squares Gaussian quadrature." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0017/MQ54628.pdf.
Hassel, Per Anker. "Nonlinear partial least squares." Thesis, University of Newcastle Upon Tyne, 2003. http://hdl.handle.net/10443/465.
Ganssle, Graham. "Stabilized Least Squares Migration." ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/2074.
Young, William Ronald. "Total least squares and constrained least squares applied to frequency domain system identification." Ohio : Ohio University, 1993. http://www.ohiolink.edu/etd/view.cgi?ohiou1176315127.
Guo, Hengdao. "Frequency Tracking and Phasor Estimation Using Least Squares and Total Least Squares Algorithms." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/57.
Santiago, Claudio Prata. "On the nonnegative least squares." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31768.
Committee Chair: Earl Barnes; Committee Member: Arkadi Nemirovski; Committee Member: Faiz Al-Khayyal; Committee Member: Guillermo H. Goldsztein; Committee Member: Joel Sokol. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Müller, Werner. "On Least Squares Variogram Fitting." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/370/1/document.pdf.
Yao, Gang. "Least-squares reverse-time migration." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/14575.
Kim, Donggeon. "Least squares mixture decomposition estimation." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-02132009-171622/.
Chu, Ka Lok 1975. "Inequalities and equalities associated with ordinary least squares and generalized least squares in partitioned linear models." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=85140.
Chapter I builds on the observation that in Canner's model the ordinary least squares and generalized least squares regression lines are parallel. This observation led us to introduce a new measure of efficiency of ordinary least squares and to find conditions under which the total Watson efficiency of ordinary least squares in a partitioned linear model exceeds or falls below the product of the two subset Watson efficiencies, i.e., the product of the Watson efficiencies associated with the two subsets of parameters in the underlying partitioned linear model.
We introduce the notions of generalized efficiency function, efficiency factorization multiplier, and determinantal covariance ratio, and obtain several inequalities and equalities. We give special attention to those partitioned linear models for which the total Watson efficiency of ordinary least squares equals the product of the two subset Watson efficiencies. A key characterization involves the equality between the squares of a certain partial correlation coefficient and its associated ordinary correlation coefficient.
In Chapters II and IV we suppose that the underlying partitioned linear model is weakly singular in that the column space of the model matrix is contained in the column space of the covariance matrix of the errors in the linear model. In Chapter III our results are specialized to partitioned linear models where the partitioning is orthogonal and the covariance matrix of the errors is positive definite.
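The OLS/GLS comparison and the Watson efficiency discussed in the abstract above can be illustrated numerically. The following is a hedged sketch, not code from the thesis: the design matrix, the diagonal error covariance, and the determinant-ratio form of the Watson efficiency are illustrative assumptions.

```python
# Illustrative sketch, not from the thesis: OLS vs. GLS in a linear model
# with a known positive definite error covariance V, and the Watson
# efficiency of OLS, taken here in its determinant-ratio form.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # model matrix
V = np.diag(np.linspace(0.5, 2.0, n))                  # error covariance
beta = np.array([1.0, 2.0])
y = X @ beta + rng.multivariate_normal(np.zeros(n), V)

# OLS: (X'X)^{-1} X'y ;  GLS: (X'V^{-1}X)^{-1} X'V^{-1}y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
Vi = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# Covariances of the two estimators, and the Watson efficiency of OLS:
# det(cov(GLS)) / det(cov(OLS)), which lies in (0, 1] since GLS is BLUE.
XtX_inv = np.linalg.inv(X.T @ X)
cov_ols = XtX_inv @ X.T @ V @ X @ XtX_inv
cov_gls = np.linalg.inv(X.T @ Vi @ X)
eff = np.linalg.det(cov_gls) / np.linalg.det(cov_ols)
```

When the two regression lines coincide (e.g. V proportional to the identity), the efficiency equals one; heteroscedastic V pushes it below one.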
Kubitz, Jörg. "Gemischte Least-Squares-FEM für Elastoplastizität." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=983832625.
Kolev, Tzanio Valentinov. "Least-squares methods for computational electromagnetics." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1115.
Baykal, Buyurman. "Underdetermined recursive least-squares adaptive filtering." Thesis, Imperial College London, 1995. http://hdl.handle.net/10044/1/7790.
Fraley, Christina. "Solution of nonlinear least-squares problems /." Stanford, CA : Dept. of Computer Science, Stanford University, 1987. http://doi.library.cmu.edu/10.1184/OCLC/19613955.
"June 1987." This research was supported in part by Joseph Oliger under Office of Naval Research contract N00014-82-K-0335, by Stanford Linear Accelerator Center and the Systems Optimization Laboratory under Army Research Office contract DAAG29-84-K-0156. Includes bibliographies.
Silva, Aristeguieta Maria. "Optimization of seismic least-squares inversion /." Access abstract and link to full text, 1993. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9325432.
Han, Qing 1980. "Solving constrained integer least squares problems." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98720.
Solving a constrained ILS problem usually has two stages: reduction (or preprocessing) and search. We first present a reduction algorithm and a search algorithm for solving the BILS problem. Unlike the usual reduction algorithms, which use only the information of the generator matrix, the new reduction algorithm also uses the information of the given input vector and the box constraint. The new search algorithm overcomes some shortcomings of the existing search algorithms and makes some other improvements. Then, for solving the EILS problem, we dynamically transform it to a BILS problem and extend the above new search algorithm. In addition, we suggest using the well-known LLL reduction for preprocessing. For both problems, simulation results indicate that the combination of our reduction algorithms and search algorithms can be (much) more efficient than the existing algorithms.
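A toy instance shows what the search stage of a box-constrained ILS problem has to do. This sketch is not the thesis's reduction or search algorithm; the generator matrix, box, and data are made up for illustration, and the "exact" solver is a brute-force enumeration that real search algorithms exist to avoid.

```python
# Toy box-constrained integer least squares instance (illustrative only):
# minimize ||y - A z||_2 over integer z with lo <= z_i <= hi.
import numpy as np
from itertools import product

A = np.array([[2.0, 0.3],
              [0.3, 1.0]])                   # generator matrix (assumed)
z_true = np.array([3, -1])
y = A @ z_true + np.array([0.1, -0.05])      # noisy observation
lo, hi = -4, 4

# Naive approach: round the real-valued LS solution, then clamp to the box.
z_real = np.linalg.solve(A.T @ A, A.T @ y)
z_round = np.clip(np.round(z_real), lo, hi).astype(int)

# Exact approach: enumerate the whole box. Feasible only for tiny problems;
# reduction + search algorithms like those in the thesis avoid this
# exhaustive enumeration while returning the same minimizer.
z_best = min(product(range(lo, hi + 1), repeat=2),
             key=lambda z: np.linalg.norm(y - A @ np.array(z)))
```

On well-conditioned instances like this one rounding already finds the minimizer; the hard cases that motivate reduction are ill-conditioned generator matrices, where rounding fails.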
Hawes, Anthony H. "Least squares and adaptive multirate filtering." Thesis, Monterey, California. Naval Postgraduate School, 2012.
This thesis addresses the problem of estimating a random process from two observed signals sampled at different rates. The case where the low-rate observation has a higher signal-to-noise ratio than the high-rate observation is addressed. Both adaptive and non-adaptive filtering techniques are explored. For the non-adaptive case, a multirate version of the Wiener-Hopf optimal filter is used for estimation. Three forms of the filter are described. It is shown that using both observations with this filter achieves a lower mean-squared error than using either sequence alone. Furthermore, the amount of training data needed to solve for the filter weights is comparable to that needed when using either sequence alone. For the adaptive case, a multirate version of the LMS adaptive algorithm is developed. Both narrowband and broadband interference are removed using the algorithm in an adaptive noise cancellation scheme. The ability to remove interference at the high rate using observations taken at the low rate, without the high-rate observations, is demonstrated.
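The single-rate LMS recursion underlying the multirate extension described above can be sketched in a few lines. This is a hedged illustration, not the thesis's multirate algorithm: the unknown path [0.5, -0.3, 0.2], the step size, and the filter length are arbitrary choices.

```python
# Hedged sketch: a standard single-rate LMS adaptive filter, the building
# block behind the multirate LMS developed in the thesis.
import numpy as np

rng = np.random.default_rng(1)
N, M, mu = 5000, 8, 0.01
x = rng.normal(size=N)                                  # reference signal
d = np.convolve(x, [0.5, -0.3, 0.2], mode="full")[:N]   # desired signal

w = np.zeros(M)
for n in range(M - 1, N):
    u = x[n - M + 1:n + 1][::-1]   # most recent M samples, newest first
    e = d[n] - w @ u               # a-priori estimation error
    w += mu * e * u                # LMS weight update
# After adaptation, w approximates the unknown path [0.5, -0.3, 0.2, 0, ...].
```

In the noise-cancellation configuration the same recursion is run with the interference reference as input, and the error signal e becomes the cleaned output.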
RENTERIA, RAUL PIERRE. "ALGORITHMS FOR PARTIAL LEAST SQUARES REGRESSION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4362@1.
The purpose of many problems in the machine learning field is to model the complex relationship in a system between the input X and output Y variables when no theoretical model is available. Partial Least Squares (PLS) is a linear method for this kind of problem, suited to the case of many input variables compared to the number of samples. In this thesis we present versions of the classical PLS algorithm designed for large data sets while keeping good predictive power. Among the main results we highlight PPLS (Parallel PLS), an exact parallel version for the case of only one output variable, and DPLS (Direct PLS), a fast and approximate version for the case of more than one output variable. We also present variants of the regression algorithm that can enhance predictive quality through a non-linear formulation: LPLS (Lifted PLS), for the case of only one dependent variable, based on the theory of kernel functions; KDPLS, a non-linear formulation of DPLS; and MKPLS, a multi-kernel algorithm that can yield a more compact model and better prediction quality, thanks to the use of several kernels for model building.
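The classical algorithm that PPLS and DPLS build on can be sketched as PLS1 (one response) via NIPALS. This is a hedged illustration, not the thesis's parallel or approximate variants; the synthetic data and component count are assumptions.

```python
# Minimal PLS1 (single response) via NIPALS, the classical algorithm the
# thesis's PPLS/DPLS variants parallelize and approximate. Illustrative only.
import numpy as np

def pls1(X, y, n_components):
    X, y = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight: direction of max covariance
        t = X @ w                       # score vector
        p = X.T @ t / (t @ t)           # X loading
        c = y @ t / (t @ t)             # y loading
        X -= np.outer(t, p)             # deflate X
        y -= c * t                      # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q   # regression coefficients

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
beta = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = X @ beta                    # noiseless response for the demonstration
b = pls1(X, y, 5)               # with all components, PLS1 recovers OLS
```

With fewer components than the rank of X, PLS1 returns a shrunken, lower-variance estimate, which is the usual operating regime when predictors outnumber samples.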
Hawes, Anthony H. "Least squares and adaptive multirate filtering /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FHawes.pdf.
Thesis advisor(s): Charles W. Therrien, Roberto Cristi. Includes bibliographical references (p. 45). Also available online.
Hazra, Rajeeb. "Constrained least-squares digital image restoration." W&M ScholarWorks, 1995. https://scholarworks.wm.edu/etd/1539623865.
Guo, Ronggang. "Systematical analysis of the transformation procedures in Baden-Württemberg with Least Squares and Total Least Squares methods." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2007. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-33293.
Furrer, Marc. "Numerical Accuracy of Least Squares Monte Carlo." St. Gallen, 2008. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/01650217002/$FILE/01650217002.pdf.
Petra, Stefania. "Semismooth least squares methods for complementarity problems." Doctoral thesis, [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=98174558X.
Pei, Sun. "Noise Resistant Least Squares Based Adaptive Control." Thesis, KTH, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-92628.
Kong, Seunghyun. "Linear programming algorithms using least-squares method." Diss., Available online, Georgia Institute of Technology, 2007, 2007. http://etd.gatech.edu/theses/available/etd-04012007-010244/.
Martin Savelsbergh, Committee Member; Joel Sokol, Committee Member; Earl Barnes, Committee Co-Chair; Ellis L. Johnson, Committee Chair; Prasad Tetali, Committee Member.
Titley-Péloquin, David. "Backward perturbation analysis of least squares problems." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=94973.
We perform a backward error analysis of least squares problems. We analyze two measures commonly used for the backward error and show that they are in fact equivalent. We present new estimates of the backward error of least squares problems and compare them with the known estimates. One use of this type of analysis is to establish stopping criteria for iterative methods. We explain unexpected convergence behaviour that we observed in minimal-residual iterative methods, and then show that the stopping criteria usually used with these methods can be too conservative in some circumstances. We therefore propose new, more reliable stopping criteria and present an efficient implementation of them in the LSQR algorithm. The least squares method is often used in statistics when the data come from a linear model with noise that is normally distributed with mean zero and covariance a scaled identity matrix. We describe the convergence of the error that results from this type of data and propose stopping criteria suited to this situation. Finally, we apply part of this analysis to scaled least squares problems.
Breen, Stephen. "Integer least squares search and reduction strategies." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106561.
This thesis is concerned with integer least squares (ILS) problems, also known as closest vector problems. A common approach to solving them is the discrete search method, which involves two stages: reduction and search. The main purpose of the reduction is to make the search stage faster. Reduction strategies for box-constrained ILS problems involve reordering the columns of the data matrix. There are currently two column-reordering algorithms, referred to here as the SW and CH algorithms, that are the most effective for the search phase. Although both use all the information available in the problem, the SW and CH algorithms look different and were derived from geometric and algebraic points of view, respectively. In this thesis, we modify the SW algorithm to make its computation more efficient and easier to understand. We then show that, in theory, the SW and CH algorithms in fact give the same column reordering. Finally, we propose a new, mathematically equivalent algorithm that is more efficient and still easy to understand. This thesis also extends the idea of column permutation to ordinary integer least squares problems. A new reduction algorithm is proposed that combines the well-known Lenstra-Lenstra-Lovász (LLL) algorithm with the new column-reordering strategy; the new reduction can be more efficient than the LLL reduction in some cases. The thesis also examines some commonly used search algorithms, and a new one is proposed that builds on two earlier algorithms: depth-first search and best-first search.
Our hybrid algorithm retains the advantages of both originals while being more efficient and easier to use than other existing hybrid algorithms.
Saadi, Kamel. "Efficient regularisation of least-squares kernel machines." Thesis, University of East Anglia, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522281.
Rogers, C. A. "Partial least squares (PLS) : a comparative assessment." Thesis, University of Bath, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235583.
Cho, Youngjae. "Least squares estimation of acoustic reflection coefficient." Thesis, University of Southampton, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420208.
TORTURELA, ALEXANDRE DE MACEDO. "NOVEL SPARSE SYSTEMS LEAST SQUARES ESTIMATION METHODS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26712@1.
INSTITUTO MILITAR DE ENGENHARIA
CENTRO TECNOLÓGICO DO EXÉRCITO
INSTITUTO DE PESQUISA E DESENVOLVIMENTO
In this thesis, four methods specifically designed for sparse system estimation are originally developed and presented, called here the Relaxations, Successive Expansions, l1-Norm Minimization, and Automatic Adjustment of the Regularization Factor methods. The four proposed methods are based on least squares (LS) estimation of linear time-invariant systems and incorporate techniques related to convex optimization and to the theory of compressive sensing. Simulation results show that the proposed methods outperform the ordinary LS estimation method and the Recursive Least Squares (RLS) algorithm with convex regularization (l1-RLS), in many cases achieving the optimal performance of the LS Oracle method, in which the support of the estimated system's discrete-time impulse response is known a priori. Furthermore, the proposed methods demand a lower computational cost than the l1-RLS method.
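None of the four proposed methods is reproduced here. As a hedged stand-in with the same convex-optimization flavour, the sketch below solves an l1-regularized least squares problem by iterative soft-thresholding (ISTA); the dimensions, sparsity pattern, and regularization weight are arbitrary illustrative choices.

```python
# Hedged sketch: sparse estimation via l1-regularized least squares solved
# with ISTA (iterative soft-thresholding). Illustrative, not the thesis's
# algorithms.
import numpy as np

def ista(A, y, lam, n_iter=3000):
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                     # gradient of 0.5||Ax - y||^2
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 100))                    # fewer samples than unknowns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]            # sparse impulse response
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
```

An "oracle" estimator in this setting would solve an ordinary LS problem restricted to the columns {5, 40, 77}, which is the known-support benchmark the abstract refers to.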
Bian, Xiaomeng. "Completely Recursive Least Squares and Its Applications." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1518.
Schwab, Devin. "Hierarchical Sampling for Least-Squares Policy Iteration." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1441374844.
Chatkupt, Chlump. "Least-squares regret and partially strategic players." Thesis, London School of Economics and Political Science (University of London), 2015. http://etheses.lse.ac.uk/3220/.
Rosopa, Patrick. "A COMPARISON OF ORDINARY LEAST SQUARES, WEIGHTED LEAST SQUARES, AND OTHER PROCEDURES WHEN TESTING FOR THE EQUALITY OF REGRESSION." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2311.
Ph.D., Department of Psychology.
Holmes, Marion R. "Least squares approximation by G¹ piecewise parametric cubics /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA277978.
Huang, Xuejun, and Xuewen Huang. "The Least-Squares Method for American Option Pricing." Thesis, Uppsala University, Department of Mathematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-119754.
Munoz, Maldonado Yolanda. "Mixed models, posterior means and penalized least squares." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2637.
Botting, Brad. "Structured Total Least Squares for Approximate Polynomial Operations." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1035.
- Division
- Greatest Common Divisor (GCD)
- Bivariate Factorization
- Decomposition
Rossi, Michel. "Iterative least squares algorithms for digital filter design." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/10099.
Skoglund, Ingegerd. "Algorithms for a Partially Regularized Least Squares Problem." Licentiate thesis, Linköping : Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8784.
Abu, Safia Ahmed. "Phylogenetic inference by generalized least squares : computational aspects." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97881.
Lannsjö, Fredrik. "Forecasting the Business Cycle using Partial Least Squares." Thesis, KTH, Matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-151378.
Partial Least Squares is both a regression method and a tool for variable selection that is especially suitable for models based on a large number of (possibly correlated) variables. While it is a well-established modelling method in chemometrics, this thesis adapts PLS to financial data in order to predict the movements of the business cycle, represented by the OECD's Composite Leading Indicator. High-dimensional data are used, and a model with automated variable selection via a genetic algorithm is developed to forecast different economic regions, with good results in out-of-sample tests.
Holmes, Marion R. "Least squares approximation by G1 piecewise parametric cubics." Thesis, Monterey, California. Naval Postgraduate School, 1993. http://hdl.handle.net/10945/39690.
Parametric piecewise cubic polynomials are used throughout the computer graphics industry to represent geometric curved shapes. The exploration of the use of parametric curves and surfaces can be viewed as the birth of Computer Aided Geometric Design (CAGD).
Gomez, Steven A. "Parallel multigrid for large-scale least squares sensitivity." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82481.
This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from department-submitted PDF version of thesis.
Includes bibliographical references (p. 85-86).
This thesis presents two approaches for efficiently computing the "climate" (long-time average) sensitivities of dynamical systems. Computing these sensitivities is essential to performing engineering analysis and design. The first technique is a novel approach to solving the "climate" sensitivity problem for periodic systems. A small change to the traditional adjoint sensitivity equations results in a method which can accurately compute both instantaneous and long-time averaged sensitivities. The second approach deals with the recently developed Least Squares Sensitivity (LSS) method. A multigrid algorithm is developed that can, in parallel, solve the discrete LSS system. This generic algorithm can be applied to ordinary differential equations such as the Lorenz system. Additionally, this parallel method enables the estimation of climate sensitivities for a homogeneous isotropic turbulence model, the largest-scale LSS computation performed to date.
by Steven A. Gomez.
S.M.
Moller, Jurgen Johann. "The implementation of noise addition partial least squares." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/3362.
When determining the chemical composition of a specimen, traditional laboratory techniques are often both expensive and time consuming. It is therefore preferable to employ more cost-effective spectroscopic techniques such as near infrared (NIR). Traditionally, the calibration problem has been solved by means of multiple linear regression to specify the model between X and Y. Traditional regression techniques, however, quickly fail when using spectroscopic data, as the number of wavelengths can easily be several hundred, often exceeding the number of chemical samples. This scenario, together with the high level of collinearity between wavelengths, will necessarily lead to singularity problems when calculating the regression coefficients. Ways of dealing with the collinearity problem include principal component regression (PCR), ridge regression (RR) and PLS regression. Both PCR and RR require a significant amount of computation when the number of variables is large. PLS overcomes the collinearity problem in a similar way to PCR, by modelling both the chemical and spectral data as functions of common latent variables. The quality of the employed reference method greatly impacts the coefficients of the regression model and therefore the quality of its predictions. With both X and Y subject to random error, the quality of the predictions of Y will be reduced as the level of noise increases. Previously conducted research focussed mainly on the effects of noise in X. This thesis focuses on a method proposed by Dardenne and Fernández Pierna, called Noise Addition Partial Least Squares (NAPLS), that attempts to deal with the problem of poor reference values. Some aspects of the theory behind PCR, PLS and model selection are discussed. This is followed by a discussion of the NAPLS algorithm. Both PLS and NAPLS are implemented on various datasets that arise in practice, in order to determine cases where NAPLS will be beneficial over conventional PLS.
For each dataset, specific attention is given to the analysis of outliers, influential values and the linearity between X and Y, using graphical techniques. Lastly, the performance of the NAPLS algorithm is evaluated for various
Kumar, Rajendra. "FAST FREQUENCY ACQUISITION VIA ADAPTIVE LEAST SQUARES ALGORITHM." International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615276.
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general adaptive parameter estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the Fast Fourier Transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure as to the accuracy of the estimator.
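The recursive-in-observations structure that the abstract emphasizes is that of a generic recursive least squares (RLS) update. The sketch below is a hedged illustration of that generic recursion, not the paper's estimator, which is specialized to sinusoidal frequency/phase models; the forgetting factor, initialization, and regressor data are assumptions.

```python
# Hedged sketch of a generic recursive least squares (RLS) recursion:
# each new observation refines the estimate without storing past data.
import numpy as np

def rls(phi, y, lam=1.0, delta=100.0):
    p = phi.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)                     # initial inverse correlation
    for u, d in zip(phi, y):
        k = P @ u / (lam + u @ P @ u)         # gain vector
        theta = theta + k * (d - u @ theta)   # update with the innovation
        P = (P - np.outer(k, u @ P)) / lam    # update inverse correlation
    return theta

rng = np.random.default_rng(4)
phi = rng.normal(size=(500, 3))               # regressors, one row per sample
theta_true = np.array([0.7, -1.2, 2.0])
y = phi @ theta_true + 0.01 * rng.normal(size=500)
theta = rls(phi, y)                           # converges to the LS estimate
```

With lam = 1 this reproduces the batch LS solution recursively; lam < 1 discounts old data, which is what permits tracking a time-varying frequency.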
Woodard, Joseph Walker. "The Linear Least Squares Problem of Bundle Adjustment." UNF Digital Commons, 1990. http://digitalcommons.unf.edu/etd/227.
Full textOyedele, Opeoluwa Funmilayo. "The construction of a partial least squares biplot." Doctoral thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/12948.
In multivariate analysis, data matrices are often very large, which sometimes makes it difficult to describe their structure and to make a visual inspection of the relationship between their respective rows (samples) and columns (variables). For this reason, biplots, the joint graphical display of the rows and columns of a data matrix, can be useful tools for analysis. Since they were first introduced, biplots have been employed in a number of multivariate methods, such as Correspondence Analysis (CA), Principal Component Analysis (PCA), Canonical Variate Analysis (CVA) and Discriminant Analysis (DA), as a form of graphical display of data. Another possible employment is in Partial Least Squares (PLS). First introduced as a regression method, PLS is more flexible than multivariate regression, but better suited than Principal Component Regression (PCR) for the prediction of a set of response variables from a large set of predictor variables. Employing the biplot in PLS gave rise to the PLS biplot, a new addition to the biplot family. In the current study, this biplot was successfully applied to sensory data to investigate the relationships between the sensory panel characteristics and the chemical quality measurements of sixteen olive oils. It was also applied to a large set of mineral sorting production data to investigate the relationships between the output variables and the process factors used to produce a final product. Furthermore, the PLS biplot was applied to Binomial-distributed data concerning the diabetes testing of Indian women and to Poisson-distributed data showing the diversity of arboreal marsupials (possums) in the Montane ash forest. After these applications, it is proposed that the PLS biplot is a useful graphical tool for displaying results from the (univariate) Partial Least Squares-Generalized Linear Model (PLS-GLM) analysis of a data set.
With Partial Least Squares Regression (PLSR) being a valuable method for modelling high-dimensional data, especially in chemometrics, the PLS biplot was also successfully applied to a cereal evaluation containing one hundred and forty-five infrared spectra and six chemical properties, and to gene expression data with two thousand genes.
Wang, Zhen. "Semi-parametric Bayesian Models Extending Weighted Least Squares." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1236786934.