Dissertations / Theses on the topic 'Least squares algorithm'

To see the other types of publications on this topic, follow the link: Least squares algorithm.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Least squares algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Guo, Hengdao. "Frequency Tracking and Phasor Estimation Using Least Squares and Total Least Squares Algorithms." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/57.

Full text
Abstract:
System stability plays an important role in electric power systems. As electric power systems develop, the scale of the grid grows and many renewable energy resources are integrated into it; at the same time, the stability and safety issues of the power system become more complicated. Frequency and phasors are two critical parameters for system stability, and obtaining them accurately has been a challenge for decades. Researchers have proposed various algorithms for frequency tracking and phasor estimation, among which the Least Squares (LS) algorithm is one of the most commonly used. This thesis studies the LS algorithm and the Total Least Squares (TLS) algorithm for frequency tracking and phasor estimation. To test the performance of the two algorithms, simulations were carried out in Matlab. The Total Vector Error (TVE), a commonly used performance criterion, is compared for the two algorithms. The TLS algorithm performs better than the LS algorithm when the frequencies of all harmonic components are given.
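For orientation, here is a minimal NumPy sketch of the two estimators compared above: ordinary LS and TLS (via the SVD of the augmented data matrix) applied to a single-frequency phasor and scored with the TVE. The sampling rate, noise level and variable names are illustrative assumptions, not the setup used in the thesis.

```python
import numpy as np

# Illustrative signal parameters (not taken from the thesis)
fs, f0, N = 1000.0, 50.0, 200
t = np.arange(N) / fs
amp_true, phase_true = 1.0, 0.3
x = amp_true * np.cos(2 * np.pi * f0 * t + phase_true)
x = x + 0.02 * np.random.default_rng(0).standard_normal(N)

# Design matrix: x[n] ~ a*cos(w t) - b*sin(w t), so the phasor is a + jb
H = np.column_stack([np.cos(2 * np.pi * f0 * t), -np.sin(2 * np.pi * f0 * t)])

# Ordinary least squares (errors assumed only in the observations)
a_ls, b_ls = np.linalg.lstsq(H, x, rcond=None)[0]

# Total least squares (errors in both H and x): smallest right singular vector
v = np.linalg.svd(np.column_stack([H, x]))[2][-1]
a_tls, b_tls = -v[:2] / v[2]

def tve(a, b):
    """Total Vector Error relative to the known reference phasor."""
    ref = amp_true * np.exp(1j * phase_true)
    return abs((a + 1j * b) - ref) / abs(ref)

print(f"TVE LS:  {tve(a_ls, b_ls):.4f}")
print(f"TVE TLS: {tve(a_tls, b_tls):.4f}")
```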
APA, Harvard, Vancouver, ISO, and other styles
2

Kumar, Rajendra. "FAST FREQUENCY ACQUISITION VIA ADAPTIVE LEAST SQUARES ALGORITHM." International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615276.

Full text
Abstract:
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general adaptive parameter estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the Fast Fourier Transform (FFT). The proposed algorithm improves the frequency estimate recursively as more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure for the accuracy of the estimator.
APA, Harvard, Vancouver, ISO, and other styles
3

Tsakiris, Manolis. "On the regularization of the recursive least squares algorithm." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-21102010-101424/.

Full text
Abstract:
This thesis is concerned with the regularization of the Recursive Least-Squares (RLS) algorithm. In the first part of the thesis, a novel regularized exponentially weighted array RLS algorithm is developed, which circumvents the problem of fading regularization that is inherent to the standard regularized exponentially weighted RLS formulation, while allowing the use of generic time-varying regularization matrices. The standard equations are directly perturbed via a chosen regularization matrix; the resulting recursions are then extended to the array form. The price paid is an increase in computational complexity, which becomes cubic. The superiority of the algorithm over alternative algorithms is demonstrated via simulations in the context of adaptive beamforming, in which low filter orders are employed, so that complexity is not an issue. In the second part of the thesis, an alternative criterion is motivated and proposed for the dynamic regulation of regularization in the standard RLS algorithm. The regularization is achieved implicitly via dithering of the input signal. The proposed criterion is of general applicability and aims at achieving a balance between the accuracy of the numerical solution of a perturbed linear system of equations and its distance from the analytical solution of the original system, for a given computational precision. Simulations show that the proposed criterion can be effectively used to compensate for large condition numbers, small finite precision and unnecessarily large regularization values.
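The mechanism described in the second part, regularizing RLS implicitly by dithering the input, can be sketched generically. Below is a standard exponentially weighted RLS with a small white-noise dither added to the regressor; the filter order, forgetting factor and dither level are illustrative assumptions and do not reproduce the thesis' dynamic criterion.

```python
import numpy as np

def rls_with_dither(u, d, order=8, lam=0.995, dither_std=1e-3, delta=1e2):
    """Exponentially weighted RLS; a small white dither on the input acts,
    on average, like a regularization of the input autocorrelation matrix.
    All parameter values here are illustrative."""
    rng = np.random.default_rng(0)
    u = u + dither_std * rng.standard_normal(len(u))   # dithered input signal
    w = np.zeros(order)
    P = delta * np.eye(order)                          # inverse correlation estimate
    for n in range(order, len(u)):
        x = u[n - order:n][::-1]                       # regressor, newest sample first
        k = P @ x / (lam + x @ P @ x)                  # gain vector
        e = d[n] - w @ x                               # a priori error
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam             # Riccati-type update
    return w
```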
APA, Harvard, Vancouver, ISO, and other styles
4

Degtyarena, Anna Semenovna. "The window least mean square error algorithm." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2385.

Full text
Abstract:
In order to improve the performance of the LMS (least mean square) algorithm by decreasing the amount of computation, this research proposes to perform an update at each step only for those elements of the input data set that fall within a small window W near the separating hyperplane. This work aims to describe in detail the results that can be achieved by using the proposed LMS-with-window learning algorithm in information systems that employ neural network methodology for classification purposes.
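A minimal sketch of the idea, assuming a linear classifier trained with an LMS-style rule: the update is applied only when the output for a sample falls within a window W of the separating hyperplane. The window size, step size and epoch count are illustrative.

```python
import numpy as np

def window_lms(X, y, window=0.5, mu=0.01, epochs=20):
    """LMS-style training where a weight update is applied only when the sample
    lies inside a window of half-width `window` around the hyperplane.
    Parameter values are illustrative assumptions."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, y):
            out = w @ x
            if abs(out) <= window:        # samples far from the boundary are skipped
                w += mu * (target - out) * x
    return w
```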
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar, Rajendra. "Differential Sampling for Fast Frequency Acquisition Via Adaptive Extended Least Squares Algorithm." International Foundation for Telemetering, 1987. http://hdl.handle.net/10150/615321.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California
This paper presents a differential signal model along with appropriate sampling techniques for least squares estimation of the frequency and frequency derivatives and possibly the phase and amplitude of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements and thus the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage when compared to previous algorithms wherein the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality. However, by applying the sampling techniques of the paper such a possible loss in optimality can be made small.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Dongmei. "Least mean square algorithm implementation using the texas instrument digital signal processing board." Ohio University / OhioLINK, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1175279376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Thanawalla, Rutang Kirit. "Valuation of gas swing options using an extended least squares Monte Carlo algorithm." Thesis, Heriot-Watt University, 2006. http://hdl.handle.net/10399/144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wood, John D. "MIMO recursive least squares control algorithm for the AN/FPN-44A Loran-C transmitter." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA274820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Manmek, Thip, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Real-time power system disturbance identification and its mitigation using an enhanced least squares algorithm." Awarded by: University of New South Wales. Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/26233.

Full text
Abstract:
This thesis proposes, analyses and implements a fast and accurate real-time power system disturbance identification method based on an enhanced linear least squares algorithm for mitigation and monitoring of various power quality problems such as current harmonics, grid unbalances and voltage dips. The enhanced algorithm imposes less real-time computational burden on the processing system and is thus called an 'efficient least squares algorithm'. The proposed efficient least squares algorithm does not require a matrix inversion operation and contains only real numbers. The number of required real-time matrix multiplications is also reduced, because some of the matrix multiplications are pre-performed to form a constant matrix. The proposed efficient least squares algorithm extracts the instantaneous sine and cosine terms of the fundamental and harmonic components by simply multiplying a set of sampled input data by the pre-calculated constant matrix. A power signal processing system based on the proposed efficient least squares algorithm is presented in this thesis. This power signal processing system derives various power system quantities that are used for real-time monitoring and disturbance mitigation, including constituent components, symmetrical components and various power measurements. The properties of the proposed power signal processing system were studied using modelling and practical implementation in a digital signal processor. These studies demonstrated that the proposed method is capable of extracting time-varying power system quantities quickly and accurately. The dynamic response time of the proposed method was less than half a fundamental cycle. Moreover, the proposed method showed less sensitivity to noise pollution and to small variations in the fundamental frequency. The performance of the proposed power signal processing system was compared to that of the popular DFT/FFT methods using computer simulations. The simulation results confirmed the superior performance of the proposed method under both transient and steady-state conditions. In order to investigate the practicability of the method, the proposed power signal processing system was applied to two real-life disturbance mitigation applications, namely an active power filter (APF) and a distribution static synchronous compensator (D-STATCOM). The validity and performance of the proposed signal processing system in both disturbance mitigation applications were investigated by simulation and experimental studies. The extensive modelling and experimental studies confirmed that the proposed signal processing system can be used for practical real-time applications which require fast disturbance identification, such as mitigation control and power quality monitoring of power systems.
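The key computational idea, precomputing the least-squares extraction matrix so that the real-time step is a single matrix-vector product, can be sketched as follows; the sampling rate, window length and harmonic orders are illustrative assumptions, not the thesis' parameters.

```python
import numpy as np

# Illustrative parameters, not taken from the thesis
fs, f0, N = 5000.0, 50.0, 100          # sampling rate, fundamental, window length
harmonics = (1, 3, 5, 7)
t = np.arange(N) / fs

# Design matrix of sine/cosine terms for the chosen harmonics (built once, offline)
A = np.column_stack(
    [f(2 * np.pi * h * f0 * t) for h in harmonics for f in (np.sin, np.cos)]
)
EXTRACT = np.linalg.pinv(A)            # pre-calculated "constant matrix"

def extract_components(samples):
    """Real-time step: one matrix-vector product, no matrix inversion."""
    return EXTRACT @ samples           # sine/cosine amplitudes per harmonic
```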
APA, Harvard, Vancouver, ISO, and other styles
10

Shapero, Samuel Andre. "Configurable analog hardware for neuromorphic Bayesian inference and least-squares solutions." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/51719.

Full text
Abstract:
Sparse approximation is a Bayesian inference program with a wide range of signal processing applications, such as the Compressed Sensing recovery used in medical imaging. Previous sparse coding implementations relied on digital algorithms whose power consumption and performance scale poorly with problem size, rendering them unsuitable for portable applications and a bottleneck in high-speed applications. A novel analog architecture implementing the Locally Competitive Algorithm (LCA) was designed and programmed onto a Field Programmable Analog Array (FPAA), using floating-gate transistors to set the analog parameters. A network of 6 coefficients was demonstrated to converge to similar values as a digital sparse approximation algorithm, but with better power and performance scaling. A rate-encoded spiking algorithm was then developed, which was shown to converge to similar values as the LCA. A second novel architecture was designed and programmed on an FPAA, implementing the spiking version of the LCA with integrate-and-fire neurons. A network of 18 neurons converged on similar values as a digital sparse approximation algorithm, with even better performance and power efficiency than the non-spiking network. Novel algorithms were created to increase floating-gate programming speed by more than two orders of magnitude and to reduce programming error from device mismatch. A new FPAA chip was designed and tested which allowed for rapid interfacing and additional improvements in accuracy. Finally, a neuromorphic chip was designed, containing 400 integrate-and-fire neurons and capable of converging on a sparse approximation solution in 10 microseconds, over 1000 times faster than the best digital solution.
APA, Harvard, Vancouver, ISO, and other styles
11

Wanchana, Suchada. "Quantitative structure/property relationship modeling of pharmacokinetic properties using genetic algorithm-combined partial least squares method." 京都大学 (Kyoto University), 2003. http://hdl.handle.net/2433/148610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Green, Christopher Lee. "IP Algorithm Applied to Proteomics Data." Diss., 2004. http://contentdm.lib.byu.edu/ETD/image/etd618.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Lee, Teahyung. "Algorithm-Based Efficient Approaches for Motion Estimation Systems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19783.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Anderson, David; Committee Member: Gray, Alexander; Committee Member: Ma, Xiaoli; Committee Member: Mersereau , Russell; Committee Member: Wills, Donald.
APA, Harvard, Vancouver, ISO, and other styles
14

Peng, Song, Chen Xiao-yu, and Qi Jian-zhong. "Analysis of a Combined GLONASS/Compass-I Navigation Algorithm." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595792.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
Compass-I is a satellite navigation system built by China. It is a regional positioning system based on the double-star positioning principle. The Compass-I system normally has to use active positioning; in this paper several passive positioning methods are put forward. A combined navigation mode based on GLONASS and Compass-I passive navigation is proposed. The differences between the coordinate and time systems of the two navigation systems are analysed. The user position is calculated by the least squares method. The combined navigation algorithm can improve the visible satellite constellation geometry and the positioning precision, so as to ensure the reliability and continuity of the positioning result.
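For context, a generic iterative least-squares position fix from pseudoranges (Gauss-Newton on the range equations) looks roughly like the sketch below; the clock-bias handling and iteration count are illustrative, and the actual combined algorithm additionally reconciles the GLONASS and Compass-I coordinate and time frames.

```python
import numpy as np

def ls_position_fix(sat_pos, pseudoranges, iters=10):
    """Generic Gauss-Newton least-squares fix; state is [x, y, z, receiver clock bias].
    This is an illustrative sketch, not the paper's combined algorithm."""
    state = np.zeros(4)
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - state[:3], axis=1)
        predicted = ranges + state[3]
        # Geometry matrix: unit vectors from satellites to the user, plus a clock column
        H = np.hstack([(state[:3] - sat_pos) / ranges[:, None],
                       np.ones((len(sat_pos), 1))])
        delta, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        state += delta
    return state
```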
APA, Harvard, Vancouver, ISO, and other styles
15

Madkour, A. A. M., M. Alamgir Hossain, Keshav P. Dahal, and H. Yu. "Real-time system identification using intelligent algorithms." IEEE, 2004. http://hdl.handle.net/10454/2471.

Full text
Abstract:
This research presents an investigation into the development of real-time system identification using intelligent algorithms. A simulation platform of a flexible beam vibration using the finite difference (FD) method is used to demonstrate the real-time capabilities of the identification algorithms. A number of approaches and algorithms for on-line system identification are explored and evaluated to demonstrate their merits for real-time implementation. These approaches include identification using (a) a traditional recursive least squares (RLS) filter, (b) genetic algorithms (GAs) and (c) an adaptive neuro-fuzzy inference system (ANFIS) model. The above algorithms are used to estimate a linear discrete second-order model of the flexible beam vibration. The model is implemented, tested and validated to evaluate and demonstrate the merits of the algorithms for real-time system identification. Finally, a comparative analysis of error convergence and real-time computational complexity of the algorithms is presented and discussed through a set of experiments.
APA, Harvard, Vancouver, ISO, and other styles
16

Thomas, Joshua Bryan. "Using the Coherence Function as a Means to Improve Frequency Domain Least Squares System Identification." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1166476245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Zeileis, Achim, and Christian Kleiber. "Approximate replication of high-breakdown robust regression techniques." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/422/1/document.pdf.

Full text
Abstract:
This paper demonstrates that even regression results obtained by techniques close to the standard ordinary least squares (OLS) method can be difficult to replicate if a stochastic model fitting algorithm is employed.
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
18

McClatchey, Nathaniel J. "Tensors: An Adaptive Approximation Algorithm, Convergence in Direction, and Connectedness Properties." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1520508234977924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Yapici, Yavuz. "A Bidirectional Lms Algorithm For Estimation Of Fast Time-varying Channels." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613220/index.pdf.

Full text
Abstract:
Effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels. In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases, and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity is increased by the bidirectional employment of the LMS algorithm, but is nevertheless significantly lower than that of the optimal Wiener filter. The tracking behavior of the bidirectional LMS algorithm is also analyzed, and a steady-state, step-size dependent mean square error (MSE) expression is derived for single-antenna flat-fading channels with various correlation properties. This analysis is then generalized to single-antenna frequency-selective channels, where the so-called independence assumption is no longer applicable due to the channel memory, and to multi-antenna flat-fading channels. The optimal selection of the step-size values is also presented using the results of the MSE analysis. The numerical evaluations show a very good match between the theoretical and the experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that, although there are several works in the literature on bidirectional estimation, none of them provides a theoretical analysis of the underlying estimators. An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and channel models under consideration. The bidirectional LMS algorithm is observed to be very successful for this real-life application, with its increased but still practical level of complexity, near-optimal tracking performance and robustness to imperfect initialization.
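A rough sketch of the bidirectional idea, under the simplifying assumption that the forward and backward LMS weight trajectories are combined by a plain average (the thesis' estimator and its analysis are more involved):

```python
import numpy as np

def lms_pass(X, d, mu=0.05):
    """Ordinary LMS over a block (y = w^T x), returning the weight trajectory."""
    w = np.zeros(X.shape[1], dtype=complex)
    traj = np.zeros_like(X, dtype=complex)
    for n, (x, dn) in enumerate(zip(X, d)):
        e = dn - w @ x
        w = w + mu * e * np.conj(x)
        traj[n] = w
    return traj

def bidirectional_lms(X, d, mu=0.05):
    """Combine a forward and a time-reversed LMS pass over the same block.
    The plain average below is an illustrative combination rule."""
    forward = lms_pass(X, d, mu)
    backward = lms_pass(X[::-1], d[::-1], mu)[::-1]
    return 0.5 * (forward + backward)     # per-sample channel estimate
```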
APA, Harvard, Vancouver, ISO, and other styles
20

Holec, Matúš. "Metóda najemnších štvorcov genetickým algoritmom." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-114285.

Full text
Abstract:
This thesis describes the design and implementation of a genetic algorithm for the approximation of non-linear mathematical functions using the least squares method. One objective of this work is to describe the theoretical basics of genetic algorithms. The second objective is to create a program that could potentially be used by scientific institutions to approximate empirically measured data. Besides the theoretical description of the subject, the text mainly deals with the design of the genetic algorithm and of the whole application solving the given problem. A specific part of the assignment is that the developed application has to support approximation of points by various non-linear mathematical functions over several different intervals, and then ensure that the resulting functions are continuous across all the intervals. This functionality is not offered by any available software.
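A minimal sketch of the general approach, a real-coded genetic algorithm minimising a least-squares cost for a nonlinear model, is given below; the model y ≈ a*exp(b*x) + c, the GA operators and the population settings are illustrative assumptions, and the multi-interval continuity constraints handled by the application are not shown.

```python
import numpy as np

def ga_least_squares(x, y, pop=60, gens=200, sigma=0.1, seed=0):
    """Real-coded GA minimising the least-squares cost of y ~ a*exp(b*x) + c.
    Model and GA settings are illustrative choices."""
    rng = np.random.default_rng(seed)

    def cost(p):
        a, b, c = p
        return np.sum((y - (a * np.exp(b * x) + c)) ** 2)

    population = rng.uniform(-2.0, 2.0, size=(pop, 3))
    for _ in range(gens):
        order = np.argsort([cost(p) for p in population])
        parents = population[order[: pop // 2]]            # truncation selection
        pick = rng.integers(0, len(parents), size=(pop, 2))
        alpha = rng.random((pop, 1))
        children = alpha * parents[pick[:, 0]] + (1 - alpha) * parents[pick[:, 1]]
        children += sigma * rng.standard_normal(children.shape)   # mutation
        children[0] = parents[0]                            # keep the best (elitism)
        population = children
    return min(population, key=cost)
```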
APA, Harvard, Vancouver, ISO, and other styles
21

Nusrat, Nazia. "Development of novel electrical power distribution system state estimation and meter placement algorithms suitable for parallel processing." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/10902.

Full text
Abstract:
The increasing penetration of distributed generation, responsive loads and emerging smart metering technologies will continue the transformation of distribution systems from passive to active network conditions. In such active networks, State Estimation (SE) tools will be essential in order to enable extensive monitoring and enhanced control technologies. In future distribution management systems, distribution system SE must be developed in a scalable manner in order to accommodate small to massive networks and be operable with limited real-time measurements within a restricted time frame. Furthermore, a significant phase of new sensor deployment is inevitable to enable distribution system SE, since present-day distribution networks lack the required level of measurement and instrumentation. In this context, the research presented in this thesis investigates five SE optimization solution methods with various case studies related to expected scenarios of future distribution networks, to determine their suitability. Hachtel's Augmented Matrix method is proposed and developed as a potential SE optimizer for distribution systems due to its favourable performance characteristics with regard to accuracy and convergence. A Differential Evolution Algorithm (DEA) and an Overlapping Zone Approach (OZA) are investigated to achieve scalability of SE tools, following which the network-division-based OZA is proposed and developed. An OZA requiring additional measurements is also proposed to provide a feasible solution for voltage estimation at a reduced computation cost. Recognising the need to deploy additional measurements to enable distribution system SE, a novel meter placement algorithm that provides economical and feasible solutions is developed and demonstrated. The algorithm is strongly focused on reducing voltage estimation errors and is capable of reducing the error below a desired threshold with limited measurements. The scalable SE solution and the meter placement algorithm are applied on a multi-processor system in order to examine the effective reduction of computation time. Significant improvement in computation time is observed in both cases by dividing the problem into smaller segments. However, it is important to note that further network division reduces computation time at the cost of estimation accuracy. Different networks, including both idealised (16, 77, 356 and 711 node UKGDS) and real (40 and 43 node EG) distribution network data, are used as appropriate to the requirements of the applications throughout this thesis.
APA, Harvard, Vancouver, ISO, and other styles
22

Cakmak, Mehtap. "Development Of A Multigrid Accelerated Euler Solver On Adaptively Refined Two- And Three-dimensional Cartesian Grids." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/2/12610753/index.pdf.

Full text
Abstract:
Cartesian grids offer a valuable option for simulating aerodynamic flows around complex geometries such as multi-element airfoils, aircraft and rockets. Therefore, an adaptively refined Cartesian grid generator and Euler solver are developed. For the mesh generation part of the algorithm, dynamic data structures are used to determine connectivity information between cells, and a uniform mesh is created in the domain. Marching squares and marching cubes algorithms are used to form the interfaces of cut and split cells. Geometry-based cell adaptation is applied in the mesh generation. After obtaining an appropriate mesh around the input geometry, the solution is obtained using either a flux vector splitting method or Roe's approximate Riemann solver with a cell-centered approach. Least squares reconstruction of the flow variables within each cell is used to determine high-gradient regions of the flow. A solution-based adaptation method is then applied to the current mesh in order to refine these regions and also to coarsen regions where unnecessarily small cells exist. Multistage time stepping with local time steps is used to increase the convergence rate, and the FAS multigrid technique is used to increase it further. Implementation of geometry- and solution-based adaptation is clearly easier for Cartesian meshes than for other types of meshes. Moreover, the presented numerical results show the accuracy and efficiency of the algorithm, especially when geometry- and solution-based adaptation is used. Finally, Euler solutions on Cartesian grids around airfoils, projectiles and wings are compared with experimental and numerical data available in the literature, and the accuracy and efficiency of the solver are verified.
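For reference, the least-squares reconstruction step mentioned above is, in its simplest unweighted form, a small least-squares problem per cell; the sketch below is a generic version, not the thesis' exact formulation.

```python
import numpy as np

def ls_gradient(cell_center, cell_value, nbr_centers, nbr_values):
    """Unweighted least-squares gradient of a cell-centred field: choose the
    gradient so that a linear extrapolation best matches the neighbouring cell
    values. The stencil and weighting are illustrative choices."""
    dX = nbr_centers - cell_center        # (k, dim) offsets to neighbour centroids
    dphi = nbr_values - cell_value        # (k,) value differences
    grad, *_ = np.linalg.lstsq(dX, dphi, rcond=None)
    return grad
```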
APA, Harvard, Vancouver, ISO, and other styles
23

Wan, Chunru. "Systolic algorithms and applications." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/10479.

Full text
Abstract:
Computer performance has improved tremendously since the development of the first all-purpose, all-electronic digital computer in 1946. However, engineers, scientists and researchers keep making efforts to improve computer performance further to meet the demanding requirements of many applications. There are basically two ways to improve computer performance in terms of computational speed. One way is to use faster devices (VLSI chips). Although faster and faster VLSI components have contributed a great deal to the improvement of computation speed, breakthroughs in increasing the switching speed and circuit densities of VLSI devices will be difficult and costly in future. The other way is to use parallel processing architectures, which employ multiple processors to perform a computation task. When multiple processors work together, an appropriate architecture is very important to achieve the maximum performance in a cost-effective manner. Systolic arrays are ideally qualified for computationally intensive applications with inherent massive parallelism because they capitalize on regular, modular, rhythmic, synchronous, concurrent processes that require intensive, repetitive computation. This thesis can be divided into three parts. The first part is an introductory part containing Chap. 1 and Chap. 2. The second part, composed of Chap. 3 and Chap. 4, concerns the systolic design methodology. The third part deals with several systolic array designs for different applications.
APA, Harvard, Vancouver, ISO, and other styles
24

Puikkonen, Panu Tapani. "Development of an Adaptive Equalization Algorithm Using Acoustic Energy Density." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1686.

Full text
Abstract:
Sound pressure equalization of audio signals using digital signal processors has been a subject of ongoing study for many years. The traditional approach is to equalize sound at a point in a listening environment, but because of its specific dependence on the room frequency response between a source and receiver position, this equalization generally causes the spectral response to worsen significantly at other locations in the room. This work presents both a time-invariant and a time-varying implementation of an adaptive acoustic energy density equalization filter for a one-dimensional sound field. Energy density equalization addresses the aforementioned challenge and others that relate to sound equalization. The theory and real-time implementation of time-invariant sound pressure and energy density equalizers designed using the least-squares method are presented, and their performances are compared. An implementation of a time-varying energy density equalizer is also presented. Time-invariant equalization results based on real-time measurements in a plane-wave tube are presented. A sound pressure equalizer results in a nearly flat spectral magnitude at the point of equalization. However, it causes the frequencies corresponding to spatial nulls at that point to be undesirably boosted elsewhere in the sound field, where those nulls do not exist at the same frequencies. An energy density equalization filter identifies and compensates for all resonances and other global spectral effects of the tube and loudspeaker. It does not attempt to equalize the spatially varying frequency nulls caused by local pressure nodes at the point of equalization. An implementation of a time-varying energy density equalizer is also presented. This method uses the filtered-x filter update to adjust the filter coefficients in real-time to adapt to changes in the sound field. Convergence of the filter over time is demonstrated as the closed end of the tube is opened, then closed once again. Thus, the research results demonstrate that an acoustic energy density filter can be used to time-adaptively equalize global spectral anomalies of a loudspeaker and a one-dimensional sound field.
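A minimal sketch of a time-invariant least-squares equaliser design of the kind described above: given a measured impulse response, FIR coefficients are chosen so that the cascade approximates a pure delay. The filter length and delay are illustrative, and the energy-density weighting and filtered-x adaptation used in the thesis are not shown.

```python
import numpy as np

def ls_equalizer(h, length=64, delay=32):
    """Least-squares FIR equaliser: find w so that (h * w) approximates a pure
    delayed impulse. Length and delay are illustrative assumptions."""
    n_out = len(h) + length - 1
    H = np.zeros((n_out, length))
    for k in range(length):               # convolution (Toeplitz) matrix of h
        H[k:k + len(h), k] = h
    target = np.zeros(n_out)
    target[delay] = 1.0                   # desired overall response: delayed impulse
    w, *_ = np.linalg.lstsq(H, target, rcond=None)
    return w
```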
APA, Harvard, Vancouver, ISO, and other styles
25

Allen, Matthew Scott. "Global and Multi-Input-Multi-Output (MIMO) Extensions of the Algorithm of Mode Isolation (AMI)." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6877.

Full text
Abstract:
A wide range of dynamic systems can be approximated as linear and time invariant, for which a wealth of tools are available to characterize or modify their dynamic characteristics. Experimental modal analysis (EMA) is a procedure whereby the natural frequencies, damping ratios and mode shapes which parameterize vibratory, linear, time invariant systems are derived from experimentally measured response data. EMA is commonly applied in a multitude of applications, for example, to generate experimental models of dynamic systems, validate finite element models and to characterize dissipation in vibratory systems. Recent EMA has also been used to characterize damage or defects in a variety of systems. The Algorithm of Mode Isolation (AMI), presented by Drexel and Ginsberg in 2001, employs a unique strategy for modal parameter estimation in which modes are sequentially identified and subtracted from a set of FRFs. Their natural frequencies, damping ratios and mode vectors are then refined through an iterative procedure. This contrasts conventional multi-degree-of-freedom (MDOF) identification algorithms, most of which attempt to identify all of the modes of a system simultaneously. This dissertation presents a hybrid multi-input-multi-output (MIMO) implementation of the algorithm of mode isolation that improves the performance of AMI for systems with very close or weakly excited modes. The algorithmic steps are amenable to semi-automatic identification, and many FRFs can be processed efficiently and without concern for ill-conditioning, even when many modes are identified. The performance of the algorithm is demonstrated on noise contaminated analytical response data from two systems having close modes, one of which has localized modes while the other has globally responsive modes. The results are compared with other popular algorithms. MIMO-AMI is also applied to experimentally obtained data from shaker excited tests of the Z24 highway bridge, demonstrating the algorithm's performance on a data set typical of many EMA applications. Considerations for determining the number of modes active in the frequency band of interest are addressed, and the results obtained are compared to those found by other groups of researchers.
APA, Harvard, Vancouver, ISO, and other styles
26

Siqueira, Júnior Erinaldo Leite. "Uso de algoritmo genético no ajuste linear através de dados experimentais." Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/8028.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
In this paper we discuss the problem of fitting a line to experimental data using a bio-inspired optimization method, that is, one that imitates biological concepts in an attempt to find optimal or suboptimal results. The method used is the genetic algorithm (GA), which makes use of the theory of Darwinian evolution to find the best route to the desired maximum point. Traditionally, the linear fit is obtained through the method of least squares. That method is efficient, but it is difficult to justify to pre-calculus classes. The GA alternative is therefore a computationally exhaustive procedure, but one that is easy to justify to these classes. Thus, the purpose of this study is to compare the results of linear fitting for some control scenarios using the two methods and to certify the quality of the fits obtained by the approximate method. At the end of the work it was found that the results are solid enough to justify the alternative method, and that the proposed use of this optimization process has the potential to spark interest in other areas of mathematics.
APA, Harvard, Vancouver, ISO, and other styles
27

RENTERIA, RAUL PIERRE. "ALGORITHMS FOR PARTIAL LEAST SQUARES REGRESSION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4362@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
The purpose of many problems in the machine learning field is to model the complex relationship in a system between the input X and output Y variables when no theoretical model is available. Partial Least Squares (PLS) is one linear method for this kind of problem, suited to the case of many input variables compared to the number of samples. In this thesis we present versions of the classical PLS algorithm designed for large data sets while keeping a good predictive power. Among the main results we highlight PPLS (Parallel PLS), an exact parallel version for the case of only one output variable, and DPLS (Direct PLS), a fast and approximate version for the case of more than one output variable. On the other hand, we also present some variants of the regression algorithm that can enhance the predictive quality based on a non-linear formulation. We introduce LPLS (Lifted PLS), for the case of only one dependent variable, based on the theory of kernel functions; KDPLS, a non-linear formulation for DPLS; and MKPLS, a multi-kernel algorithm that can result in a more compact model and better prediction quality, thanks to the use of several kernels for the model building.
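For orientation, a basic single-output PLS regression (NIPALS-style) is sketched below; it illustrates the latent-variable fitting that the parallel and kernel variants above build on, and it is not the PPLS, DPLS or MKPLS algorithms themselves.

```python
import numpy as np

def pls1(X, y, n_components=3):
    """Basic single-output PLS regression (NIPALS-style), returning the
    regression coefficient vector for centred data. Illustrative sketch only."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w = w / np.linalg.norm(w)          # weight vector
        t = X @ w                          # scores
        p = X.T @ t / (t @ t)              # X loadings
        q = (y @ t) / (t @ t)              # y loading
        X = X - np.outer(t, p)             # deflation
        y = y - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)
```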
APA, Harvard, Vancouver, ISO, and other styles
28

Fraley, Christina. "Solution of nonlinear least-squares problems /." Stanford, CA : Dept. of Computer Science, Stanford University, 1987. http://doi.library.cmu.edu/10.1184/OCLC/19613955.

Full text
Abstract:
Thesis (Ph. D.)--Stanford University, 1987.
"June 1987." This research was supported in part by Joseph Oliger under Office of Naval Research contract N00014-82-K-0335, by Stanford Linear Accelerator Center and the Systems Optimization Laboratory under Army Research Office contract DAAG29-84-K-0156. Includes bibliographies.
APA, Harvard, Vancouver, ISO, and other styles
29

Kong, Seunghyun. "Linear programming algorithms using least-squares method." Diss., Georgia Institute of Technology, 2007. http://etd.gatech.edu/theses/available/etd-04012007-010244/.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2007.
Martin Savelsbergh, Committee Member ; Joel Sokol, Committee Member ; Earl Barnes, Committee Co-Chair ; Ellis L. Johnson, Committee Chair ; Prasad Tetali, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles
30

Nyberg, Tobias. "Torque-Based Load Estimation for Passenger Vehicles." Thesis, Linköpings universitet, Reglerteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-179208.

Full text
Abstract:
An accurate estimate of the mass of a passenger vehicle is important for several safety systems and environmental aspects. In this thesis, an algorithm for estimating the mass of a passenger vehicle using the recursive least squares method is presented. The algorithm is based on a physical model of the vehicle, is designed to run in real time onboard a vehicle, and uses the wheel torque signal calculated in the engine's electronic control unit. One contribution that distinguishes this thesis from previous work on the same topic, which has used the engine torque, is that no estimation of the dynamics in the powertrain is needed. The drawback of using this method is that the algorithm depends on the accuracy of the estimation done in the engine electronic control unit. Two different versions of the recursive least squares method (RLS) have been developed: one with a single forgetting factor and one with two forgetting factors. The estimation performance of the two versions is compared on several different real-world driving scenarios, which include driving on country roads, highways and city roads, and different loads in the vehicle. The algorithm with a single forgetting factor estimates the mass with an average error over all tests of 4.42% and the algorithm with multiple forgetting factors estimates the mass with an average error of 4.15%, which is in line with state-of-the-art algorithms presented in other studies. In a sensitivity analysis, it is shown that the algorithms are robust to changes in the drag coefficient. The single forgetting factor algorithm is robust to changes in the rolling resistance coefficient, whereas the multiple forgetting factor algorithm needs the rolling resistance coefficient to be estimated with fairly good accuracy. Both versions of the algorithm need to know the wheel radius with an accuracy of 90%. The results show that the algorithms estimate the mass accurately for all three driving scenarios and perform best on highways, with an average error of 2.83% and 2.69% for the single forgetting factor algorithm and the multiple forgetting factor algorithm, respectively. The results indicate that it is possible to use either algorithm in a real-world scenario, where the choice of algorithm depends on the sought-after robustness.
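A generic single-forgetting-factor RLS of the kind described above might look like the sketch below; the longitudinal-dynamics regressor, the initial guesses and the forgetting factor are illustrative assumptions, not the thesis' exact model.

```python
import numpy as np

def rls_mass_estimate(wheel_force, accel, slope, lam=0.999, g=9.81):
    """RLS with one forgetting factor on F ~ m*(a + g*sin(slope)) + c, where c
    lumps rolling and aerodynamic resistance. Model and values are illustrative."""
    theta = np.array([1500.0, 300.0])              # initial [mass, resistance] guess
    P = np.diag([1e4, 1e4])
    for F, a, s in zip(wheel_force, accel, slope):
        phi = np.array([a + g * np.sin(s), 1.0])   # regressor
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (F - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta[0]                                # estimated vehicle mass
```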
APA, Harvard, Vancouver, ISO, and other styles
31

Fernandes, Carlos Alexandre Rolim. "Equalização adaptativa e autodidata de canais lineares e não-lineares utilizando o algoritmo do módulo constante." Universidade Federal do Ceará, 2005. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2041.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
This work studies and proposes algorithms to perform blind equalization of linear and nonlinear channels inspired by the Constant Modulus Algorithm (CMA). The CMA works very well for modulations in which all points of the signal constellation have the same radius, as in Phase Shift Keying (PSK) modulations. However, when the constellation points are characterized by multiple radii, as in Quadrature Amplitude Modulation (QAM) signals, the CMA does not work properly in many situations. Thus, the techniques proposed here are designed to improve the performance of the CMA, in terms of speed of convergence and residual error, when working with signals transmitted with multiple magnitudes, in particular QAM signals. As with the CMA, these techniques should offer a good compromise among performance, complexity and robustness. To do so, the techniques use the last decided symbol to estimate a reference radius for the output of the equalizer. In fact, they can be seen as modifications of the CMA and of some of its derivatives for constellations with multiple radii. The proposition of stochastic gradient algorithms is concluded with the development of new adaptive blind techniques, based on the CMA, to equalize channels with a Wiener structure, which consists of a linear block with memory followed by a memoryless nonlinearity. We develop expressions for the adaptation of the equalizer using a unified notation for three different equalizer filter structures: i) a Hammerstein filter, ii) a diagonal Volterra filter and iii) a full Volterra filter. A theoretical analysis of the main proposed technique, the Decision Directed Modulus Algorithm (DDMA), is also carried out. We study the convergence and the stability of the DDMA by means of an analysis of the minima of the DDM cost function. We also develop an analytic expression for the Excess Mean Square Error (EMSE) provided by the DDMA in the noiseless case. Then, we find some interesting relationships among the DDM, the CM and the Wiener cost functions. We also develop a class of normalized algorithms and a class of Recursive Least Squares (RLS)-type algorithms for blind equalization inspired by the CMA-based techniques studied. Each family is composed of four algorithms with desirable properties and advantages over the original CM algorithms, especially when working with high-level QAM signals. Normalized and RLS techniques for the equalization of Wiener channels are also developed. The behavior of the proposed classes of algorithms is assessed through computational simulations. We verify that the proposed techniques provide significant gains in performance, in terms of speed of convergence and residual error, when compared to the classical algorithms.
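For illustration, the classical CMA step and a decision-directed variant in which the reference radius is taken from the last decided symbol (the general idea behind the DDMA) are sketched below; the slicer and step size are illustrative, and this is not the thesis' full family of algorithms.

```python
import numpy as np

def cma_update(w, x, mu, R2):
    """One CMA step for y = w^H x with a fixed dispersion constant R2."""
    y = np.vdot(w, x)
    e = (np.abs(y) ** 2 - R2) * np.conj(y)
    return w - mu * e * x

def ddm_update(w, x, mu, constellation):
    """Same step, but the reference radius comes from the last decided symbol
    (illustrative nearest-point slicer)."""
    y = np.vdot(w, x)
    decided = constellation[np.argmin(np.abs(constellation - y))]
    e = (np.abs(y) ** 2 - np.abs(decided) ** 2) * np.conj(y)
    return w - mu * e * x
```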
APA, Harvard, Vancouver, ISO, and other styles
32

Dobson, William Keith. "Method for Improving the Efficiency of Image Super-Resolution Algorithms Based on Kalman Filters." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/82.

Full text
Abstract:
The Kalman Filter has many applications in control and signal processing but may also be used to reconstruct a higher resolution image from a sequence of lower resolution images (or frames). If the sequence of low resolution frames is recorded by a moving camera or sensor, where the motion can be accurately modeled, then the Kalman filter may be used to update pixels within a higher resolution frame to achieve a more detailed result. This thesis outlines current methods of implementing this algorithm on a scene of interest and introduces possible improvements for the speed and efficiency of this method by use of block operations on the low resolution frames. The effects of noise on camera motion and various blur models are examined using experimental data to illustrate the differences between the methods discussed.
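For context, the generic Kalman predict/update step that such super-resolution schemes build on is sketched below; in that setting the state would be the high-resolution pixel vector and the observation operator would encode warping, blur and decimation. The shapes and noise models here are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """Generic Kalman predict/update step. For super-resolution, x would be the
    high-resolution pixel vector and H the warp/blur/decimation operator;
    all matrices here are assumed inputs."""
    x_pred = F @ x                          # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)   # update with low-resolution frame z
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```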
APA, Harvard, Vancouver, ISO, and other styles
33

Thouzé, Arsène. "Méthode numérique d'estimation du mouvement des masses molles." Thèse, Poitiers, 2013. http://hdl.handle.net/1866/10763.

Full text
Abstract:
Biomechanical analysis of human movement using optoelectronic systems and skin markers treats the body segments as rigid bodies. However, the motion of the soft tissue relative to the bone, including muscle and fat, causes the markers to move. This displacement has two components: an own component, which corresponds to the random motion of each marker, and an in-unison component, the common movement of the skin markers caused by the motion of the underlying wobbling mass. While most studies aim to minimize these displacements, computer simulation models have shown that soft tissue motion relative to the bones reduces the estimated joint kinetics. This observation is only available through simulation, because no method has been able to separate the kinematics of the wobbling mass from the kinematics of the bones. The main objective of this thesis is to develop a numerical method able to distinguish these two kinematics. The first aim was to assess a local optimisation method for estimating the soft tissue motion relative to the humerus, using intra-cortical pins screwed into the bone in three subjects. The results show that local optimisation underestimates the marker displacements by 50% and leads to a different ranking of the markers in terms of displacement. The limitation of local optimisation is that it does not consider all the components of the soft tissue motion, especially the in-unison component. The second aim was to develop a numerical method that accounts for all the components of the soft tissue motion. More specifically, this method should provide joint kinematics similar to conventional approaches, estimate a larger marker displacement, and separate the two components. The lower limb is modelled using a 10-degree-of-freedom kinematic chain reconstructed by global optimisation using only the markers placed on the pelvis and the medial face of the shank. Estimating the joint kinematics without the markers placed on the thigh and the calf avoids the influence of their displacement on the reconstruction of the kinematic model. This method was tested on 13 subjects performing hopping trials and recovered up to 2.1 times more marker displacement, depending on the method considered, while ensuring similar joint kinematics. A vector approach showed that the marker displacement is mainly due to the in-unison component. A matrix approach combining local optimisation with the kinematic chain showed that the wobbling mass moves mainly about the longitudinal axis and along the antero-posterior axis of the bone. The originality of this thesis is to numerically separate the bone kinematics from the wobbling mass kinematics, and to separate the two components of the soft tissue motion. The methods developed in this thesis increase knowledge of soft tissue motion and allow future studies to consider its effect on joint kinetics.
APA, Harvard, Vancouver, ISO, and other styles
34

Skoglund, Ingegerd. "Algorithms for a Partially Regularized Least Squares Problem." Licentiate thesis, Linköping : Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Rossi, Michel. "Iterative least squares algorithms for digital filter design." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/10099.

Full text
Abstract:
In this thesis, we propose new algorithms to simplify and improve the design of IIR digital filters and M-band cosine modulated filter banks. These algorithms are based on the Iterative Least Squares (ILS) approach. We first review the various Iterative Reweighted Least Squares (IRLS) methods used to design Chebyshev and $L_p$ linear phase FIR filters. Then we focus on the ILS design of IIR filters and filter banks. For the design of Chebyshev IIR filters in the log-magnitude sense, we propose a Remez-type IRLS algorithm. This novel approach significantly accelerates Kobayashi's and Lim's IRLS methods and simplifies the traditional rational Remez algorithm. For the design of M-band cosine modulated filter banks, we propose three new ILS algorithms, specific to the design of Pseudo Quadrature Mirror Filter (QMF) banks, Near Perfect Reconstruction (NPR) Pseudo QMF banks and Perfect Reconstruction (PR) QMF banks. They converge quickly, are simple to implement, and are more flexible than traditional nonlinear optimization methods. Short MATLAB programs implementing the proposed algorithms are included.
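As an illustration of the IRLS family this abstract refers to, the following is a minimal Lawson-style reweighting loop that drives a weighted least-squares FIR fit towards the Chebyshev (minimax) solution. It is a generic textbook sketch, not the Remez-type IRLS algorithm of the thesis, and the function and variable names are assumptions.

```python
import numpy as np

def lawson_irls_fir(A, d, n_iter=50):
    """Lawson-style IRLS: drive a weighted LS fit of A @ h ~ d towards minimax.

    A: (n_freq, n_coef) real design matrix (e.g., cosine basis on a dense grid),
    d: (n_freq,) desired frequency-response samples.
    """
    w = np.full(len(d), 1.0 / len(d))                    # initial uniform weights
    for _ in range(n_iter):
        sw = np.sqrt(w)
        h, *_ = np.linalg.lstsq(sw[:, None] * A, sw * d, rcond=None)
        e = np.abs(A @ h - d)                            # current error magnitude
        w = w * e                                        # emphasise large errors
        w = w / (w.sum() + 1e-30)                        # renormalise the weights
    return h
```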
APA, Harvard, Vancouver, ISO, and other styles
36

Oddsdóttir, Hildur Æsa. "Macroscopic Modeling of Metabolic Reaction Networks and Dynamic Identification of Elementary Flux Modes by Column Generation." Doctoral thesis, KTH, Optimeringslära och systemteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172367.

Full text
Abstract:
In this work an intersection between optimization methods and animal cell culture modeling is considered. We present optimization-based methods for analyzing and building models of cell cultures; models that could be used when designing the environment the cells are cultivated in, i.e., the medium. Since both the medium and the cell line considered are complex, designing a good medium is not straightforward, and developing a model of cell metabolism is one step towards facilitating medium design. In order to develop a model of the metabolism, the methods presented in this work make use of an underlying metabolic reaction network and extracellular measurements. External substrates and products are connected via the relevant elementary flux modes (EFMs). Modeling from EFMs is generally limited to small networks, because the number of EFMs explodes as the underlying network grows. The aim of this work is to enable modeling with more complex networks by presenting methods that dynamically identify a subset of the EFMs. In papers A and B we consider a model consisting of the EFMs along with the flux over each mode. In paper A we show how such a model can be determined by an optimization technique named column generation. In paper B the robustness of such a model with respect to measurement errors is considered; we show that a robust version of the underlying optimization problem in paper A can be formed and that column generation can be applied to identify EFMs dynamically. In papers C and D a kinetic macroscopic model is considered. In paper C we show how a kinetic macroscopic model can be constructed from the EFMs by assuming that the flux along each EFM follows Michaelis-Menten type kinetics. This modeling method is able to capture cell behavior in varied types of media, but the size of the underlying network is a limitation. In paper D this limitation is countered by developing an approximation algorithm that can dynamically identify EFMs for a kinetic model.
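The master problem behind the column-generation approach described above can, in its simplest static form, be read as a nonnegative least-squares fit of measured exchange rates to a set of candidate EFMs. The snippet below sketches that reading with scipy's nnls; the matrix and data are invented for illustration, and the column-generation step itself (pricing out new EFMs) is not shown.

```python
import numpy as np
from scipy.optimize import nnls

# E: (n_measured_rates, n_EFMs) matrix whose columns give the net external
#    exchange (substrate uptake / product secretion) of each candidate EFM.
# q: (n_measured_rates,) vector of measured exchange rates for one condition.
# Both are hypothetical numbers, only meant to show the shape of the problem.
E = np.array([[-1.0, -1.0,  0.0],
              [ 0.9,  0.0, -1.0],
              [ 0.0,  0.8,  0.5]])
q = np.array([-1.2, 0.7, 0.4])

fluxes, residual = nnls(E, q)   # nonnegative flux over each EFM, LS residual
print(fluxes, residual)
```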


APA, Harvard, Vancouver, ISO, and other styles
37

Santos, Samuel Batista dos. "Estudo de algoritmos adaptativos aplicados a redes de sensores sem fio : caso supervisionado e não supervisionado." Repositório Institucional da UFABC, 2014.

Find full text
Abstract:
Advisor: Profa. Dra. Aline de Oliveira Neves Panazio
Master's thesis - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2014.
Wireless sensor networks (WSN) have been used for observing phenomena, system identification, channel equalization, and applications in many other areas. In homogeneous networks with a peer-to-peer protocol, in which each sensor can process its own information and communicate with its neighbours, various adaptive algorithms have been applied to process the measured data. These algorithms can be supervised or unsupervised. When common parameters are estimated by distributed processing, the network topology becomes an important feature and must be taken into account by the algorithms. Such algorithms operate in diffusion mode, that is, they use the information exchanged between neighbouring sensors to update the coefficients of each sensor's adaptive filter. The network topology is mapped in matrix form through so-called combination matrices. In this work, we study the impact of the choice of combination matrix on the performance of supervised algorithms. For unsupervised (blind) algorithms, the only proposal found in the literature applies to the very restricted case of a ring topology with unidirectional communication between nodes. We therefore propose a new algorithm, based on the classical constant modulus criterion, capable of operating in diffusion mode on any topology. The proposed algorithm is simulated in several scenarios and always shows advantages over a network without cooperation between nodes.
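For readers unfamiliar with diffusion adaptation, the sketch below shows a generic adapt-then-combine diffusion LMS in which the combination matrix encodes the network topology, the quantity whose choice the dissertation studies. This is a standard supervised scheme shown for illustration only, not the constant-modulus diffusion algorithm proposed in the dissertation.

```python
import numpy as np

def diffusion_lms(U, d, C, mu=0.01):
    """Adapt-then-combine diffusion LMS.

    U: (n_nodes, n_samples, n_taps) regressors per node,
    d: (n_nodes, n_samples) desired signals per node,
    C: (n_nodes, n_nodes) combination matrix (rows sum to one).
    """
    n_nodes, n_samples, n_taps = U.shape
    W = np.zeros((n_nodes, n_taps))                  # one filter per node
    for i in range(n_samples):
        # adaptation step: each node runs a local LMS update
        psi = np.empty_like(W)
        for k in range(n_nodes):
            e = d[k, i] - U[k, i] @ W[k]
            psi[k] = W[k] + mu * e * U[k, i]
        # combination step: each node averages its neighbours' estimates
        W = C @ psi
    return W
```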
APA, Harvard, Vancouver, ISO, and other styles
38

Jacmenovic, Dennis, and dennis_jacman@yahoo com au. "Optimisation of Active Microstrip Patch Antennas." RMIT University. Electrical and Computer Engineering, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20060307.144507.

Full text
Abstract:
This thesis presents a study of impedance optimisation of active microstrip patch antennas at multiple frequency points. A single-layered aperture coupled microstrip patch antenna has been optimised to match the source reflection coefficient of a transistor in designing an active antenna. The active aperture coupled microstrip patch antenna was optimised to satisfy Global Positioning System (GPS) frequency specifications. A rudimentary aperture coupled microstrip patch antenna consists of a rectangular antenna element etched on the top surface of two dielectric substrates. The substrates are separated by a ground plane and a microstrip feed is etched on the bottom surface. A rectangular aperture in the ground plane provides coupling between the feed and the antenna element. This type of antenna, which conveniently isolates any circuit at the feed from the antenna element, is suitable for integrated circuit design and is simple to fabricate. An active antenna design directly couples an antenna to an active device, saving real estate and power. This thesis focuses on designing an aperture coupled patch antenna directly coupled to a low noise amplifier as part of the front end of a GPS receiver. In this work an in-house software package, dubbed ACP by its creator Dr Rod Waterhouse, for calculating aperture coupled microstrip patch antenna performance parameters was linked to HP-EEsof, a microwave computer aided design and simulation package by Hewlett-Packard. An ANSI C module in HP-EEsof was written to bind the two packages. This affords the designer the powerful analysis tools offered in HP-EEsof and the fast analysis of ACP for seamless system design. Moreover, the optimisation algorithms in HP-EEsof were employed to investigate which algorithms are best suited to optimising patch antennas. The active antenna design presented in this study avoids an input matching network by designing the antenna itself to present the desired source termination to the transistor. It has been demonstrated that a dual-band microstrip patch antenna can be successfully designed to match the source reflection coefficient, avoiding the need to insert a matching network. Maximum power transfer in electrical circuits is accomplished by matching the impedance between entities, which is generally achieved with a matching network. Passive matching networks employed in amplifier design generally consist of discrete components up to the low GHz frequency range or distributed elements at higher frequencies. The source termination of a low noise amplifier greatly influences its noise, gain and linearity, and is normally controlled by designing a suitable input matching network. Ten diverse search methods offered in HP-EEsof were used to optimise an active aperture coupled microstrip patch antenna. This study has shown that the algorithms based on randomised search techniques and the Genetic algorithm provide the most robust performance. The optimisation results were used to design an active dual-band antenna.
APA, Harvard, Vancouver, ISO, and other styles
39

Djaneye-Boundjou, Ouboti Seydou Eyanaa. "Discrete-time Concurrent Learning for System Identification and Applications: Leveraging Memory Usage for Good Learning." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton151298579862899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Roosta-Khorasani, Farbod. "Randomized algorithms for solving large scale nonlinear least squares problems." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52663.

Full text
Abstract:
This thesis presents key contributions towards devising highly efficient stochastic reconstruction algorithms for solving large scale inverse problems, where a large data set is available and the underlying physical system is complex, e.g., modeled by partial differential equations (PDEs). We begin by developing stochastic and deterministic dimensionality reduction methods to transform the original high-dimensional data set into one with much smaller dimensions for which the computations are more manageable. We then incorporate such methods in our efficient stochastic reconstruction algorithms. In the presence of corrupted or missing data, many such dimensionality reduction methods cannot be used efficiently. To alleviate this issue, in the context of PDE inverse problems, we develop and mathematically justify new techniques for replacing (or filling) the corrupted (or missing) parts of the data set. Our data replacement/completion methods are motivated by theory in Sobolev spaces regarding the properties of weak solutions along the domain boundary. All of the stochastic dimensionality reduction techniques can be reformulated as Monte-Carlo (MC) methods for estimating the trace of a symmetric positive semi-definite (SPSD) matrix. In the next part of the thesis, we present a probabilistic analysis of such randomized trace estimators and prove various computable and informative conditions on the sample size required for these Monte-Carlo methods to achieve a prescribed probabilistic relative accuracy. Although computationally efficient, a major drawback of any (randomized) approximation algorithm is the introduction of “uncertainty” in the overall procedure, which could cast doubt on the credibility of the obtained results. The last part of this thesis consists of uncertainty quantification of the stochastic steps of the approximation algorithms presented earlier. As a result, we present highly efficient variants of our original algorithms in which the degree of uncertainty can easily be quantified and adjusted if needed. The uncertainty quantification presented in the last part of the thesis is an application of our novel results regarding the maximal and minimal tail probabilities of non-negative linear combinations of gamma random variables, which can be considered independently of the rest of this thesis.
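The randomized trace estimation mentioned above is commonly implemented as a Hutchinson-type Monte-Carlo estimator; the sketch below illustrates the idea for an implicit SPSD matrix with Rademacher probe vectors. It is a generic illustration under that assumption and does not reproduce the thesis's sample-size conditions.

```python
import numpy as np

def hutchinson_trace(matvec, n, n_samples=100, rng=None):
    """Monte-Carlo estimate of trace(A) for an implicit SPSD matrix A.

    matvec: callable returning A @ v for a vector v of length n.
    Uses Rademacher (+/-1) probe vectors, for which E[v.T @ A @ v] = trace(A).
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)
        total += v @ matvec(v)
    return total / n_samples

# quick check against the exact trace of a random SPSD matrix
A = np.random.randn(50, 50)
A = A @ A.T
print(hutchinson_trace(lambda v: A @ v, 50, n_samples=500), np.trace(A))
```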
APA, Harvard, Vancouver, ISO, and other styles
41

Huo, Jia Q. "Numerical properties of adaptive recursive least-squares (RLS) algorithms with linear constraints." Curtin University of Technology, Australian Telecommunications Research Institute, 1999. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10094.

Full text
Abstract:
Adaptive filters have found applications in many signal processing problems. In some situations, linear constraints are imposed on the filter weights so that the filter is forced to exhibit a certain desired response. Several algorithms for linearly constrained least-squares adaptive filtering have been developed in the literature. When implemented with finite-precision arithmetic, these algorithms are inevitably subjected to rounding errors, and it is essential to understand how they react to them. In this thesis, the numerical properties of three linearly constrained least-squares adaptive filtering algorithms, namely the linearly constrained fast least-squares algorithm, the linear systolic array for MVDR beamforming and the linearly constrained QRD-RLS algorithm, are studied. It is shown that all these algorithms can be separated into a constrained part and an unconstrained part. The numerical properties of unconstrained least-squares algorithms (i.e., the unconstrained part of the linearly constrained algorithms under study) are reviewed from the perspectives of error propagation, error accumulation and numerical persistency. It is shown that persistent excitation and sufficient numerical resolution are needed to ensure the stability of the CRLS algorithm, while the QRD-RLS algorithm is unconditionally stable. The numerical properties of the constrained algorithms are then examined. Based on the technique by which the constraints are applied, these algorithms can be grouped into two categories. The first two algorithms share a similar structure in that the unconstrained part precedes the constrained part; error propagation analysis shows that this structure gives rise to unstable error propagation in the constrained part. In contrast, the constrained part of the third algorithm precedes the unconstrained part, and it is shown that this algorithm gives an exact solution to a linearly constrained least-squares adaptive filtering problem with perturbed constraints and perturbed input data. A minor modification to the constrained part of the linearly constrained QRD-RLS algorithm is proposed to avoid a potential numerical difficulty due to the Gaussian elimination operation employed in the algorithm.
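For context, the batch problem that these constrained adaptive algorithms solve recursively is the linearly constrained least-squares problem min ||d - Xw||² subject to Cᵀw = f. A minimal sketch of its direct KKT solution follows; it is illustrative only and says nothing about the finite-precision behaviour analysed in the thesis.

```python
import numpy as np

def constrained_ls(X, d, C, f):
    """Solve min_w ||d - X @ w||^2 subject to C.T @ w = f via the KKT system."""
    n = X.shape[1]                      # number of filter weights
    m = C.shape[1]                      # number of linear constraints
    kkt = np.block([[X.T @ X, C],
                    [C.T, np.zeros((m, m))]])
    rhs = np.concatenate([X.T @ d, f])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]                      # discard the Lagrange multipliers
```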
APA, Harvard, Vancouver, ISO, and other styles
42

Parreira, Wemerson Delcio. "Comportamento estocástico do algoritmo kernel least-mean-square." Florianópolis, 2012. http://repositorio.ufsc.br/xmlui/handle/123456789/99402.

Full text
Abstract:
Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica.
Kernel-based algorithms have become popular in nonlinear signal processing. A nonlinear processing task can be modeled as a linear one applied to a signal transformed into a reproducing kernel Hilbert space (RKHS). The linear operation in the transformed space can be implemented with low computational complexity and can be more easily studied and designed. The Kernel Least-Mean-Square (KLMS) algorithm is popular in nonlinear adaptive filtering due to its simplicity and robustness. Practical implementations of this algorithm require a finite-order model of the nonlinear processing, which modifies the algorithm behavior compared to the LMS simply mapped to the RKHS and leads to the need for new analytical models of that behavior. The algorithm behavior is a function of both the step size and the kernel parameters. This work studies the KLMS algorithm behavior in the transient phase and in steady state for Gaussian inputs and a finite-order nonlinearity model. Two kernels are considered: the Gaussian and the polynomial. We derive recursive analytical models for the behavior of both the mean weight-error vector and the mean-square estimation error. The model predictions show excellent agreement with Monte Carlo simulations in both the transient phase and the steady state. This allows the explicit determination of the stability limits and the design of the algorithm parameters to obtain a desired performance. Design examples are presented for the Gaussian and the second-degree polynomial kernels to validate the analysis and illustrate its application.
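A minimal sketch of the KLMS recursion with a Gaussian kernel is given below to fix ideas: each new input becomes a kernel centre and the prediction is a kernel expansion over past inputs. This is the textbook growing-dictionary form, not the finite-order model analysed in the thesis, and the parameter values are placeholders.

```python
import numpy as np

def klms_predict(centers, alphas, u, sigma):
    """Current KLMS output: a Gaussian-kernel expansion over stored centres."""
    if not centers:
        return 0.0
    C = np.array(centers)
    k = np.exp(-np.sum((C - u) ** 2, axis=1) / (2 * sigma ** 2))
    return np.dot(alphas, k)

def klms(U, d, mu=0.5, sigma=1.0):
    """Kernel LMS with a Gaussian kernel: one new centre per input sample."""
    centers, alphas, errors = [], [], []
    for u, dn in zip(U, d):
        e = dn - klms_predict(centers, alphas, u, sigma)   # a priori error
        centers.append(u)            # growing dictionary of past inputs
        alphas.append(mu * e)        # LMS-style coefficient for the new centre
        errors.append(e)
    return centers, np.array(alphas), np.array(errors)
```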
APA, Harvard, Vancouver, ISO, and other styles
43

Strohe, Hans Gerhard, and Frank Geppert. "DPLS : Algorithmus und Computerprogramm für dynamische Partial-Least-Squares-Modelle." Universität Potsdam, 1997. http://opus.kobv.de/ubp/volltexte/2010/4904/.

Full text
Abstract:
Linear models with latent variables have long been widely used analysis and forecasting tools in the social sciences, and there are also several applications in econometrics. The most widespread modelling and estimation approaches are LISREL by Jöreskog and Sörbom (e.g. 1987) and Partial Least Squares (PLS) by H. Wold (1973). While LISREL is more model-oriented and confirmatory in its application, PLS can be described as data-oriented and rather descriptive or exploratory. Characteristic of Wold's approach is that the PLS model is in fact defined only by the algorithm used to estimate it. The most comprehensive program system for PLS is LVPLS by J. B. Lohmöller (1984). It follows Wold's theory very closely and, despite its limited user friendliness, is unsurpassed in versatility and reliability. Neither Wold's method nor Lohmöller's program explicitly provides for application to dynamic models such as VARs; lagged variables can only be included as separate variables, which leads to inconsistencies in the weighting. The second section sketches a procedure (cf. Strohe 1995) that, on the one hand, follows Wold's algorithm very closely but, on the other hand, is specifically designed for handling dynamic models with lagged latent variables. The third section then introduces the corresponding ISP™ computer program DPLS (cf. Geppert 1995); it consists of a general program description and a detailed user guide, together with a small econometric demonstration model. In the fourth section a simulation study examines the properties of the DPLS estimation procedure under different distributional assumptions. The appendix contains the complete listings of the commented program macros.
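To make the PLS building block concrete, the sketch below extracts one latent component by a NIPALS-style iteration, the static core that DPLS extends with lagged latent variables. It is a generic illustration, not the DPLS algorithm or the LVPLS implementation, and the variable names are assumptions.

```python
import numpy as np

def pls_component(X, Y, n_iter=100, tol=1e-10):
    """Extract one PLS component (weights, scores, loadings) by NIPALS iteration."""
    u = Y[:, [0]]                                  # start from a column of Y
    for _ in range(n_iter):
        w = X.T @ u
        w = w / np.linalg.norm(w)                  # X weights
        t = X @ w                                  # X scores (latent variable)
        q = Y.T @ t
        q = q / np.linalg.norm(q)                  # Y weights
        u_new = Y @ q                              # Y scores
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    p = X.T @ t / (t.T @ t)                        # X loadings, used for deflation
    return w, t, p, q
```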
APA, Harvard, Vancouver, ISO, and other styles
44

Gulliksson, Mårten. "Algorithms for overdetermined systems of equations." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 1993. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-111107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Jenča, Pavol. "Identifikace parametrů elektrických motorů metodou podprostorů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219678.

Full text
Abstract:
The identification of electrical motor parameters is addressed in this master's thesis using subspace-based methods. The electrical motors, specifically a permanent magnet DC motor and a permanent magnet synchronous motor, are simulated in the Matlab/Simulink environment, and the identification is implemented in Matlab. Different types of subspace algorithms are used for parameter estimation, and the subspace estimates are compared with least-squares estimates. The thesis describes the subspace method, the types of subspace algorithms, the electrical motors used, a nonlinear approach to identification, and a comparison of the identified parameters.
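As a sketch of the least-squares baseline the subspace estimates are compared against, the snippet below fits a discrete-time ARX model to sampled input/output data (e.g., motor voltage and speed) by ordinary least squares. The model orders and variable names are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def arx_least_squares(u, y, na=2, nb=2):
    """Fit y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb].

    u, y: 1-D arrays of sampled input and output. Returns [a1..a_na, b1..b_nb].
    """
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        # regressor: past outputs (negated) followed by past inputs, newest first
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)                               # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta
```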
APA, Harvard, Vancouver, ISO, and other styles
46

Clausner, André. "Möglichkeiten zur Steuerung von Trust-Region Verfahren im Rahmen der Parameteridentifikation." Thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114847.

Full text
Abstract:
Simulating technical processes requires a sufficiently accurate description of the material behaviour. The phenomenological approaches frequently used for this purpose, such as Hill's yield criterion in the present case, contain material-specific parameters that cannot be measured directly. These material parameters are usually identified by minimizing a least-squares functional containing the differences between measured values and the corresponding numerically computed values. Trust-region methods have proved well suited to solving this minimization problem. The task is to investigate the various options for controlling a trust-region method with regard to their suitability for the identification problem at hand. To this end, least-squares problems and their solution methods are first surveyed. Trust-region methods are then considered in more detail, restricting attention to methods with positive definite approximations of the Hessian, i.e., Levenberg-Marquardt methods. Such a Levenberg-Marquardt algorithm is implemented in several variants and tested on the identification problem. The result is a combination of sub-algorithms of the Levenberg-Marquardt method with a high convergence rate that is well suited to the problem at hand.
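A minimal Levenberg-Marquardt iteration of the kind discussed is sketched below, with the trust-region control reduced to the classic accept/reject damping update. The thesis's specific control strategies and the Hill-criterion residuals are not reproduced, and the callable names are placeholders.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, n_iter=50):
    """Minimize 0.5 * ||r(x)||^2 with a damped Gauss-Newton (LM) update.

    residual: callable x -> r(x), jacobian: callable x -> J(x) (m x n).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r                                   # gradient of 0.5*||r||^2
        H = J.T @ J + lam * np.eye(len(x))            # damped Gauss-Newton matrix
        step = np.linalg.solve(H, -g)
        r_new = residual(x + step)
        if r_new @ r_new < r @ r:                     # step accepted: relax damping
            x = x + step
            lam /= 3.0
        else:                                         # step rejected: increase damping
            lam *= 3.0
    return x
```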
APA, Harvard, Vancouver, ISO, and other styles
47

Soussen, Charles. "Algorithmes d'approximation parcimonieuse inspirés d'Orthogonal Least Squares pour les problèmes inverses." Habilitation à diriger des recherches, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00924578.

Full text
Abstract:
This manuscript summarizes my research activity at CRAN between 2005 and 2013. The projects carried out fall within the fields of inverse problems in signal and image processing, sparse approximation, hyperspectral image analysis and 3D image reconstruction. I describe in more detail the work on the design, analysis and use of sparse approximation algorithms for inverse problems characterized by an ill-conditioned dictionary. In a first chapter, I present heuristic algorithms designed to minimize mixed L2-L0 criteria. They are "bidirectional" greedy algorithms defined as extensions of the Orthogonal Least Squares (OLS) algorithm. Their development is motivated by the good empirical behaviour of OLS and its derived versions when the dictionary is an ill-conditioned matrix. The second chapter is an applied part on atomic force microscopy, where the algorithms of the first chapter are used with a particular dictionary in order to segment signals automatically. This segmentation ultimately provides a 2D map of several electrostatic and bio-mechanical parameters. The third chapter is a theoretical part analysing the greedy algorithms OMP (Orthogonal Matching Pursuit) and OLS. A first analysis of exact recovery by OLS in k iterations is proposed. In addition, an in-depth comparison of the exact recovery conditions when a number of iterations have already been performed sheds light on the better behaviour of OLS (compared to OMP) for ill-conditioned problems. In a fourth chapter, I outline some methodological and applied perspectives in the field of sparse analysis related to the previous chapters.
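To illustrate the Orthogonal Least Squares principle underlying these greedy algorithms, the sketch below selects, at each iteration, the atom that most reduces the least-squares residual on the augmented support. This is the naive O(k·n) form written for clarity (practical implementations update a QR factorization instead), and it is not the bidirectional extension developed in the manuscript.

```python
import numpy as np

def ols_sparse(A, y, k):
    """Orthogonal Least Squares: greedily pick k columns of A to approximate y."""
    support = []
    x = np.zeros(0)
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(A.shape[1]):
            if j in support:
                continue
            S = A[:, support + [j]]
            xj, *_ = np.linalg.lstsq(S, y, rcond=None)   # LS fit on trial support
            err = np.linalg.norm(y - S @ xj)
            if err < best_err:
                best_j, best_err = j, err
        support.append(best_j)
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ x
    return support, x, residual
```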
APA, Harvard, Vancouver, ISO, and other styles
48

Karlsson, Erlendur. "Least squares arma modeling of linear time-varying systems : lattice filter structures and fast RLS algorithms." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/15936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Viklands, Thomas. "Algorithms for the Weighted Orthogonal Procrustes Problem and other Least Squares Problems." Doctoral thesis, Umeå : Umeå universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Siemes, Christian [Verfasser]. "Digital filtering algorithms for decorrelation within large least squares problems / Christian Siemes." Bonn : Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1205462678/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles