
Dissertations / Theses on the topic 'Phasor Estimation Algorithms'

Consult the top 37 dissertations / theses for your research on the topic 'Phasor Estimation Algorithms.'


1

Guo, Hengdao. "Frequency Tracking and Phasor Estimation Using Least Squares and Total Least Squares Algorithms." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/57.

Abstract:
System stability plays an important role in electric power systems. As power systems develop, the scale of the electric grid grows larger and larger, and many renewable energy resources are integrated into it; at the same time, the stability and safety issues of the power system become more complicated. Frequency and phasors are two critical parameters for system stability, and obtaining them has been a great challenge for decades. Researchers have proposed various algorithms for frequency tracking and phasor estimation; among them, the Least Squares (LS) algorithm is one of the most commonly used. This thesis studies the LS algorithm and the Total Least Squares (TLS) algorithm for frequency tracking and phasor estimation. To test the performance of the two algorithms, simulations were carried out in Matlab. The Total Vector Error (TVE) is a commonly used performance criterion, and the TVE results of the two algorithms are compared. The TLS algorithm performs better than the LS algorithm when the frequencies of all harmonic components are given.
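A minimal sketch of the LS estimator and the TVE criterion discussed above may help; the sampling rate, window length, and noise level in the following Python fragment are illustrative assumptions, not the thesis's code.

```python
# Hedged sketch: least-squares (LS) phasor estimation scored with the
# Total Vector Error (TVE). All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 1920.0, 60.0                        # assumed: 32 samples per 60 Hz cycle
t = np.arange(32) / fs

A_true, phi_true = 1.0, 0.3                  # ground-truth amplitude and phase
x = A_true * np.cos(2 * np.pi * f0 * t + phi_true) + 0.01 * rng.standard_normal(t.size)

# LS model: x[k] = a*cos(w t_k) - b*sin(w t_k), so the phasor is a + jb
H = np.column_stack([np.cos(2 * np.pi * f0 * t), -np.sin(2 * np.pi * f0 * t)])
a, b = np.linalg.lstsq(H, x, rcond=None)[0]

X_est = a + 1j * b
X_true = A_true * np.exp(1j * phi_true)
tve = abs(X_est - X_true) / abs(X_true)      # TVE: phasor error relative to truth
print(f"amplitude {abs(X_est):.4f}, phase {np.angle(X_est):.4f} rad, TVE {100 * tve:.3f} %")
```

The TLS variant discussed above differs in that it also accounts for perturbations in the regressor matrix H, typically via a singular value decomposition of the augmented matrix [H x].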
2

Kamireddy, Srinath. "Comparison of state estimation algorithms considering phasor measurement units and major and minor data loss." Master's thesis, Mississippi State : Mississippi State University, 2008. http://library.msstate.edu/etd/show.asp?etd=etd-11072008-121521.

3

Vigliassi, Marcos Paulo. "Algoritmo evolutivo multiobjetivo em tabelas e matriz HΔ para projeto de sistemas de medição para estimação de estado." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-19052017-154501/.

Abstract:
Metering system planning for power system state estimation is a multi-objective, combinatorial optimization problem that may require the investigation of many possible solutions. As a consequence, meta-heuristics have been employed to solve it. However, most of them convert the multi-objective problem into a mono-objective one, and the few that consider a multi-objective formulation do not account for all the performance requirements that must be met to obtain a Reliable Metering System (RMS): system observability and the absence of Critical Measurements, Critical Sets, Critical Remote Terminal Units and Critical Phasor Measurement Units. This thesis proposes a broader multi-objective formulation for the metering system planning problem, that is, one considering all the performance requirements that must be met to obtain an RMS. It also proposes the development and computer implementation of a method to solve this problem, considering the trade-off between the two conflicting objectives (minimizing cost while maximizing the performance requirements) and making use of the concept of the Pareto frontier. In a single execution, the method can design four types of metering systems from the analysis of non-dominated solutions. It enables the design of new metering systems as well as the improvement of existing ones, considering only conventional SCADA measurements, only synchronized phasor measurements, or both types of measurements. The proposed method combines a multi-objective evolutionary algorithm based on subpopulation tables with the properties of the so-called HΔ matrix. The subpopulation tables adequately model several metering system performance requirements, enabling a better exploration of the solution space, while the properties of the HΔ matrix enable a local search that improves the evolutionary process and minimizes the computational effort. Simulation results with the IEEE 6-, 14-, 30-, 118- and 300-bus test systems and with a 61-bus system of Eletropaulo illustrate the efficiency of the proposed method; some of these results are compared with those published in the literature.
4

Zhang, Xuan. "High Precision Dynamic Power System Frequency Estimation Algorithm Based on Phasor Approach." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/31001.

Abstract:
An Internet-based, real-time, Global Positioning System (GPS) synchronized wide-area frequency monitoring network (FNET) has been developed at Virginia Tech. In this FNET system, an algorithm that employs the relationship between phasor angles and frequency deviation [13] is used to calculate both frequency and its rate of change. Tests of the algorithm disclose that non-pure sinusoidal input (as compared to pure sinusoidal input) results in significant errors in the output frequency. Three approaches for increasing the accuracy of the output frequency were compared. The first, increasing the number of samples per cycle N, proved ineffective. The second, using the average of the first estimated frequencies rather than the instantaneous first estimate as the resampling frequency, produces a moderate increase in the accuracy of the frequency estimation. The third, multiple resampling, increased accuracy significantly. Both the second and the third, however, become ineffective to the extent that the input is not purely sinusoidal. From a practical standpoint, attention needs to be paid to eliminating noise in the input data from the power grid so as to make it more purely sinusoidal; it is therefore worthwhile to test more sophisticated digital filters for processing the input data before feeding it to the algorithm.
Master of Science
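The phasor-angle frequency estimator that this abstract refers to can be sketched briefly: when the input frequency deviates from the nominal f0, the least-squares phasor angle advances by 2π(f - f0)Δt between windows, so the angle difference recovers the frequency. Sampling rates and window lengths below are illustrative assumptions, not the FNET implementation.

```python
# Hedged sketch: frequency estimation from the rotation of successive
# least-squares phasor angles.
import numpy as np

def phasor_angle(x, t, f0):
    """Angle of the least-squares phasor of x fitted at frequency f0."""
    H = np.column_stack([np.cos(2 * np.pi * f0 * t), -np.sin(2 * np.pi * f0 * t)])
    a, b = np.linalg.lstsq(H, x, rcond=None)[0]
    return np.angle(a + 1j * b)

fs, f0, f_actual = 1440.0, 60.0, 60.05       # assumed: 24 samples per nominal cycle
t = np.arange(96) / fs                       # four nominal cycles
x = np.cos(2 * np.pi * f_actual * t)

N = 24                                       # one-cycle windows
phi1 = phasor_angle(x[:N], t[:N], f0)
phi2 = phasor_angle(x[N:2 * N], t[N:2 * N], f0)
dt = N / fs
f_est = f0 + (phi2 - phi1) / (2 * np.pi * dt)
print(f"estimated frequency: {f_est:.4f} Hz")  # close to 60.05
```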
5

Hussain, Zahir M. "Adaptive instantaneous frequency estimation: Techniques and algorithms." Thesis, Queensland University of Technology, 2002. https://eprints.qut.edu.au/36137/7/36137_Digitised%20Thesis.pdf.

Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (like frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part I of the thesis. For FM signals, the approach of time-frequency analysis is considered; this is Part II. In Part I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last 10 years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter; this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of sinusoidal DPLLs based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL to FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behavior of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTL has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be used. For instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.

Many real-life and synthetic signals are of multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
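As background to the IF-estimation problem treated in the thesis, here is a minimal non-parametric sketch for a single-component chirp: the IF is recovered as the scaled derivative of the unwrapped phase of the HT-based analytic signal, the quantity that the thesis's adaptive quadratic TFDs and T-class kernels estimate far more robustly for multicomponent signals. Signal parameters below are assumptions.

```python
# Hedged sketch: instantaneous frequency from the analytic signal.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(1000) / fs
# Linear FM (chirp): the IF sweeps from 50 Hz to 150 Hz over one second
x = np.cos(2 * np.pi * (50 * t + 50 * t**2))

z = hilbert(x)                               # analytic signal (Hilbert-based)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

print(inst_freq[10], inst_freq[-10])         # ~51 Hz near start, ~149 Hz near end
```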
6

Deng, Zhi-De. "Stochastic chaos and thermodynamic phase transitions : theory and Bayesian estimation algorithms." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41649.

Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 177-200).
The chaotic behavior of dynamical systems underlies the foundations of statistical mechanics through ergodic theory. This putative connection is made more concrete in Part I of this thesis, where we show how to quantify certain chaotic properties of a system that are of relevance to statistical mechanics and kinetic theory. We consider the motion of a particle trapped in a double-well potential coupled to a noisy environment. By use of the classic Langevin and Fokker-Planck equations, we investigate Kramers' escape rate problem. We show that there is a deep analogy between kinetic rate theory and stochastic chaos, for which we propose a novel definition. In Part II, we develop techniques based on Volterra series modeling and Bayesian non-linear filtering to distinguish between dynamic noise and measurement noise. We quantify how much of the system's ergodic behavior can be attributed to intrinsic deterministic dynamical properties vis-a-vis inevitable extrinsic noise perturbations.
by Zhi-De Deng.
M.Eng. and S.B.
7

Forbush, Taylor R. "Automated Delay Estimation at Signalized Intersections: Phase I Concept and Algorithm Development." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2471.

Abstract:
Currently there are several methods for measuring the performance of surface streets, but their capabilities for dynamically estimating vehicle delay are limited. The objective of this research is to develop a method to automate traffic delay estimation in real time using existing field traffic data collection technologies. This research focused on developing methods and algorithms that can be applied to existing technologies. Two algorithms were developed to run automatically in Microsoft Excel and Visual Basic, calculating traffic delay from data collected by existing vehicle detection. The algorithms were developed using computer modeling software to simulate different lane configurations: through-only lanes, through lanes with a midblock driveway, and through lanes with a turning bay. Different volume levels were simulated for each lane configuration, and results were promising in every case. The through-only configuration showed excellent results, with maximum errors of less than 3 seconds per vehicle in each test. The through-lanes-with-driveways case was evaluated both with added detection at the driveway locations and without it: with the driveway sensors, 93 percent of the calculated average delays had less than 5 seconds per vehicle of error; without them, 84 percent did. For the turning bay configuration, 94 percent of the calculated results had less than 5 seconds per vehicle of error. A hardware-in-the-loop analysis is recommended to make certain the algorithms developed in this study perform as expected in dynamic operation.
8

Robie, Taylor A. "Improved Electrolyte-NRTL Parameter Estimation Using a Combined Chemical and Phase Equilibrium Algorithm." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1368027260.

9

Marsolla, Rafael. "Estimação fasorial em tempo real utilizando um algoritmo genético compacto multiobjetivo." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-02062015-151039/.

Abstract:
Synchronized phasor measurement is used today as a way to enhance the operation of an Electric Power System (EPS), using strategically located and installed phasor measurement units. These perform the acquisition of the electrical signal and then the estimation of the voltage and current phasors, synchronized in time, which indicate the behavior of the EPS at a specific location. This multidisciplinary work proposes the analysis and implementation of an evolutionary computing method, the Multiobjective Compact Genetic Algorithm (MCGA), applied to the phasor estimation problem in an EPS, behaving as a Phasor Measurement Unit (PMU). The main characteristic of the MCGA presented here is the multiobjective analysis of the problem. Because all EPSs are three-phase, this new approach considers the phasor estimation of the three phases together, instead of estimating each phase independently. Thus the proposed MCGA includes in its genetic mapping of individuals the characteristics of the signals of all three phases, unlike the mono-objective approach, where each phase of the EPS is modeled by a different individual. In order to ensure the effectiveness of the evolutionary method when operating in a real-time scenario, a platform for data acquisition and processing is proposed, inspired by previous work, allowing the integration of all the modules that compose a PMU for real-time phasor analysis. A Global Positioning System (GPS) is proposed as a way to synchronize different PMUs, integrating pieces of equipment on a single time reference with the required precision. To assist in the integration of the required modules, a library of functions developed in the Laboratory of Electric Power Systems will be expanded, allowing the execution of the evolutionary method directly on a Field Programmable Gate Array (FPGA) interface, which will act as a genetic co-processor of the real-time platform. The results presented here were obtained following normative specifications, using synthetically generated signals and also the Alternative Transient Program (ATP), allowing more realistic tests to validate the evolutionary methods.
10

Ahmadi, Abhari Seyed Hamed. "Quantum Algorithms for: Quantum Phase Estimation, Approximation of the Tutte Polynomial and Black-box Structures." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5096.

Abstract:
In this dissertation, we investigate three different problems in the field of quantum computation. First, we discuss the quantum complexity of evaluating the Tutte polynomial of a planar graph. Furthermore, we devise a new quantum algorithm for approximating the phase of a unitary matrix. Finally, we provide quantum tools that can be utilized to extract the structure of black-box modules and algebras. While quantum phase estimation (QPE) is at the core of many quantum algorithms known to date, its physical implementation (algorithms based on the quantum Fourier transform (QFT)) is highly constrained by the requirement of high-precision controlled phase shift operators, which remain difficult to realize. In the second part of this dissertation, we introduce an alternative approach to approximately implement QPE with arbitrary constant-precision controlled phase shift operators. The new quantum algorithm bridges the gap between QPE algorithms based on QFT and Kitaev's original approach. For approximating the eigenphase precise to the nth bit, Kitaev's original approach does not require any controlled phase shift operator. In contrast, QPE algorithms based on QFT or approximate QFT require controlled phase shift operators with precision of at least π/2^n. The new approach fills the gap and requires only arbitrary constant-precision controlled phase shift operators. From a physical implementation viewpoint, the new algorithm outperforms Kitaev's approach. The other problem we investigate relates to approximating the Tutte polynomial. We show that the problem of approximately evaluating the Tutte polynomial of triangular graphs at the points (q, 1/q) of the Tutte plane is BQP-complete for (most) roots of unity q. We also consider circular graphs and show that the problem of approximately evaluating the Tutte polynomial of these graphs at a point is DQC1-complete, and at some points is in BQP. To show that these problems can be solved by a quantum computer, we rely on the relation of the Tutte polynomial of a planar graph G to the Jones and HOMFLY polynomials of the alternating link D(G) given by the medial graph of G. In the case of our graphs, the corresponding links are equal to the plat and trace closures of braids. It is known how to evaluate the Jones and HOMFLY polynomials for closures of braids. To establish the hardness results, we use the property that the images of the generators of the braid group under the irreducible Jones-Wenzl representations of the Hecke algebra have finite order. We show that for each braid we can efficiently construct another braid such that evaluating the Jones and HOMFLY polynomials of their closures at a fixed root of unity leads to the same value, and that the closures of the resulting braid are alternating links. The final part of the dissertation focuses on finding the structure of a black-box module or algebra. Suppose we are given black-box access to a finite module M or algebra over a finite ring R, and a list of generators for M and R. We show how to find a linear basis and structure constants for M in quantum poly(log |M|) time. This generalizes a recent quantum algorithm of Arvind et al. which finds a basis representation for rings. We then show that our algorithm is a useful primitive allowing a quantum computer to determine the structure of a finite associative algebra as a direct sum of simple algebras. Moreover, it solves a wide variety of problems regarding finite modules and rings.

Although our quantum algorithm is based on Abelian Fourier transforms, it solves problems regarding the multiplicative structure of modules and algebras, which need not be commutative. Examples include finding the intersection and quotient of two modules, finding the additive and multiplicative identities in a module, computing the order of a module, solving linear equations over modules, deciding whether an ideal is maximal, finding annihilators, and testing the injectivity and surjectivity of ring homomorphisms. These problems appear to be exponentially hard classically.
ID: 031001318; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Title from PDF title page (viewed March 27, 2013); Thesis (Ph.D.)--University of Central Florida, 2012; Includes bibliographical references (p. 82-86).
Ph.D.
Doctorate
Mathematics
Sciences
Mathematics
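For context on the precision figure in this abstract: QFT-based phase estimation builds the inverse Fourier transform from controlled rotations of the form shown below, so reading out an n-bit eigenphase needs phases as fine as 2π/2^n, i.e. implementation errors below roughly π/2^n. This is standard background, not material specific to the dissertation.

```latex
R_k \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{2\pi i / 2^{k}} \end{pmatrix},
\qquad k = 1, \dots, n
```

Kitaev's scheme avoids these fine rotations entirely at the cost of extra repetitions, and the dissertation's construction sits between the two, requiring only constant-precision controlled phase shifts.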
11

Silva, Tiago Vieira da. "Algoritmos evolutivos como estimadores de frequência e fase de sinais elétricos: métodos multiobjetivos e paralelização em FPGAs." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-14012014-105606/.

Abstract:
This work proposes the development of Evolutionary Algorithms (EAs) for the estimation of the basic parameters of electrical signals (frequency, phase and amplitude) in real time. The proposed approach must be robust to noise and harmonics in signals distorted, for example, by the presence of faults in the electrical network. EAs show advantages in dealing with these types of signals. On the other hand, when implemented in software these algorithms cannot produce real-time responses that would allow their estimates to be used in a frequency relay or Phasor Measurement Unit. The FPGA approach developed in this work parallelizes the estimation process in hardware, enabling the analysis of electrical signals in real time. Furthermore, it is shown that multi-objective EAs can extract non-evident information from the three phases of the system and properly estimate parameters even when the per-phase estimates diverge from each other. This research proposes the parallelization of an EA in hardware through its design on an FPGA circuit optimized at the level of basic logic operations, and a multi-objective modeling of the problem enabling analyses of the signals from each phase both independently and in aggregate. Experimental results show the superiority of the proposed method compared to an estimator based on the Fourier transform for determining frequency and phase.
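As a toy illustration of evolutionary phasor estimation, the sketch below runs a simple (1+1) evolution strategy (not the thesis's compact GA, whose design is FPGA-specific) on the squared-error objective that such estimators optimize; all signal values and mutation scales are assumptions.

```python
# Hedged sketch: a (1+1) evolution strategy fitting amplitude, frequency
# and phase of a noisy sinusoid by minimizing the mean squared error.
import numpy as np

rng = np.random.default_rng(3)
fs = 960.0
t = np.arange(64) / fs
x = 1.2 * np.cos(2 * np.pi * 59.8 * t + 0.7) + 0.05 * rng.standard_normal(t.size)

def error(p):
    A, f, phi = p
    return np.mean((x - A * np.cos(2 * np.pi * f * t + phi)) ** 2)

p = np.array([1.0, 60.0, 0.0])               # start from nominal values
sigma = np.array([0.1, 0.2, 0.2])            # assumed mutation scales
best = error(p)
for _ in range(5000):
    q = p + sigma * rng.standard_normal(3)   # mutate amplitude, frequency, phase
    e = error(q)
    if e < best:                             # keep the child only if it improves
        p, best = q, e

print("estimated (A, f, phi):", np.round(p, 3))
```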
12

Xia, Tao. "Frequency Monitoring Network (FNET) Algorithm Improvements and Application Development." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/77296.

Abstract:
The Internet-based, real-time, GPS-synchronized wide-area Frequency Monitoring Network (FNET) is an extremely low-cost and quickly deployable wide-area frequency measurement system with high dynamic accuracy. It consists of Frequency Disturbance Recorders (FDRs) distributed to more than 100 places around North America and an information management system situated at Virginia Tech. Since its first FDR deployment in 2003, the FNET system has proved able to reliably receive phasor data accurately measured at different locations of interest and instantaneously sent via the Internet, and to efficiently run analysis programs that detect and record significant system disturbances and subsequently estimate the location of the disturbance center, namely the event location, in the electric grid based on the information gathered. The excellent performance of the FNET system so far has made power grid situational awareness and monitoring based on distribution-level frequency measurements a reality, and thus advances our understanding of power system dynamics to a higher level and in a broader dimensionality. Chapters 1 and 2 of this dissertation briefly introduce the genesis and the architecture of the FNET system, followed by a summary of its concrete implementations. Chapters 3 and 4 outline the FNET frequency estimation and phase angle estimation algorithms, including their attributes and new methodologies to enhance them. Chapter 5 discusses the algorithms developed at FNET to detect frequency disturbances and estimate the disturbance location by a triangulation procedure using real-time frequency data and the geographic topology of the FNET units in the power grid where the disturbance occurs. The dissertation then introduces FNET angle-based power system oscillation detection and presents research on Matrix Pencil modal analysis of FNET phase angle oscillation data in the following two chapters. Lastly, the content of the dissertation is summarized and future work envisioned in Chapter 8.
Ph. D.
13

Zeydan, Engin. "Channel Phase And Data Estimation In Slowly Fading Frequency Nonselective Channels." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607419/index.pdf.

Abstract:
In coherent receivers, the effect of the multipath fading channel on the transmitted signal must be estimated to recover the transmitted data. In this thesis, the channel phase and data estimation problems are investigated for a transmitted data sequence when the channel is modeled as a slowly fading, frequency non-selective channel. Channel phase estimation over a transmitted data sequence is investigated, and data estimation is performed in a symbol-by-symbol MAP receiver designed for the minimum symbol error probability criterion. The channel phase is quantized in an interval of interest, the trellis diagram is constructed, and the Viterbi decoding algorithm is applied using the phase transition and observation models for channel phase estimation. The optimum coherent and noncoherent detectors for binary orthogonal and PSK signals are derived, and the modulated signals in a sequence are detected in symbol-by-symbol MAP receivers. Simulation results have shown that the performance of the receiver with phase estimation lies between the performance of the optimum coherent and noncoherent receivers.
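A hedged sketch of the trellis idea may help: quantize the phase, link consecutive symbols with a random-walk transition model, and run Viterbi decoding over noisy BPSK observations. The model below is an illustration under assumed parameters, not the thesis's receiver.

```python
# Hedged sketch: Viterbi tracking of a quantized channel phase.
import numpy as np

rng = np.random.default_rng(2)
K, Q = 200, 32                               # number of symbols, phase levels
grid = 2 * np.pi * np.arange(Q) / Q          # quantized phase states

# Slowly drifting true phase, random BPSK data, AWGN observations
phase = np.cumsum(0.02 * rng.standard_normal(K))
bits = rng.choice([-1.0, 1.0], K)
noise = 0.3 * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
y = bits * np.exp(1j * phase) + noise

def branch(yk, ph):
    """Observation cost for each phase state, minimized over the BPSK symbol."""
    return np.minimum(np.abs(yk - np.exp(1j * ph)) ** 2,
                      np.abs(yk + np.exp(1j * ph)) ** 2)

# Transition penalty between phase states (wrapped quadratic random walk)
diff = np.angle(np.exp(1j * (grid[:, None] - grid[None, :])))
trans = (diff / 0.1) ** 2                    # assumed phase-step std of 0.1 rad

cost = branch(y[0], grid)
back = np.zeros((K, Q), dtype=int)
for k in range(1, K):
    total = cost[None, :] + trans            # total[i, j]: reach state i from j
    back[k] = np.argmin(total, axis=1)
    cost = total[np.arange(Q), back[k]] + branch(y[k], grid)

path = np.zeros(K, dtype=int)                # backtrack the best phase path
path[-1] = int(np.argmin(cost))
for k in range(K - 1, 0, -1):
    path[k - 1] = back[k, path[k]]

est = grid[path]
err = np.angle(np.exp(2j * (est - phase))) / 2   # error modulo pi (BPSK ambiguity)
print("RMS phase-tracking error:", float(np.sqrt(np.mean(err ** 2))))
```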
14

Landon, Jonathan Charles. "Development of an Experimental Phased-Array Feed System and Algorithms for Radio Astronomy." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2794.

Abstract:
Phased array feeds (PAFs) are a promising new technology for astronomical radio telescopes. While PAFs have been used in other fields, the demanding sensitivity and calibration requirements in astronomy present unique new challenges. This dissertation presents some of the first astronomical PAF results, demonstrating the lowest noise temperature and highest sensitivity at the time (66 Kelvin and 3.3 m^2/K, respectively), obtained using a narrowband (425 kHz bandwidth) prototype array of 19 linear co-polarized L-band dipoles mounted at the focus of the Green Bank 20 Meter Telescope at the National Radio Astronomy Observatory (NRAO) in Green Bank, West Virginia. Results include spectral-line detection of the hydroxyl (OH) sources W49N and W3OH, and some of the first radio camera images made using a PAF, including an image of the Cygnus X region. A novel array Y-factor technique for measuring the isotropic noise response of the array is shown, along with experimental measurements for this PAF. Statistically optimal beamformers (maximum-SNR and MVDR) are used throughout the work. Radio-frequency interference (RFI) mitigation is demonstrated experimentally using spatial cancelation with the PAF. Improved RFI mitigation is achieved in the challenging cases of low interference-to-noise ratio (INR) and moving interference by combining subspace projection (SP) beamforming with a polynomial model to track a rank-1 subspace. Limiting factors in SP are investigated, including sample estimation error, subspace smearing, noise bias, and spectral scooping; each of these factors is overcome with the polynomial model and prewhitening. Numerical optimization leads to the polynomial subspace projection (PSP) method, and least-squares fitting to the series of dominant eigenvectors over a series of short-term integrations (STIs) leads to the eigenvector polynomial subspace projection (EPSP) method. Expressions for the gradient, Hessian, and Jacobian are given for use in numerical optimization. Results are given for simulated and experimental data, demonstrating beampattern nulls deeper by 6 to 30 dB. To increase the system bandwidth toward the hundreds of MHz required by astronomers for a fully science-ready instrument, an FPGA digital back end is introduced, using a 64-input analog-to-digital converter running at 50 Msamp/sec and the ROACH processing board developed at the University of California, Berkeley. International efforts to develop digital back ends for large antenna arrays are considered, and a road map is proposed for development of a hardware correlator/beamformer at BYU using three ROACH boards communicating over 10 gigabit Ethernet.
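Since the abstract names the maximum-SNR and MVDR beamformers, a compact sketch of the MVDR weight computation may be useful; the array response and covariance below are synthetic assumptions, not the dissertation's calibrated data.

```python
# Hedged sketch: MVDR (minimum-variance distortionless-response) weights
# for a hypothetical 19-element array.
import numpy as np

rng = np.random.default_rng(0)
M = 19                                         # elements, as in the prototype PAF
d = np.exp(1j * rng.uniform(0, 2 * np.pi, M))  # assumed source steering vector
v = np.exp(1j * rng.uniform(0, 2 * np.pi, M))  # assumed interferer steering vector

# Synthetic array covariance: unit isotropic noise plus a strong interferer
R = np.eye(M) + 10.0 * np.outer(v, v.conj())

# MVDR: w = R^{-1} d / (d^H R^{-1} d) keeps unit gain toward d while
# minimizing total output power
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d.conj() @ Rinv_d)

print("gain toward source    :", abs(w.conj() @ d))   # 1 by construction
print("gain toward interferer:", abs(w.conj() @ v))   # strongly suppressed
```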
15

Eckstein, Adric. "Development of Robust Correlation Algorithms for Image Velocimetry using Advanced Filtering." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/36338.

Abstract:
Digital Particle Image Velocimetry (DPIV) is a planar measurement technique that measures the velocity within a fluid by correlating the motion of flow tracers over a sequence of images recorded with a camera-laser system. Sophisticated digital processing algorithms are required to provide high enough accuracy for quantitative DPIV results. This study explores the potential of a variety of cross-correlation filters to improve the accuracy and robustness of the DPIV estimation. These techniques incorporate the Phase Transform (PHAT) Generalized Cross-Correlation (GCC) filter applied to the image cross-correlation. The use of spatial windowing is subsequently examined and shown to be ideally suited for phase correlation estimators, due to their invariance to loss-of-correlation effects. The Robust Phase Correlation (RPC) estimator is introduced, coupling the phase correlation with spatial windowing. The RPC estimator additionally incorporates a spectral filter designed from an analytical decomposition of the DPIV signal-to-noise ratio (SNR). This estimator is validated in a variety of artificial image simulations, the JPIV standard image project, and experimental images, which indicate reductions in error on the order of 50% when correlating low-SNR images. Two variations of the RPC estimator are also introduced: the Gaussian Transformed Phase Correlation (GTPC), designed to optimize the subpixel interpolation, and the Spectral Phase Correlation (SPC), which estimates the image shift directly from the phase content of the correlation. While these estimators are designed for DPIV, the methodology described here provides a universal framework for digital signal correlation analysis, which could be extended to a variety of other systems.
Master of Science
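The PHAT-filtered ("phase") correlation at the core of the RPC estimator is compact enough to sketch: normalize the cross-power spectrum to unit magnitude so only phase information survives, inverse-transform, and locate the peak. The interrogation windows below are synthetic assumptions, not DPIV recordings.

```python
# Hedged sketch: phase correlation (PHAT/GCC whitening) for shift estimation.
import numpy as np

rng = np.random.default_rng(1)
a = rng.random((64, 64))                     # first interrogation window
b = np.roll(a, (3, 5), axis=(0, 1))          # second window, shifted by (3, 5)

A, B = np.fft.fft2(a), np.fft.fft2(b)
cross = A * np.conj(B)
r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real   # PHAT whitening

peak = np.unravel_index(np.argmax(r), r.shape)
# Peaks past the midpoint wrap around to negative displacements
est = tuple(p - s if p > s // 2 else p for p, s in zip(peak, r.shape))
print("estimated displacement:", est)        # (-3, -5): a shifted by -(3, 5) w.r.t. b
```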
16

Fourt, Olivier. "Traitement des signaux à phase polynomiale dans des environnements fortement bruités : séparation et estimation des paramètres." Paris 11, 2008. http://www.theses.fr/2008PA112064.

Abstract:
This thesis deals with the processing of polynomial phase signals in heavily corrupted environments, whether noise with high levels or impulse noise, the latter modeled using alpha-stable laws. Noise robustness is a common concern in signal processing, and while several algorithms are able to work at high Gaussian noise levels, the presence of impulse noise often leads to a great loss in performance or makes the algorithms unusable. Recently, some algorithms have been built to cope with impulse noise environments, but with one limitation: their achievable results degrade in Gaussian noise situations, so the appropriate method must first be selected according to the kind of noise. One of the key points of this thesis was therefore building algorithms that are robust to the kind of noise, in the sense that they have similar performance under Gaussian or alpha-stable noise. The second key point was building fast algorithms, a property difficult to combine with robustness.
17

Björk, Marcus. "Contributions to Signal Processing for MRI." Doctoral thesis, Uppsala universitet, Avdelningen för systemteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246537.

Abstract:
Magnetic Resonance Imaging (MRI) is an important diagnostic tool for imaging soft tissue without the use of ionizing radiation. Moreover, through advanced signal processing, MRI can provide more than just anatomical information, such as estimates of tissue-specific physical properties. Signal processing lies at the very core of the MRI process, which involves input design, information encoding, image reconstruction, and advanced filtering. Based on signal modeling and estimation, it is possible to further improve the images, reduce artifacts, mitigate noise, and obtain quantitative tissue information. In quantitative MRI, different physical quantities are estimated from a set of collected images. The optimization problems solved are typically nonlinear, and require intelligent and application-specific algorithms to avoid suboptimal local minima. This thesis presents several methods for efficiently solving different parameter estimation problems in MRI, such as multi-component T2 relaxometry, temporal phase correction of complex-valued data, and minimizing banding artifacts due to field inhomogeneity. The performance of the proposed algorithms is evaluated using both simulation and in-vivo data. The results show improvements over previous approaches, while maintaining a relatively low computational complexity. Using new and improved estimation methods enables better tissue characterization and diagnosis. Furthermore, a sequence design problem is treated, where the radio-frequency excitation is optimized to minimize image artifacts when using amplifiers of limited quality. In turn, obtaining higher fidelity images enables improved diagnosis, and can increase the estimation accuracy in quantitative MRI.
18

Ben, Khadhra Kais. "Surface Parameter Estimation using Bistatic Polarimetric X-band Measurements." Doctoral thesis, Universitätsbibliothek Chemnitz, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200801439.

Abstract:
To date, only very few bistatic measurements (airborne or in controlled laboratories) have been reported; therefore most current remote sensing methods are still focused on monostatic (backscatter) measurements. These methods, based on theoretical, empirical or semi-empirical models, enable the estimation of soil roughness and soil humidity (dielectric constant). For the bistatic case, only theoretical methods have been developed and tested with monostatic data. Hence, there remains a vital need to gain experience and knowledge about bistatic methods and data. The main purpose of this thesis is to estimate soil moisture and soil roughness using fully polarimetric bistatic measurements. In the experimental part, bistatic X-band measurements recorded in the Bistatic Measurement Facility (BMF) at the DLR Oberpfaffenhofen Microwaves and Radar Institute are presented. The bistatic measurement sets are composed of soils with different statistical roughness and different moistures controlled by a TDR (Time Domain Reflectivity) system. The BMF has been calibrated using the Isolated Antenna Calibration Technique (IACT); the calibration was validated by measuring the reflectivity of fresh water. In the second part, bistatic surface scattering analyses of the calibrated data set are discussed. The use of the specular algorithm to estimate the soil moisture of two surface roughnesses (rough and smooth) is then reported, and a new technique using the coherent term of the Integral Equation Method (IEM) to estimate the soil roughness is presented. The sensitivity of phase and reflectivity to moisture variation in the specular direction is also evaluated. Finally, the first results and validations of bistatic radar polarimetry for the specular case of surface scattering are introduced.
19

Aubin, Benjamin. "Mean-field methods and algorithmic perspectives for high-dimensional machine learning." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP083.

Abstract:
At a time when the use of data has reached an unprecedented level, machine learning, and more specifically deep learning based on artificial neural networks, has been responsible for very important practical advances. Its use is now ubiquitous in many fields of application, from image classification and text mining to speech recognition, time series prediction and text analysis. However, the understanding of many algorithms used in practice is mainly empirical, and their behavior remains difficult to analyze. These theoretical gaps raise many questions about their effectiveness and potential risks. Establishing theoretical foundations on which to base numerical observations has become one of the fundamental challenges of the scientific community. The main difficulty that arises in the analysis of most machine learning algorithms is to handle, analytically and numerically, a large number of interacting random variables. In this manuscript, we revisit an approach based on the tools of the statistical physics of disordered systems. Developed through a rich literature, they have been precisely designed to infer the macroscopic behavior of a large number of particles from their microscopic interactions. At the heart of this work, we strongly capitalize on the deep connection between the replica method and message-passing algorithms in order to shed light on the phase diagrams of various theoretical models, with an emphasis on the potential differences between statistical and algorithmic thresholds. We essentially focus on synthetic tasks and data generated in the teacher-student paradigm. In particular, we apply these mean-field methods to the Bayes-optimal analysis of committee machines, to the worst-case analysis of Rademacher generalization bounds for perceptrons, and to empirical risk minimization in the context of generalized linear models. Finally, we develop a framework to analyze estimation models with structured prior information, produced for instance by generative models based on deep neural networks with random weights.
20

Hugel, Thomas. "Estimations de satisfaisabilité." PhD thesis, Université Paris-Diderot - Paris VII, 2010. http://tel.archives-ouvertes.fr/tel-00582571.

Abstract:
The Boolean satisfiability problem 3-SAT is known to exhibit a threshold phenomenon as a function of the ratio between the number of clauses and the number of variables. We give estimates of the value of this threshold by means of combinatorial and probabilistic methods: the first moment method and the second moment method. These methods involve constrained optimization problems and lead us to make intensive use of the method of Lagrange multipliers. We implement a weighted form of the first moment method on Maneva's valid partial assignments, as well as some variants. This leads us to design a general weighting scheme for constraint satisfaction problems that is compatible with the first moment method; it consists of a seed and a dispatcher, and yields a weighting of valid partial assignments better than Maneva's. In some cases we also compare the performance of weighting and of orienting the solution space of constraint satisfaction problems with respect to the first moment method. We develop the first non-uniform selection of solutions for upper-bounding the 3-SAT threshold and show its superiority over its predecessors. We build a general framework for applying the second moment method to k-SAT and discuss the conditions that make it work; in particular, we make the second moment method work on Boolean solutions and on implicants. We extend this to the distributional model of k-SAT.
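For orientation, the baseline that such weighted first moment arguments refine is the plain first moment bound: a uniformly random assignment satisfies a random 3-clause with probability 7/8, so the expected number of satisfying assignments Z of a formula with n variables and αn clauses is

```latex
\mathbb{E}[Z] \;=\; 2^{n}\left(\frac{7}{8}\right)^{\alpha n} \;\longrightarrow\; 0
\qquad \text{for } \alpha \;>\; \frac{\ln 2}{\ln(8/7)} \;\approx\; 5.19,
```

and Markov's inequality then caps the 3-SAT threshold at about 5.19; the weightings and the non-uniform selection of solutions described above tighten exactly this kind of bound.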
21

Ben, Khadhra Kais. "Surface Parameter Estimation using Bistatic Polarimetric X-band Measurements." Doctoral thesis, Deutsches Zentrum für Luft- und Raumfahrt e.V. in der Helmholtz-Gemeinschaft, 2007. https://monarch.qucosa.de/id/qucosa%3A18987.

Abstract:
To date, only very few bistatic measurements (airborne or in controlled laboratories) have been reported; therefore most current remote sensing methods are still focused on monostatic (backscatter) measurements. These methods, based on theoretical, empirical or semi-empirical models, enable the estimation of soil roughness and soil humidity (dielectric constant). For the bistatic case, only theoretical methods have been developed and tested with monostatic data. Hence, there remains a vital need to gain experience and knowledge about bistatic methods and data. The main purpose of this thesis is to estimate soil moisture and soil roughness using fully polarimetric bistatic measurements. In the experimental part, bistatic X-band measurements recorded in the Bistatic Measurement Facility (BMF) at the DLR Oberpfaffenhofen Microwaves and Radar Institute are presented. The bistatic measurement sets are composed of soils with different statistical roughness and different moistures controlled by a TDR (Time Domain Reflectivity) system. The BMF has been calibrated using the Isolated Antenna Calibration Technique (IACT); the calibration was validated by measuring the reflectivity of fresh water. In the second part, bistatic surface scattering analyses of the calibrated data set are discussed. The use of the specular algorithm to estimate the soil moisture of two surface roughnesses (rough and smooth) is then reported, and a new technique using the coherent term of the Integral Equation Method (IEM) to estimate the soil roughness is presented. The sensitivity of phase and reflectivity to moisture variation in the specular direction is also evaluated. Finally, the first results and validations of bistatic radar polarimetry for the specular case of surface scattering are introduced.
APA, Harvard, Vancouver, ISO, and other styles
22

El, Habib Ouali Mohammed. "Méthodes d'intégration des estimations multi-échelles de la disparité à partir de la phase du filtre de Gabor : algorithmes et évaluations." Thèse, Université de Sherbrooke, 2000. http://savoirs.usherbrooke.ca/handle/11143/5011.

Full text
Abstract:
We propose a new methodology for the multiscale integration of phase-difference-based disparity estimates. The disparity map obtained from the phase differences of a single filter is not dense. Moreover, the disparity is not reliable everywhere, because of the unwanted effects of phase singularities in the filter response. It is therefore natural to use other scales to avoid these singularities, in order to improve the density and the accuracy of the disparity maps. To overcome these problems, the algorithms proposed in the literature rely on palliatives: computing the disparity as a weighted average of the multiscale estimates, a coarse-to-fine strategy, or an iterative refinement of the solution such as Newton iterations. As these palliatives have drawbacks, we propose explicit mechanisms for multiscale selection. This approach is in full agreement with recent work by Koenderink and Lindeberg. In our case, we use the magnitudes of the filter response, as well as the relative distance between the observed instantaneous frequency and the filter's central frequency, as fusion parameters for the multiscale disparities. This, however, assumes that an estimate is available at every scale considered. To satisfy this condition, we propose to use the derivatives of the original signal: we observe that the singularities do not occur at the same positions in the signal and in its derivative of order n. We also propose a quadratic model for determining the neighbourhoods of singularities without detecting the singularities themselves. Finally, we present the non-contextual and contextual evaluation of the algorithms, whose importance we show both for guiding research and for transferring the algorithms to industry.
APA, Harvard, Vancouver, ISO, and other styles
23

Ouali, Mohammed El Habib. "Méthodes d'intégration des estimations multi-échelles de la disparité à partir de la phase du filtre de Gabor : algorithmes et évaluations." ENSMP, 2000. http://www.theses.fr/2000ENMP0977.

Full text
Abstract:
We propose a new methodology for the multiscale integration of phase-difference-based disparity estimates. The disparity map obtained with a single filter usually exhibits poor density. Moreover, the disparity is not reliable everywhere, because of the unwanted effects of phase singularities. It is then natural to use other filter tunings, e.g. scales, to avoid these singularities, in order to improve the density and the precision of the disparity maps. To overcome these problems, the algorithms proposed in the literature use palliatives: the disparity is computed as a weighted average of the multiscale estimates, by a coarse-to-fine strategy or, finally, by an iterative calculation, such as Newton iterations, to refine the solution. As these palliatives have drawbacks, we propose explicit mechanisms for multiscale selection. This approach is in total agreement with recent works of Koenderink and Lindeberg. In our case, we use the magnitudes of the filter response, as well as the relative distance between the observed instantaneous frequency and the filter peak frequency, as fusion parameters for the multiscale estimates. However, this assumes that an estimate is available at all the scales considered. To satisfy this condition, we propose to use the derivatives of the original signal: we observe that the singularities do not occur at the same positions depending on whether we use the signal or its derivative of order N. We also propose a quadratic model to determine the singularity neighbourhoods without detecting the singularities themselves. Finally, we present the non-contextual and contextual evaluation of the algorithms, and show its importance both for guiding research and for transferring the algorithms to industry.
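The core single-scale computation that these multiscale schemes build on fits in a few lines of Python. In this sketch, the filter peak frequency stands in for the instantaneous frequency and a magnitude-based confidence serves as a crude singularity flag, both simplifications relative to the thesis; the filter tuning values are illustrative.

```python
import numpy as np

def gabor(signal, omega0, sigma):
    """Complex 1-D Gabor filtering by direct convolution."""
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2)) * np.exp(1j * omega0 * t)
    return np.convolve(signal, kernel, mode="same")

def phase_disparity(left, right, omega0=0.25, sigma=8.0):
    """Single-scale phase-difference disparity, d = wrap(phi_R - phi_L)/omega0,
    with the peak frequency standing in for the instantaneous frequency.
    Low response magnitude flags the neighbourhood of a phase singularity."""
    rl, rr = gabor(left, omega0, sigma), gabor(right, omega0, sigma)
    dphi = np.angle(rr * np.conj(rl))                # wrapped phase difference
    confidence = np.minimum(np.abs(rl), np.abs(rr))
    return dphi / omega0, confidence

rng = np.random.default_rng(0)
tex = rng.standard_normal(512)
d, c = phase_disparity(np.roll(tex, 3), tex)         # a 3-sample shift
keep = c > np.percentile(c, 75)                      # trust the confident quartile
print(f"median disparity over confident points: {np.median(d[keep]):.2f} (true 3)")
```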
APA, Harvard, Vancouver, ISO, and other styles
24

Beaumont, Guillaume. "Traitements correctifs des effets de décohérence acoustique induits par les fluctuations du milieu de propagation : algorithmes d’estimation bayésienne des directions d’arrivée en milieu fluctuant." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0200.

Full text
Abstract:
The purpose of this thesis is to propose new "physically informed" Bayesian algorithms for direction-of-arrival (DOA) estimation of underwater acoustic sources in the presence of random fluctuations caused by internal waves in the propagation medium. We first show how to describe these fluctuations as a local perturbation of the sound speed of the medium, and then their impact on a propagated wave and its measurement. In doing so, we can describe how the signal is degraded, motivating a statistical model of the perturbation. After a state of the art of DOA estimation techniques, first in constant and then in uncertain media, we model the fluctuations as a multiplicative phase noise structured according to a multivariate Gaussian distribution. Being very close to the theoretical structure model defined in the first chapter, this choice allows us to propose paSAMP, an Approximate Message Passing algorithm inspired by the phase retrieval literature, which proves more robust to additive noise and to structured phase noise than classical DOA estimation methods. We then revise the statistical model and consider a phase noise distributed according to a multivariate von Mises law, which models phase noise more faithfully. From this new model we derive two new methods, VitAMin and VistaBEM, extensions of paSAMP and of paVBEM respectively, the latter already present in the literature and relying on a variational Bayesian approximation. First results on synthetic data, using a simpler univariate von Mises model, are promising regarding the proper integration of these models and hence the robustness of the algorithms to phase noise of this kind. To conclude, the processing of real data from the ALMA campaign shows that the characteristic quantities described by the theoretical model can be extracted from signal statistics, and that certain specific internal waves can be identified from passive measurements alone. Such knowledge can then be used for rapid passive characterization of the medium's fluctuations, or as an informed initialization of the proposed algorithms.
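For orientation, the Python sketch below shows the baseline that such Bayesian algorithms improve upon: a conventional beamformer scan on a uniform linear array whose snapshot carries a correlated multiplicative phase noise. The array geometry, the noise levels and the cumulative-Gaussian stand-in for the phase perturbation are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def steering(theta, n_sensors, spacing=0.5):
    """ULA steering vector; element spacing in wavelengths."""
    return np.exp(2j * np.pi * spacing * np.arange(n_sensors) * np.sin(theta))

rng = np.random.default_rng(1)
n = 16
theta_true = np.deg2rad(12.0)
phase_noise = np.cumsum(rng.normal(0.0, 0.05, n))    # correlated along the array
snapshot = (steering(theta_true, n) * np.exp(1j * phase_noise)
            + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Conventional (Bartlett) scan; the Bayesian methods replace this with
# inference that models the multiplicative phase noise explicitly.
grid = np.deg2rad(np.linspace(-60.0, 60.0, 721))
power = np.array([np.abs(steering(t, n).conj() @ snapshot) for t in grid])
print(f"beamformer estimate: {np.rad2deg(grid[np.argmax(power)]):.1f} deg (true 12.0)")
```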
APA, Harvard, Vancouver, ISO, and other styles
25

Shu, Huaqiang. "Algorithmes de poursuite pour l'estimation de canal radio-mobile et performances asymptotiques : applications pour les systèmes OFDM." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2013. http://tel.archives-ouvertes.fr/tel-00918418.

Full text
Abstract:
Channel estimation is a crucial task of the receiver in wireless communication systems, especially under mobility, where the channel parameters vary with time. In this thesis, a new third-order tracking-loop estimator (RW3-CATL), which has a structure similar to a phase-locked loop (PLL) and low complexity, is first proposed to estimate the complex amplitude of the channel in the single-path, single-carrier case. The link between the asymptotic Kalman filter based on a third-order random-walk approximation model (RW3-KF) and the proposed estimator is established. Analytical expressions of the suboptimal parameters and of the corresponding MSE are given as functions of the loop gains. The asymptotic performance of the RW3-KF is then analyzed by solving the Riccati equations, and an analytical expression of the optimal state-noise variance that minimizes the asymptotic MSE is also derived. For multipath, multicarrier systems such as OFDM, the RW3-CATL loop is extended to a vector structure (RW3-LS-CATL). This loop keeps the same structure as in the single-path, single-carrier case, but an appropriate vector error signal is redefined from an LS estimator of the complex amplitudes of the paths, based on the pilots of the current OFDM symbol and on a priori knowledge of the path delays. For this scenario, the RW-KF Kalman filter jointly estimates the complex amplitudes of the paths, but it has high complexity and no analytical tuning formulas. To reduce the complexity, we propose a reduced-dimension filter, obtained by approximating the original filter with the Woodbury identities, which amounts to estimating the complex amplitude of each path separately. The optimal parameter tuning is deduced from the single-path, single-carrier case. Simulation results show that this reduced-complexity filter performs almost as well as the original Kalman filter.
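A PLL-like third-order loop of the kind the abstract describes can be sketched as follows; the gain values and the scalar observation model are illustrative assumptions, not the optimized tuning derived in the thesis.

```python
import numpy as np

def rw3_loop(y, mu1=0.3, mu2=0.04, mu3=0.002):
    """Third-order error-driven tracking loop for a slowly varying complex
    gain observed in noise, y_k = alpha_k + w_k. The cascaded integrators
    mimic the PLL-like RW3 structure; the gains are illustrative only."""
    est = np.empty(len(y), dtype=complex)
    a = da = d2a = 0j               # gain, its trend, and the trend's trend
    for k, yk in enumerate(y):
        e = yk - a                  # prediction error drives all three branches
        d2a += mu3 * e
        da += mu2 * e + d2a
        a += mu1 * e + da
        est[k] = a
    return est

rng = np.random.default_rng(2)
t = np.arange(2000)
true = np.exp(1j * (2 * np.pi * 1e-3 * t + 0.5))     # slow Doppler-like rotation
y = true + 0.2 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))
est = rw3_loop(y)
print(f"steady-state MSE: {np.mean(np.abs(est[500:] - true[500:]) ** 2):.4f}")
```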
APA, Harvard, Vancouver, ISO, and other styles
26

Buhot, Arnaud. "Etude de propriétés d'apprentissage supervisé et non supervisé par des méthodes de Physique Statistique." Phd thesis, Université Joseph Fourier (Grenoble), 1999. http://tel.archives-ouvertes.fr/tel-00001642.

Full text
Abstract:
The subject of this thesis is the study of various properties of learning from examples using methods of statistical physics, in particular the replica method. Supervised tasks, corresponding to the binary classification of data, as well as unsupervised tasks, such as the parametric estimation of a probability density, are considered. In the first part, a variational approach determines the performance of optimal learning of an anisotropy direction, and derives a cost function that achieves this optimal performance. In the case of supervised learning of a linearly separable task, numerical simulations confirming our theoretical results allow the finite-size effects to be determined. In the case of a probability density made of two Gaussians, the performance of optimal learning exhibits numerous phase transitions as a function of the number of data points. These results raise a controversy between the variational theory and the Bayesian approach to optimal learning. In the second part, we study two different approaches to learning complex classification tasks. The first is that of support vector machines; we study a family of these machines for which the linear and quadratic separators are two particular cases, and determine the capacity as well as the typical values of the margin and of the number of support vectors. The second approach is that of a parity machine trained with an incremental algorithm, which progressively builds a neural network with one hidden layer. The theoretical capacity obtained for the considered algorithm is close to that of the parity machine.
APA, Harvard, Vancouver, ISO, and other styles
27

Weber, Loriane. "Iterative tomographic X-Ray phase reconstruction." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI085/document.

Full text
Abstract:
Phase contrast imaging has been of growing interest in the biomedical field, since it provides enhanced contrast compared to attenuation-based imaging. Indeed, the phase shift of the incoming X-ray beam induced by an object can be up to three orders of magnitude larger than its attenuation, particularly for soft tissues in the imaging energy range. Phase contrast can be achieved, among other existing techniques, by letting a coherent X-ray beam propagate freely after the sample; in this case, the recorded signals can be modeled as Fresnel diffraction patterns. The challenge of quantitative phase imaging is to retrieve, from these diffraction patterns, both the attenuation and the phase information of the imaged object, quantities that are non-linearly entangled in the recorded signal. In this work we consider developments and applications of X-ray phase micro- and nano-CT. First, we investigated the reconstruction of seeded bone scaffolds using multiple-distance phase acquisitions. Phase retrieval is performed using the mixed approach, based on a linearization of the contrast model, and followed by filtered back-projection. We implemented an automatic version of the phase reconstruction process, to allow for the reconstruction of large sets of samples, and applied the method to bone scaffold data in order to study the influence of different bone cell cultures on bone formation. Then, human bone samples were imaged using phase nano-CT, and the potential of phase nano-imaging to analyze the morphology of the lacuno-canalicular network is shown. We applied existing tools to further characterize the mineralization and the collagen orientation of these samples. Phase retrieval, however, is an ill-posed inverse problem, and no general reconstruction method exists: existing methods are either sensitive to low-frequency noise or put stringent requirements on the imaged object. Therefore, we considered the joint inverse problem of combining phase retrieval and tomographic reconstruction in a single step. We proposed innovative iterative algorithms that couple the two steps in a single regularized loop, where a linearized contrast model is coupled with an algebraic tomographic reconstruction algorithm. These algorithms are tested on simulated data.
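For context, the Fresnel forward model that such phase retrieval inverts can be sketched in one dimension as follows; the wavelength, distance and Gaussian object are invented values, and the sign convention of the paraxial propagator varies between references.

```python
import numpy as np

# 1-D forward model of propagation-based phase contrast: the object's complex
# transmittance T(x) = exp(-B + 1j*phi) is propagated over a distance D by a
# Fresnel filter in Fourier space (paraxial approximation).
wavelength = 0.5e-10                  # ~25 keV X-rays, metres (illustrative)
D, n, dx = 0.5, 2048, 0.5e-6          # distance, samples, pixel size (illustrative)

x = (np.arange(n) - n // 2) * dx
phi = 1.5 * np.exp(-x ** 2 / (2 * (20e-6) ** 2))    # phase bump, soft-tissue-like
B = 1e-3 * np.exp(-x ** 2 / (2 * (20e-6) ** 2))     # much weaker attenuation
T = np.exp(-B + 1j * phi)

f = np.fft.fftfreq(n, d=dx)
propagator = np.exp(-1j * np.pi * wavelength * D * f ** 2)
intensity = np.abs(np.fft.ifft(np.fft.fft(T) * propagator)) ** 2
print(f"edge-enhancement contrast at the detector: {intensity.max() - intensity.min():.3f}")
```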
APA, Harvard, Vancouver, ISO, and other styles
28

Segkos, Michail. "Advanced techniques to improve the performance of OFDM Wireless LAN." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FSegkos.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering and M.S. in Applied Physics)--Naval Postgraduate School, June 2004.
Thesis advisor(s): Tri T. Ha, Brett H. Borden. Includes bibliographical references (p. 107-109). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
29

Frugier, Pierre Antoine. "Quantification 3D d’une surface dynamique par lumière structurée en impulsion nanoseconde. Application à la physique des chocs, du millimètre au décimètre." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112129.

Full text
Abstract:
A Structured Light System (SLS) is an efficient means to measure a surface topography, as it features both high accuracy and dense spatial sampling in a strictly non-invasive way. For these reasons, it has become a technique of reference in recent years. The aim of this PhD is to bring the technique to the field of shock physics. Experiments involving shocks are very specific: they only allow single-shot acquisition of extremely short phenomena occurring over a large range of spatial extensions (from a few millimetres to decimetres). To address these difficulties, we envisioned the use of a well-known high-speed technique: pulsed laser illumination. The first part of the work evaluates the key parameters that have to be taken into account to obtain sharp acquisitions. The study demonstrates that speckle and the depth-of-field limitation are of particular importance, and an effective way to smooth speckle in the nanosecond regime, leaving 14% residual contrast, is provided. The second part introduces an original projective formulation for object-point reconstruction. This geometric approach is rigorous; it involves neither weak-perspective assumptions nor geometric constraints (such as the camera and projector optical axes crossing in object space). From this formulation a calibration procedure is derived, and we demonstrate that any structured-light system can be calibrated by extending the Direct Linear Transformation (DLT) photogrammetric approach to SLS. We further demonstrate that reconstruction uncertainties can be derived a priori from the proposed model; the accuracy of the reconstruction depends both on the configuration of the instrument and on the object shape itself. A procedure for optimizing the configuration of the instrument, so as to lower the uncertainties for a given object, is also introduced. Since depth of field limits the smallest measurable field extension, the third part focuses on extending it through pupil coding, with an original way of designing phase components based on criteria and metrics defined in Fourier space; the components are optimized with a genetic algorithm. The design of a binary annular phase mask is demonstrated theoretically and experimentally: it tolerates defocus as large as Ψ ≥ ±40 radians without any need for image processing. We also demonstrate that masks designed with our method can restore extremely large defocus (Ψ ≈ ±100 radians) after processing, hence extending the depth of focus by unprecedented amounts. Finally, the fourth part presents experimental validation of the instrument in different regimes and at different scales. It was fielded on the LULI2000 high-energy laser facility, where it dynamically measured the deformation and fragmentation of a carbon-based material (millimetric fields). We also present measurements obtained under pyrotechnic loading on a cylindrical copper liner of decimetric dimensions, where the appearance and rapid growth of sub-millimetric radial deformations is measured at the surface of the liner.
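The DLT estimation mentioned in the abstract can be illustrated by a plain resection on synthetic data; applying the same linear solve to projector fringe coordinates is what extends it to a structured-light system. Everything below is synthetic and illustrative.

```python
import numpy as np

def dlt(world, image):
    """Estimate the 3x4 projection matrix (up to scale) from N >= 6
    3-D <-> 2-D correspondences, as the null vector of the stacked system."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    return np.linalg.svd(np.asarray(rows))[2][-1].reshape(3, 4)

# Synthetic check: project random points with a known matrix, then recover it
rng = np.random.default_rng(3)
P_true = rng.standard_normal((3, 4))
Xw = rng.uniform(-1.0, 1.0, (20, 3))
h = np.c_[Xw, np.ones(20)] @ P_true.T
uv = h[:, :2] / h[:, 2:3]

P_est = dlt(Xw, uv)
P_est /= np.linalg.norm(P_est)
P_ref = P_true / np.linalg.norm(P_true)
err = min(np.abs(P_est - P_ref).max(), np.abs(P_est + P_ref).max())
print(f"recovery error up to scale: {err:.2e}")
```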
APA, Harvard, Vancouver, ISO, and other styles
30

Kumar, V. Seshadri Sravan. "Development Of Algorithms For Power System State Estimation Incorporating Synchronized Phasor Measurements." Thesis, 2011. http://hdl.handle.net/2005/2113.

Full text
Abstract:
The ability to implement Wide Area Monitoring and Control in power systems is developing into a necessity in order to prevent wide-scale cascading outages. Monitoring of events in the power system provides a great deal of insight into the behaviour of the system. The research work presented in this thesis focuses on two tools that aid in monitoring: State Estimation and synchronized phasors provided by Phasor Measurement Units (PMUs). State Estimation is essentially an on-line data processing scheme used to estimate the best possible state (i.e. voltage phasors) from a monitored set of measurements (active and reactive powers/voltage phasor measurements). The ever-growing complexity and developments in the state of the art call for robust state estimators that converge accurately and rapidly. Newton's method forms the basis for most of the solution approaches, but for real-time application in modern power systems the existing Newton-based state estimation algorithms are numerically too fragile. It is known that Newton's algorithm may fail to converge if the initial nominal point is far from the optimal point, and it can sometimes converge to a local minimum. Newton's step can also fail to be a descent direction if the gain matrix is nearly singular or ill-conditioned. This thesis proposes a new and more robust method based on linear programming and trust-region techniques. The proposed formulation is suitable for Upper Bound Linear Programming; it is first introduced and its convergence characteristics under Upper Bound Linear Programming are studied. Subsequently, the solution to the same formulation is obtained using trust-region algorithms. The proposed algorithms have been tested and compared with well-known methods, and the trust-region-based state estimator is found to be more reliable. This enhanced reliability justifies the additional time and computational effort required for its execution. One of the key elements in synchrophasor-based wide-area monitoring is the Phasor Measurement Unit. Synchronized, real-time voltage phasor measurements over a distributed power network present an excellent opportunity for major improvements in power system control and protection; two of the most significant applications are state estimation and instability prediction. In recent years, there has been significant research activity on the problem of finding a suitable number of PMUs and their optimal locations. For state estimation, such procedures, which basically ensure observability based on network topology, are sufficient. For instability prediction, however, it is essential that the PMUs are located such that important/vulnerable buses are also directly monitored. In this thesis a method for optimal placement of PMUs that considers the vulnerable buses is developed. This method serves two purposes: identifying optimal PMU locations (planning stage) and identifying the set of PMUs to be closely monitored for instability prediction. The major issue is to identify the key buses when angular and voltage stability prediction is taken into account. An Integer Linear Programming technique with equality and inequality constraints is used to find the optimal placement set. Further, various aspects of including phasor measurements in state estimation algorithms are addressed.
Studies are carried out on various sample test systems: an IEEE 30-bus system and real-life Indian southern grid equivalents with 24, 72 and 205 buses.
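To make the placement idea concrete, here is a toy version in Python: assuming a PMU observes its own bus and, via branch currents, its neighbours, the smallest covering set of a small made-up network is found by exhaustive search. The thesis instead formulates an Integer Linear Program and adds vulnerability-driven constraints; the network below is invented.

```python
import numpy as np
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]   # made-up 6-bus net
n_bus = 6
adj = np.eye(n_bus, dtype=bool)          # a PMU always sees its own bus
for i, j in edges:
    adj[i, j] = adj[j, i] = True         # ... and every adjacent bus

def smallest_cover(adj):
    n = adj.shape[0]
    for k in range(1, n + 1):            # try placements of increasing size
        for combo in combinations(range(n), k):
            if adj[list(combo)].any(axis=0).all():   # every bus seen by some PMU
                return combo
    return tuple(range(n))

print("optimal PMU buses:", smallest_cover(adj))     # -> (1, 4) for this net
```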
APA, Harvard, Vancouver, ISO, and other styles
31

Ren, Jinfeng. "Synchrophasor Measurement Using Substation Intelligent Electronic Devices: Algorithms and Test Methodology." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10388.

Full text
Abstract:
This dissertation studies the performance of synchrophasor measurements obtained using substation Intelligent Electronic Devices (IEDs) and proposes new algorithms and test methodology to improve and verify their performance when used in power system applications. To improve the dynamic performance under sinusoidal waveform distortions such as modulation, frequency drift and abrupt changes in magnitude, an adaptive approach is proposed for accurately estimating phasors while eliminating the effect of various transient disturbances on voltages and currents. The algorithm pre-analyzes the waveform spanning the window of observation to identify and localize the discontinuities which affect the accuracy of phasor computation. A quadratic polynomial signal model is used to improve the accuracy of phasor estimates during power oscillations. Extensive experimental results demonstrate the advantages. This algorithm can also be used as a reference algorithm for testing the performance of devices extracting synchronized phasor measurements. A novel approach is developed for estimating the phasor parameters, namely frequency, magnitude and angle, in real time, based on a newly constructed recursive wavelet transform. This algorithm is capable of estimating the phasor parameters within a quarter cycle of an input signal. It features fast response and achieves high accuracy over a wide range of frequency deviations. The signal sampling rate and data window size can be selected to meet application requirements such as fast response, high accuracy and low computational burden. In addition, an approach for eliminating a decaying DC component, which has a significant impact on phasor estimation, is proposed using the recursive wavelet transform. The dissertation also develops test methodology and tools for evaluating conformance to the standard-defined performance for synchrophasor measurements. An interleaving technique applied to the output phasors can equivalently increase the reporting rate and precisely depict the transient behavior of a synchrophasor unit under a step input. A reference phasor estimator is developed and implemented. Various types of Phasor Measurement Units (PMUs) and PMU-enabled IEDs, with different time synchronization options, have been tested against the standards using the proposed algorithm. Test results demonstrate the effectiveness and advantages.
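The conformance metric behind such testing, the Total Vector Error of IEEE C37.118, reduces to a one-liner; the magnitude and angle errors below are made up for illustration.

```python
import numpy as np

def tve(x_est, x_ref):
    """Total Vector Error: phasor error magnitude relative to the reference."""
    return np.abs(x_est - x_ref) / np.abs(x_ref)

x_ref = 1.000 * np.exp(1j * np.deg2rad(30.0))        # reference phasor
x_est = 1.005 * np.exp(1j * np.deg2rad(30.2))        # 0.5 % magnitude, 0.2 deg angle
print(f"TVE = {100 * tve(x_est, x_ref):.2f} % (steady-state limit is typically 1 %)")
```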
APA, Harvard, Vancouver, ISO, and other styles
32

Rice, Feng. "Bounds and algorithms for carrier frequency and phase estimation." 2002. http://arrow.unisa.edu.au:8081/1959.8/24972.

Full text
Abstract:
Quadrature amplitude modulation (QAM) is a highly bandwidth-efficient transmission technique for digital communications. It makes use of multiple signal phase and amplitude levels to carry multiple bits per symbol, which requires accurate and robust carrier phase and frequency estimation in the receiver.
Thesis (PhD Electronic Engineering) -- University of South Australia, 2002
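As an illustration of the bound-versus-estimator comparisons such a thesis makes, the sketch below compares a data-aided maximum-likelihood phase estimate on known QPSK symbols in AWGN with the corresponding Cramér-Rao bound var(phi_hat) >= 1/(2*N*SNR); the symbol count and SNR are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(4)
N, snr_db, phi = 64, 10.0, 0.3
snr = 10 ** (snr_db / 10)
sym = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, N) + 1))   # known QPSK symbols

estimates = []
for _ in range(2000):
    noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr)
    r = sym * np.exp(1j * phi) + noise
    estimates.append(np.angle(np.sum(r * np.conj(sym))))          # data-aided ML
print(f"Monte Carlo variance: {np.var(estimates):.2e}   CRB: {1 / (2 * N * snr):.2e}")
```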
APA, Harvard, Vancouver, ISO, and other styles
33

Yang, Shan-Ren, and 楊善任. "An EM Algorithm for Phase Noise Removal and Channel Estimation in OFDM Systems." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/69143696281664688773.

Full text
Abstract:
Master's thesis, National Central University, Graduate Institute of Communication Engineering, ROC academic year 98 (2009-2010).
The common phase error (CPE) and the intercarrier interference (ICI) caused by phase noise in an orthogonal frequency division multiplexing (OFDM) system can significantly reduce system performance. In contrast to previous proposals in the literature, we study the expectation-maximization (EM) algorithm to obtain a more accurate estimate of the phase noise and to suppress the resulting ICI. For application in a time-varying channel environment, channel estimation is implemented by combining the EM algorithm with a decision-feedback method, taking advantage of the merits of both in order to limit the influence of decision errors at low SNR. In the performance analysis and simulations of SINR and BER for the proposed algorithm, the simulated SINR results are close to the analytical ones, and the BER performance improves on previous work.
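The starting point of such schemes, a pilot-based least-squares estimate of the common phase error, can be sketched as follows; the grid size, pilot spacing and noise level are illustrative, and ICI is lumped into the noise term of the toy signal model.

```python
import numpy as np

def estimate_cpe(Y, H, X, pilots):
    """LS estimate of the common phase error theta from pilot subcarriers,
    assuming Y_k ~ exp(1j*theta) * H_k * X_k + noise (ICI lumped into noise)."""
    return np.angle(np.sum(Y[pilots] * np.conj(H[pilots] * X[pilots])))

rng = np.random.default_rng(5)
n = 64
X = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, n) + 1))    # QPSK grid
H = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
theta = 0.12                                                    # true CPE, radians
Y = np.exp(1j * theta) * H * X + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(f"CPE estimate from pilots: {estimate_cpe(Y, H, X, np.arange(0, n, 8)):.3f} rad "
      f"(true {theta})")
```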
APA, Harvard, Vancouver, ISO, and other styles
34

Rodrigo, Navarro Jaime. "Phase Noise Tolerant Modulation Formats and DSP Algorithms for Coherent Optical Systems." Doctoral thesis, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207034.

Full text
Abstract:
Coherent detection together with multilevel modulation formats has the potential to significantly increase the capacity of existing optical communication systems at no extra cost in signal bandwidth. However, these modulation formats are more susceptible to the impact of different noise sources and distortions, as the distance between their constellation points in the complex plane shrinks as the modulation order grows. In this context, digital signal processing (DSP) plays a key role, as it allows compensating for the impairments arising during signal generation, transmission and/or detection, relaxing the complexity of the overall system. The transition towards pluggable optical transceivers offers flexibility for network design/upgrade but sets strict requirements on the power consumption of the DSP, thus limiting its complexity. The DSP module complexity, however, scales with the modulation order, and in this scenario low-complexity yet high-performance DSP algorithms are highly desirable. In this thesis, we mainly focus on the impact of laser phase noise arising from the transmitter and local oscillator (LO) lasers in coherent optical communication systems employing high-order modulation formats. In these systems, the phase noise of the transmitting and LO lasers translates into phase noise in the received constellation, impeding the proper recovery of the transmitted data. To increase the system phase noise tolerance, we first explore re-arranging the constellation points into a circularly shaped mQAM (C-mQAM) constellation, to exploit its inherent phase noise tolerance. Different low-complexity carrier phase recovery (CPR) schemes applicable to these constellations are proposed, along with a discussion of their performance and implementation complexity. Secondly, design guidelines for high-performance, low-complexity CPR schemes for conventional square mQAM constellations are presented. We identify an inherent limitation of the state-of-the-art blind phase search (BPS) carrier phase recovery algorithm, which hinders its achievable performance and implementation complexity, and present a low-complexity solution to overcome it. Design guidelines for multi-stage CPR schemes for high-order modulation formats, where the BPS algorithm is employed at any of the stages, are also provided and discussed. Finally, the interplay between the received dispersed signal and the LO phase noise is analytically investigated to characterize the origin of the equalization-enhanced phase noise phenomenon.



Funding: EU project ICONE, grant #608099.
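A minimal block implementation of the blind phase search stage discussed above is sketched below; the candidate count, test range and block length are illustrative, and the handling of the residual quadrant ambiguity (normally resolved by differential coding) is omitted.

```python
import numpy as np

def blind_phase_search(block, constellation, n_test=32, half_range=np.pi / 4):
    """Block BPS: rotate the received block by each candidate phase, sum the
    squared distances to the nearest constellation points, and return the
    candidate with the smallest total metric."""
    phases = np.linspace(-half_range, half_range, n_test, endpoint=False)
    rot = block[None, :] * np.exp(-1j * phases[:, None])             # (n_test, N)
    d = np.min(np.abs(rot[:, :, None] - constellation[None, None, :]) ** 2, axis=2)
    return phases[np.argmin(d.sum(axis=1))]

levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = ((levels[:, None] + 1j * levels[None, :]) / np.sqrt(10.0)).ravel()

rng = np.random.default_rng(6)
tx = qam16[rng.integers(0, 16, 64)]
rx = tx * np.exp(1j * 0.1) + 0.02 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
print(f"recovered carrier phase: {blind_phase_search(rx, qam16):.3f} rad (true 0.100)")
```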
APA, Harvard, Vancouver, ISO, and other styles
35

Višňák, Jakub. "Kvantově chemické algoritmy pro kvantové počítače." Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-310395.

Full text
Abstract:
Title: Quantum computing algorithms for quantum chemistry Author: Jakub Višňák Abstract: The topic of this study is the simulation of the quantum algorithm for the diagonalization of the matrix representation of the all-electron Dirac-Coulomb Hamiltonian of the SbH molecule. Two different limited CI expansions were used to describe both the ground state (X 0+) and the first excited doublet (A 1) by simulating the Iterative Phase Estimation Algorithm (IPEA). In the simulations numerically performed in this work, the "compact mapping" has been employed for the representation of the evolution operator exp(iĤt); in the theoretical part of the work, the "direct mapping" is described as well. The influence of the method for choosing the initial eigenvector estimate is studied for both the IPEA A and IPEA B variants. For those variants, the success probabilities p_m are computed for different single points on the SbH dissociation curves. The initial eigenvector estimates based on the "CISD(2)" method are found to be sufficient for both studied LCI expansions up to internuclear distances R ≈ 6 a0. The dependence of p_m on the overlap |⟨ψ0|ψ⟩|² between the eigenvector in question and its initial estimate is studied for the IPEA B method. The usability of both variants of the IPEA in possible later calculations is...
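The measurement logic of the IPEA can be emulated classically for an exact eigenstate, independent of the Hamiltonian being simulated; the sketch below is a noise-free idealization with ideal (thresholded) measurements, not a model of the Dirac-Coulomb problem itself.

```python
import numpy as np

def ipea(phi, m_bits):
    """Noise-free classical emulation of the IPEA for an exact eigenstate
    U|psi> = exp(2*pi*1j*phi)|psi>: one bit of phi per iteration, least
    significant first, with the standard phase-feedback correction."""
    bits = [0] * (m_bits + 1)                       # bits[k] = x_k (1-indexed)
    for k in range(m_bits, 0, -1):
        omega = -2 * np.pi * sum(bits[j] * 2.0 ** (k - j - 1)
                                 for j in range(k + 1, m_bits + 1))
        p0 = 0.5 * (1 + np.cos(2 * np.pi * 2 ** (k - 1) * phi + omega))
        bits[k] = int(p0 < 0.5)                     # ideal measurement outcome
    return sum(bits[k] * 2.0 ** (-k) for k in range(1, m_bits + 1))

print(ipea(27 / 256, 8) == 27 / 256)            # dyadic phase: exact -> True
print(abs(ipea(1 / 3, 8) - 1 / 3) < 2 ** -8)    # non-dyadic: 8-bit accuracy -> True
```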
APA, Harvard, Vancouver, ISO, and other styles
36

LIN, ZHONG-SHENG, and 林中聖. "An algorithm to the regression quantile and its application to the estimation of continuous two-phase regression model." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/73143005336508395395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Taati, BABAK. "Generation and Optimization of Local Shape Descriptors for Point Matching in 3-D Surfaces." Thesis, 2009. http://hdl.handle.net/1974/5107.

Full text
Abstract:
We formulate Local Shape Descriptor selection for model-based object recognition in range data as an optimization problem and offer a platform that facilitates a solution. The goal of object recognition is to identify and localize objects of interest in an image. Recognition is often performed in three phases: point matching, where correspondences are established between points on the 3-D surfaces of the models and the range image; hypothesis generation, where rough alignments are found between the image and the visible models; and pose refinement, where the accuracy of the initial alignments is improved. The overall efficiency and reliability of a recognition system is highly influenced by the effectiveness of the point matching phase. Local Shape Descriptors are used for establishing point correspondences by encapsulating local shape, such that similarity between two descriptors indicates geometric similarity between their respective neighbourhoods. We present a generalized platform for constructing local shape descriptors that subsumes a large class of existing methods and allows for tuning descriptors to the geometry of specific models and to sensor characteristics. Our descriptors, termed Variable-Dimensional Local Shape Descriptors, are constructed as multivariate observations of several local properties and are represented as histograms. The optimal set of properties, which maximizes the performance of a recognition system, depends on the geometry of the objects of interest and the noise characteristics of the range image acquisition devices, and is selected by pre-processing the models and sample training images. Experimental analysis confirms the superiority of optimized descriptors over generic ones in recognition tasks in LIDAR and dense stereo range images.
Thesis (Ph.D., Electrical & Computer Engineering) -- Queen's University, 2009-09-01.
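A toy flavour of the histogram-of-local-properties construction is sketched below; the two properties, support radius and bin counts are arbitrary choices for illustration, whereas the thesis selects them by optimizing recognition performance on training data.

```python
import numpy as np

def descriptor(points, idx, radius=0.25, bins=6):
    """Toy 'variable-dimensional' descriptor: a normalized 2-D histogram of
    two local properties over the support region of point idx - the radial
    distance and the offset along the local PCA normal."""
    p = points[idx]
    nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
    q = nbrs - nbrs.mean(axis=0)
    normal = np.linalg.eigh(q.T @ q)[1][:, 0]        # smallest-variance direction
    r = np.linalg.norm(nbrs - p, axis=1) / radius    # property 1: radial distance
    h = (nbrs - p) @ normal                          # property 2: normal offset
    hist = np.histogram2d(r, h, bins=bins, range=[[0, 1], [-radius, radius]])[0]
    return (hist / max(hist.sum(), 1.0)).ravel()

rng = np.random.default_rng(7)
d = rng.standard_normal((4000, 3))
sphere = d / np.linalg.norm(d, axis=1, keepdims=True)
sphere += 0.005 * rng.standard_normal(sphere.shape)  # sensor-like noise
print(f"distance between two descriptors on a uniform shape: "
      f"{np.linalg.norm(descriptor(sphere, 0) - descriptor(sphere, 1)):.3f}")
```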
APA, Harvard, Vancouver, ISO, and other styles