Dissertations / Theses on the topic 'Reconstruction du signal'


1

Serdaroglu, Bulent. "Signal Reconstruction From Nonuniform Samples." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605850/index.pdf.

Abstract:
Sampling and reconstruction have been fundamental signal processing operations throughout the history of signal theory. Classically, uniform sampling is treated so that the resulting mathematics is simple. However, there are various instances in which nonuniform sampling, and reconstruction of signals from their nonuniform samples, are required. There exist two broad classes of reconstruction methods: reconstruction according to a deterministic model and reconstruction according to a stochastic model. In this thesis, the most fundamental aspects of nonuniform sampling and reconstruction according to a deterministic model are analyzed, implemented and tested by considering specific nonuniform reconstruction algorithms. Accuracy of reconstruction, computational efficiency and noise stability are the three criteria against which nonuniform reconstruction algorithms are tested. Specifically, four classical closed-form interpolation algorithms proposed by Yen are discussed and implemented. These algorithms are tested, according to the proposed criteria, in a variety of conditions in order to identify their performance in reconstruction quality and robustness to noise and signal conditions. Furthermore, a filter bank approach is discussed for interpolation from nonuniform samples in a computationally efficient manner. This approach is implemented, and its efficiency as well as the resulting filter characteristics are observed. In addition to Yen's classical algorithms, a trade-off algorithm, which claims to find an optimal balance between reconstruction accuracy and noise stability, is analyzed and simulated for comparison among all discussed interpolators. At the end of the stability tests, Yen's third algorithm, known as the classical recurrent nonuniform sampling, is found to be superior to the remaining interpolators from both an accuracy and a stability point of view.
2

Neuman, Bartosz P. "Signal processing in diffusion MRI : high quality signal reconstruction." Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/27691/.

Abstract:
Magnetic Resonance Imaging (MRI) is a medical imaging technique which is especially sensitive to different soft tissues, producing good contrast between them. It allows for in vivo visualisation of internal structures in detail and has become an indispensable tool for diagnosing and monitoring brain-related diseases and pathologies. Amongst other uses, MRI can measure the random incoherent motion of water molecules, which in turn allows structural information to be inferred. One of the main challenges in processing and analysing four-dimensional diffusion MRI images is low signal quality. To improve the signal quality, either denoising algorithms or angular and spatial regularisation are utilised. A regularisation method based on the Laplace-Beltrami smoothing operator has been successfully applied to the diffusion signal. In this thesis, a new regularisation strength selection scheme for diffusion signal regularisation is introduced. A mathematical model of the diffusion signal is used in Monte Carlo simulations, and a regularisation strength that optimally reconstructs the diffusion signal is sought. The regularisation values found in this research show a different trend from the currently used L-curve analysis, and further improve reconstruction accuracy. Additionally, as an alternative to regularisation methods, a backward elimination regression for spherical harmonics is proposed. Instead of using the regularisation term as a low-pass filter, a statistical t-test classifies regression terms as reliable or corrupted. Four algorithms that use this information are further introduced. As a result, a selective filtering is constructed that retains the angular sharpness of the signal while reducing the corruptive effect of measurement noise. Finally, a statistical approach for estimating the diffusion signal is investigated. Based on the physical properties of water diffusion, prior knowledge for the diffusion signal is constructed.
The spherical harmonic transform is then formulated as a Bayesian regression problem. Diffusion signal reconstructed with the addition of such prior knowledge is accurate, noise resilient, and of high quality.
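The regularised fit described above has the standard penalised least-squares form c = (BᵀB + λP)⁻¹Bᵀs, where for spherical harmonics the Laplace-Beltrami penalty is P = diag((l(l+1))²). The sketch below is hedged: it uses a 1-D cosine basis as a stand-in for spherical harmonics, with an analogous penalty growing with order, since the normal-equation structure is identical; all sizes and values are illustrative.

```python
import numpy as np

def regularised_fit(B, s, penalty, lam):
    """Penalised least squares: argmin_c ||B c - s||^2 + lam * c^T diag(penalty) c.
    For a spherical-harmonic basis the penalty would be (l(l+1))^2 per coefficient."""
    return np.linalg.solve(B.T @ B + lam * np.diag(penalty), B.T @ s)

# 1-D cosine basis as a stand-in; penalty grows with order k like (k(k+1))^2
n, K = 200, 20
t = np.linspace(0, np.pi, n)
B = np.cos(np.outer(t, np.arange(K)))
penalty = (np.arange(K) * (np.arange(K) + 1.0)) ** 2
rng = np.random.default_rng(1)
clean = np.cos(2 * t) + 0.5 * np.cos(4 * t)   # lies in the basis (k = 2, 4)
noisy = clean + 0.1 * rng.standard_normal(n)
recon = B @ regularised_fit(B, noisy, penalty, lam=1e-4)
```

Sweeping lam and scoring the reconstruction against a known simulated signal, as in the Monte Carlo scheme described above, is what selects the regularisation strength.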
3

Moose, Phillip J. "Approximate signal reconstruction from partial information." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06102009-063326/.

4

Scoular, Spencer Charles. "Sampling and reconstruction of one-dimensional analogue signals." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283938.

5

Pillai, Anu Kalidas Muralidharan. "Signal Reconstruction Algorithms for Time-Interleaved ADCs." Doctoral thesis, Linköpings universitet, Kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117826.

Abstract:
An analog-to-digital converter (ADC) is a key component in many electronic systems. It is used to convert analog signals to the equivalent digital form. The conversion involves sampling, the process of converting a continuous-time signal to a sequence of discrete-time samples, and quantization, in which each sampled value is represented using a finite number of bits. The sampling rate and the effective resolution (number of bits) are two key ADC performance metrics. Today, ADCs form a major bottleneck in many applications, such as communication systems, since it is difficult to simultaneously achieve a high sampling rate and high resolution. Among the various ADC architectures, the time-interleaved analog-to-digital converter (TI-ADC) has emerged as a popular choice for achieving very high sampling rates and resolutions. In principle, by interleaving the outputs of M identical channel ADCs, a TI-ADC can achieve the same resolution as a single channel ADC but with M times higher bandwidth. However, in practice, mismatches between the channel ADCs result in a nonuniformly sampled signal at the output of a TI-ADC, which reduces the achievable resolution. Often, in TI-ADC implementations, digital reconstructors are used to recover the uniform-grid samples from the nonuniformly sampled signal at the output of the TI-ADC. Since such reconstructors operate at the TI-ADC output rate, reducing the number of computations required per corrected output sample helps to reduce the power consumed by the TI-ADC. Also, as the mismatch parameters change occasionally, the reconstructor should support online reconfiguration with minimal or no redesign. Further, it is advantageous to have reconstruction schemes that require fewer coefficient updates during reconfiguration. In this thesis, we focus on reducing the design and implementation complexities of nonrecursive finite-length impulse response (FIR) reconstructors.
We propose efficient reconstruction schemes for three classes of nonuniformly sampled signals that can occur at the output of TI-ADCs. Firstly, we consider a class of nonuniformly sampled signals that occur as a result of static timing mismatch errors or due to channel mismatches in TI-ADCs. For this type of nonuniformly sampled signal, we propose three reconstructors which utilize a two-rate approach to derive the corresponding single-rate structure. The two-rate-based reconstructors move part of the complexity to a symmetric filter and also simplify the reconstruction problem. The complexity reduction stems from the fact that half of the impulse response coefficients of the symmetric filter are equal to zero and that, compared to the original reconstruction problem, the simplified problem requires only a simpler reconstructor. Next, we consider the class of nonuniformly sampled signals that occur when a TI-ADC is used for sub-Nyquist cyclic nonuniform sampling (CNUS) of sparse multi-band signals. Sub-Nyquist sampling utilizes the sparsity of the analog signal to sample it at a lower rate. However, the reduced sampling rate comes at the cost of additional digital signal processing that is needed to reconstruct the uniform-grid sequence from the sub-Nyquist sampled sequence obtained via CNUS. The existing reconstruction scheme is computationally intensive and time-consuming, and offsets the gains obtained from the reduced sampling rate. Also, in applications where the band locations of the sparse multi-band signal can change from time to time, the reconstructor should support online reconfigurability. Here, we propose a reconstruction scheme that reduces the computational complexity of the reconstructor and, at the same time, simplifies its online reconfigurability.
Finally, we consider a class of nonuniformly sampled signals which occur at the output of TI-ADCs that use some of the input sampling instants for sampling a known calibration signal. The samples corresponding to the calibration signal are used for estimating the channel mismatch parameters. In such TI-ADCs, nonuniform sampling is due to the mismatches between the channel ADCs and due to the missing input samples corresponding to the sampling instants reserved for the calibration signal. We propose three reconstruction schemes for such nonuniformly sampled signals and show using design examples that, compared to a previous solution, the proposed schemes require substantially lower computational complexity.
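The timing-mismatch correction such reconstructors perform can be sketched with a single fractional-delay FIR filter: a channel that samples late by a fraction of the period is digitally shifted back onto the uniform grid. This toy two-channel example uses a Lagrange fractional-delay design; it illustrates the correction idea only and is not one of the two-rate structures proposed in the thesis. All parameter values are illustrative.

```python
import numpy as np

def lagrange_fd(D, order=7):
    """Lagrange fractional-delay FIR whose total delay is D samples."""
    h = np.ones(order + 1)
    for k in range(order + 1):
        for m in range(order + 1):
            if m != k:
                h[k] *= (D - m) / (k - m)
    return h

# Toy two-channel TI-ADC: the odd channel samples late by eps*T
T, eps, f0 = 1.0, 0.1, 0.11
n_ch = np.arange(120)                              # channel sample index
x = lambda t: np.sin(2 * np.pi * f0 * t)
odd_skewed = x((2 * n_ch + 1 + eps) * T)           # what the skewed channel records
odd_ideal = x((2 * n_ch + 1) * T)                  # what it should have recorded
# The odd channel runs at period 2T, so a skew of eps*T is eps/2 of its own
# period; delay by 3 + eps/2 samples (3 is the filter's bulk integer delay).
h = lagrange_fd(3 + eps / 2)
corrected = np.convolve(odd_skewed, h)[: len(n_ch)]
# corrected[n] estimates odd_ideal[n - 3]; compare on the settled region
err_corr = np.max(np.abs(corrected[10:110] - odd_ideal[7:107]))
err_raw = np.max(np.abs(odd_skewed[10:110] - odd_ideal[10:110]))
```

After correction the residual error is a small fraction of the raw skew-induced error; a practical reconstructor must additionally handle frequency-dependent mismatches and online coefficient updates, which is where the thesis's contributions lie.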
6

Fuller, Megan M. (Megan Marie). "Inverse filtering by signal reconstruction from phase." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/89858.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-86).
A common problem that arises in image processing is that of performing inverse filtering on an image that has been blurred. Methods for doing this have been developed, but require fairly accurate knowledge of the magnitude of the Fourier transform of the blurring function and are sensitive to noise in the blurred image. It is known that a typical image is defined completely by its region of support and a sufficient number of samples of the phase of its Fourier transform. We will investigate a new method of deblurring images based only on phase data. It will be shown that this method is much more robust in the presence of noise than existing methods and that, because no magnitude information is required, it is also more robust to an incorrect guess of the blurring filter. Methods of finding the region of support of the image will also be explored.
by Megan M. Fuller.
S.M.
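The phase-only reconstruction underlying this approach can be sketched with the classical alternating-projection iteration: impose the known Fourier phase, then impose the known region of support. The sketch below reconstructs a 1-D finite-support sequence from its DFT phase alone; sizes, values, and the iteration count are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def reconstruct_from_phase(phase, support, n_iter=2000, seed=0):
    """Alternating projections: keep the current Fourier magnitude but impose
    the known phase, then impose the known region of support."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(phase)) * support
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = np.abs(X) * np.exp(1j * phase)      # impose the known phase
        x = np.real(np.fft.ifft(X)) * support   # impose finite support
    return x

N = 64
x_true = np.zeros(N)
x_true[:16] = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]
support = np.zeros(N)
support[:16] = 1.0
phase = np.angle(np.fft.fft(x_true))
x_rec = reconstruct_from_phase(phase, support)
```

Phase-only reconstruction determines the signal only up to a positive scale factor, so agreement is measured by correlation rather than absolute error.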
7

Cheng, Siuling. "Signal reconstruction from discrete-time Wigner distribution." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/41550.

Abstract:

The Wigner distribution is considered to be one of the most powerful tools for time-frequency analysis of nonstationary signals. The Wigner distribution is a bilinear signal transformation which provides a two-dimensional time-frequency characterization of one-dimensional signals. Although much work has been done recently in signal analysis and applications using the Wigner distribution, not many synthesis methods for the Wigner distribution have been reported in the literature.

This thesis is concerned with signal synthesis from the discrete-time Wigner distribution and from the discrete-time pseudo-Wigner distribution, and their applications in noise filtering and signal separation. Various algorithms are developed to reconstruct signals from modified or specified Wigner distributions and pseudo-Wigner distributions, which generally do not have valid Wigner distribution or pseudo-Wigner distribution structures. These algorithms are successfully applied to the noise filtering and signal separation problems.


Master of Science
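For reference, the forward transform that such synthesis methods invert can be computed directly from one common definition, W[n, k] = Σ_m x[n+m] x*[n-m] e^{-j2πkm/N}. The sketch below is a pseudo-Wigner variant with the lag window limited by the signal edges; normalisation conventions vary across the literature, so this is one illustrative choice, not the thesis's exact definition.

```python
import numpy as np

def discrete_wigner(x):
    """Discrete pseudo-Wigner distribution W[n, k] = sum_m x[n+m] conj(x[n-m])
    exp(-j 2 pi k m / N); frequency bin k corresponds to normalized
    frequency k / (2 N)."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)               # lags that stay inside the signal
        m = np.arange(-m_max, m_max + 1)
        r = x[n + m] * np.conj(x[n - m])        # instantaneous autocorrelation
        k = np.arange(N)
        # DFT over the lag variable; Hermitian symmetry in m makes this real
        W[n] = np.real(np.exp(-2j * np.pi * np.outer(k, m) / N) @ r)
    return W

# A complex tone at normalized frequency 0.125 concentrates at bin 2 * 0.125 * N
x = np.exp(2j * np.pi * 0.125 * np.arange(64))
W = discrete_wigner(x)
```

Synthesis is the inverse problem: given a modified W, which generally violates the bilinear structure above, find the signal whose Wigner distribution is closest to it.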
8

Santos, Dorabella Martins da Silva. "Signal reconstruction in structures with two channels." Doctoral thesis, Universidade de Aveiro, 2007. http://hdl.handle.net/10773/2211.

Abstract:
Doctorate in Electrical Engineering
In ATM, as in real-time transmissions over IP networks, data are transmitted packet by packet. Lost or highly delayed packets lead to lost information in known locations (erasures). However, in some situations the error locations are not known and, therefore, error detection must be performed using a known polynomial. Error detection and correction are studied for digital signals in two-channel DFT codes, which present much better stability than their single-channel counterparts. For the two-channel structure, one channel processes an ordinary DFT code, while the other channel includes an interleaver, the main reason for the improvement in stability. The interleaver introduces randomness, and it is this randomness that is responsible for the good stability of these codes. The study of random codes helps confirm this statement. For analogue signals, the focus is on function and derivative sampling, where one channel processes samples of the signal and the other processes samples of its derivative. The oversampled expansion is presented and erasure recovery is studied. In this case, the stability of the two-channel structure when sample loss affects both channels is, in general, very poor. Additionally, the reconstruction of analogue as well as digital signals is dealt with for the integrate-and-fire converter model. The reconstruction makes use of the firing times and the threshold values inherent to the model and is made viable by means of an iterative method based on projections onto convex sets (POCS).
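The erasure-recovery mechanism of a DFT code can be sketched in a few lines: a length-K data block is embedded in the K low-frequency bins of an N-point inverse DFT (N > K), and up to N-K erased codeword samples are recovered by solving the remaining linear system. This single-channel sketch is illustrative only; the point of the thesis is that the conditioning of exactly this kind of system degrades for unfavourable erasure patterns, which the two-channel interleaved structure mitigates.

```python
import numpy as np

K, N = 8, 12
rng = np.random.default_rng(2)
c = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # data block (spectrum)
F = np.fft.ifft(np.eye(N), axis=0)[:, :K]                  # N x K synthesis matrix
y = F @ c                                                  # length-N codeword
erased = [1, 5, 9, 10]                                     # up to N - K erasures
keep = [i for i in range(N) if i not in erased]
# The surviving rows form a Vandermonde system in distinct roots of unity,
# hence full rank: the data, and thus the erased samples, are recoverable.
c_hat = np.linalg.lstsq(F[keep], y[keep], rcond=None)[0]
y_rec = F @ c_hat
```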
9

Sastry, Challa, Gilles Hennenfent, and Felix J. Herrmann. "Signal reconstruction from incomplete and misplaced measurements." European Association of Geoscientists & Engineers, 2007. http://hdl.handle.net/2429/550.

Abstract:
Constrained by practical and economical considerations, one often uses seismic data with missing traces. The use of such data results in image artifacts and poor spatial resolution. Sometimes, due to practical limitations, measurements may be available on a perturbed grid instead of on the designated grid. Due to algorithmic requirements, when such measurements are treated as if they were on the designated grid, the recovery procedures may produce additional artifacts. This paper interpolates incomplete data onto a regular grid via the Fourier domain, using a recently developed greedy algorithm. The basic objective is to study experimentally how large a perturbation in the measurement coordinates can be while still allowing the measurements on the perturbed grid to be treated as lying on the designated grid for faithful recovery. Our experimental work shows that for compressible signals, a uniformly distributed perturbation can be offset with a slightly larger number of measurements.
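The Fourier-domain interpolation idea can be sketched with a simple greedy/thresholding loop: alternate between keeping the largest DFT coefficients and re-imposing the observed samples. This is a hedged stand-in for the paper's greedy algorithm, not a reimplementation of it; signal, mask, and sparsity level are illustrative.

```python
import numpy as np

def sparse_fourier_inpaint(y, mask, k, n_iter=300):
    """Fill missing samples of a spectrally sparse signal: keep the k largest
    DFT coefficients, transform back, re-impose the observed samples, repeat."""
    x = y * mask
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[np.argsort(np.abs(X))[:-k]] = 0      # zero all but the k largest bins
        x = np.real(np.fft.ifft(X))
        x[mask] = y[mask]                       # data consistency on observed samples
    return x

rng = np.random.default_rng(3)
n = 128
t = np.arange(n)
y = np.cos(2 * np.pi * 5 * t / n) + 0.7 * np.sin(2 * np.pi * 12 * t / n)
mask = rng.random(n) > 0.3                      # ~30% of the traces missing
x_rec = sparse_fourier_inpaint(y, mask, k=4)    # two real tones -> 4 DFT bins
```

For the on-grid, exactly sparse signal above the iteration converges to the complete record; off-grid (perturbed-coordinate) measurements are precisely what degrades this picture and motivate the paper's experiments.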
10

Scrofani, James W. "Theory of multirate signal processing with application to signal and image reconstruction /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Sep%5FScrofani%5FPhD.pdf.

Abstract:
Thesis (Ph.D. in Electrical Engineering)--Naval Postgraduate School, September 2005.
Thesis Advisor(s): Charles W. Therrien. Includes bibliographical references (p. 125-132). Also available online.
11

Scrofani, James W. "Theory of multirate signal processing with application to signal and image reconstruction." Monterey, California.: Naval Postgraduate School, 2005, 2005. http://hdl.handle.net/10945/10049.

Abstract:
Signal processing methods for signals sampled at different rates are investigated and applied to the problem of signal and image reconstruction, or superresolution reconstruction. The problem is approached from the viewpoint of linear mean-square estimation theory and multirate signal processing for one- and two-dimensional signals. A new look is taken at multirate system theory in one and two dimensions, which provides the framework for these methodologies. A careful analysis of linear optimal filtering for problems involving different input and output sampling rates is conducted. This results in the development of index mapping techniques that simplify the formulation of the Wiener-Hopf equations whose solution determines the optimal filters. The required filters exhibit periodicity in both one and two dimensions, due to the difference in sampling rates. The reconstruction algorithms developed are applied to one- and two-dimensional reconstruction problems.
12

Venturini, Nicolas. "Experimental Broadband Signal Reconstruction for Plate-like Structures." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20470/.

Abstract:
In the Structural Health Monitoring (SHM) field, the Acoustic Emission (AE) technique is a passive method by which damage is localized and identified by capturing Lamb Waves (LW) signals propagating in a plate-like structure. The reconstruction of emitted signals from damage at the source location constitutes one of the main challenges faced by the SHM community. Recently, the application of a Frequencies Compensation Transfer Function (FCTF) has been used to reconstruct narrowband and broadband signals through a hybrid experimental and numerical Time Reversal (TR) process on aluminum plates. This study aims to reconstruct through experimental methods different types of narrowband and broadband signals on different plate-like structures making use of FCTF. In particular, Hanning Window (HW) and numerical broadband signals have been reconstructed for aluminum and steel plates. The results obtained in this study show how the FCTF method can be applied to different types of materials in plate-like structures. Moreover, the FCTF method has been applied on real broadband signals emitted by the Pencil Lead Break (PLB) technique and Rock Impact (RI) test. These last results prove that the FCTF method is able to compensate for the frequency changes on a single wave packet. Such results are fundamental, as they open the possibility to reconstruct any type of source signals emitted by any damage type.
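The time-reversal principle that the FCTF method builds on is easy to demonstrate in one dimension: re-emitting the time-reversed channel response through the same channel yields the channel's autocorrelation, which peaks sharply at the focusing instant. The toy response below is random and ignores plate dispersion; it illustrates only the focusing mechanism, not the thesis's experimental procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
h = rng.standard_normal(64)              # toy dispersive response of the plate
focused = np.convolve(h, h[::-1])        # retransmit the time-reversed response
peak = int(np.argmax(np.abs(focused)))   # energy refocuses at the zero-lag index
```

By the Cauchy-Schwarz inequality the zero-lag term ||h||² dominates, so the peak lands at index len(h) - 1 = 63. The FCTF compensates for the per-frequency amplitude changes that this idealised picture ignores, which is what makes broadband reconstruction possible.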
13

Lim, Taehyung. "Information-Theoretic Aspects of Signal Analysis and Reconstruction." Thesis, University of California, San Diego, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10637732.

Abstract:

The objective of this thesis is to develop a few approaches to wave theory of information. Specifically, this dissertation focuses on two special types of waveforms, bandlimited and multi-band signals. In both cases, we investigate the waveforms in the context of signal analysis and reconstructions.

In the first part of this thesis, we derive the amount of information that can be transmitted by bandlimited waveforms under perturbation, and the amount of information required to represent any bandlimited waveform within a specific accuracy. These goals can be studied using a stochastic approach or a deterministic approach. Despite their shared goal of mathematically describing communication using the transmission of waveforms, as well as the common geometric intuition behind their arguments, the two approaches to information theory have evolved separately. The stochastic approach flourished in the context of communication, becoming the pillar of modern digital technologies, while the deterministic approach impacted mostly mathematical analysis. Recent interest in deterministic models has arisen in the context of networked control theory, bringing renewed attention to the deterministic approach in information theory. However, in contrast with the stochastic approach, where tight results are already known, previous deterministic results provide only loose bounds. We improve on these results by deriving tight bounds, and compare our results with the stochastic ones, which reveals the intrinsic similarities of the two approaches.

In the second part of this dissertation, we derive the minimum number of measurements needed to reconstruct multi-band waveforms without any spectral information aside from the measure of the whole support set in the frequency domain. This problem is called the completely blind sensing problem and has been an open question. Until recently, partially blind sensing has commonly been performed instead, assuming some partial spectral information to be available a priori. We provide an answer to the completely blind sensing problem by deriving the minimum number of measurements that guarantees reconstruction. The blind sensing problem shares some similarities with the compressed sensing problem. Despite these similarities, due to its different setting, the blind sensing problem contains a few additional difficulties that are not present in the compressed sensing problem. We independently develop our own theory to solve the completely blind sensing problem, and compare our results to those of compressed sensing to reveal the similarities and differences between the two problems.

14

Safar, Felix G. "Signal compression and reconstruction using multiple bases representation." Thesis, This resource online, 1988. http://scholar.lib.vt.edu/theses/available/etd-06112009-063321/.

15

Moon, Thomas. "Testing and characterization of high-speed signals using incoherent undersampling driven signal reconstruction algorithms." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54326.

Abstract:
The objective of the proposed research is to develop a framework for signal reconstruction algorithms with sub-Nyquist sampling rates and a low-cost hardware design at the system level. A further objective of the proposed research is to monitor the device-under-test (DUT) and to adapt to its behavior. The key contribution of this research is that the high-speed signal acquisition is done by direct subsampling. As the signal is directly sampled without any front-end radio-frequency (RF) components such as mixers or filters, the cost of hardware is reduced. Furthermore, the distortion and the nonlinearity from the RF components can be avoided. The first proposed work is wideband signal reconstruction by dual-rate time-interleaved subsampling hardware and multi-coset signal reconstruction. Using the combination of the dual-rate hardware and the multi-coset algorithm, the number of sampling channels is significantly reduced compared to conventional multi-coset works. The second proposed work is jitter tracking by accurate period estimation with incoherent subsampling. In this work, the long-term jitter in a PRBS is tracked without hardware synchronization or clock-data-recovery (CDR) circuits. The third proposed work is eye-monitoring and time-domain reflectometry (TDR) by monobit receiver signal reconstruction. Using a monobit receiver based on incoherent subsampling and a time-variant threshold signal, high resolution of the reconstructed signal in both amplitude and time is achieved. Compared to a multibit receiver, the scalability of the test system is significantly increased.
16

Yamada, Randy Matthew. "Identification of Interfering Signals in Software Defined Radio Applications Using Sparse Signal Reconstruction Techniques." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/50609.

Abstract:
Software-defined radios have the agility and flexibility to tune performance parameters, allowing them to adapt to environmental changes, adapt to desired modes of operation, and provide varied functionality as needed. Traditional software-defined radios use a combination of conditional processing and software-tuned hardware to enable these features and will critically sample the spectrum to ensure that only the required bandwidth is digitized. While flexible, these systems are still constrained to perform only a single function at a time and digitize a single frequency sub-band at a time, possibly limiting the radio's effectiveness.
Radio systems commonly tune hardware manually or use software controls to digitize sub-bands as needed, critically sampling those sub-bands according to the Nyquist criterion. Recent technology advancements have enabled efficient and cost-effective over-sampling of the spectrum, allowing all bandwidths of interest to be captured for processing simultaneously, a process known as band-sampling. Simultaneous access to measurements from all of the frequency sub-bands enables both awareness of the spectrum and seamless operation between radio applications, which is critical to many applications. Further, more information may be obtained about the spectral content of each sub-band from measurements of other sub-bands, which could improve performance in applications such as detecting the presence of interference in weak-signal measurements.
This thesis presents a new method for confirming the source of detected energy in weak signal measurements by sampling them directly, then estimating their expected effects.  First, we assume that the detected signal is located within the frequency band as measured, and then we assume that the detected signal is, in fact, interference perceived as a result of signal aliasing.  By comparing the expected effects to the entire measurement and assuming the power spectral density of the digitized bandwidth is sparse, we demonstrate the capability to identify the true source of the detected energy.  We also demonstrate the ability of the method to identify interfering signals not by explicitly sampling them, but rather by measuring the signal aliases that they produce.  Finally, we demonstrate that by leveraging techniques developed in the field of Compressed Sensing, the method can recover signal aliases by analyzing less than 25 percent of the total spectrum.
Master of Science
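The aliasing relationship this method exploits is simple to state: a tone at frequency f sampled at rate fs appears at min(f mod fs, fs - f mod fs). A hypothetical numeric example (frequencies chosen purely for illustration, not taken from the thesis):

```python
import numpy as np

def alias_frequency(f, fs):
    """Apparent frequency of a tone at f Hz after sampling at fs Hz."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# A 9.5 MHz interferer sampled at 4 MHz lands at 1.5 MHz, indistinguishable
# by frequency alone from a genuine in-band 1.5 MHz signal.
fs, f = 4e6, 9.5e6
alias = alias_frequency(f, fs)
N = 1024
xs = np.sin(2 * np.pi * f * np.arange(N) / fs)
peak_bin = int(np.argmax(np.abs(np.fft.rfft(xs))))   # at bin alias / fs * N
```

Distinguishing a true in-band signal from such an alias is exactly the ambiguity that the sparse-reconstruction comparison of expected effects resolves.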
17

Dong, Jing. "Sparse analysis model based dictionary learning and signal reconstruction." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/811095/.

Abstract:
Sparse representation has been studied extensively in the past decade in a variety of applications, such as denoising, source separation and classification. Earlier efforts focused on the well-known synthesis model, where a signal is decomposed as a linear combination of a few atoms of a dictionary. However, the analysis model, a counterpart of the synthesis model, did not receive much attention until recent years. The analysis model takes a different viewpoint on sparse representation: it assumes that the product of an analysis dictionary and a signal is sparse. Compared with the synthesis model, this model tends to be more expressive in representing signals, as a much richer union of subspaces can be described. This thesis focuses on the analysis model and aims to address its two main challenges: analysis dictionary learning (ADL) and signal reconstruction. In the ADL problem, the dictionary is learned from a set of training samples so that the signals can be represented sparsely under the analysis model, thus offering the potential to fit the signals better than pre-defined dictionaries. Among the existing ADL algorithms, such as the well-known Analysis K-SVD, the dictionary atoms are updated sequentially. The first part of this thesis presents two novel analysis dictionary learning algorithms that update the atoms simultaneously. Specifically, the Analysis Simultaneous Codeword Optimization (Analysis SimCO) algorithm is proposed, by adapting the SimCO algorithm originally proposed for the synthesis model. In Analysis SimCO, the dictionary is updated using optimization on manifolds, under $\ell_2$-norm constraints on the dictionary atoms. This framework allows multiple dictionary atoms to be updated simultaneously in each iteration. However, similar to the existing ADL algorithms, the dictionary learned by Analysis SimCO may contain similar atoms.
To address this issue, Incoherent Analysis SimCO is proposed by employing a coherence constraint and introducing a decorrelation step to enforce this constraint. The competitive performance of the proposed algorithms is demonstrated in the experiments for recovering synthetic dictionaries and removing additional noise in images, as compared with existing ADL methods. The second part of this thesis studies how to reconstruct signals with learned dictionaries under the analysis model. This is demonstrated by a challenging application problem: multiplicative noise removal (MNR) of images. In the existing sparsity motivated methods, the MNR problem is addressed using pre-defined dictionaries, or learned dictionaries based on the synthesis model. However, the potential of analysis dictionary learning for the MNR problem has not been investigated. In this thesis, analysis dictionary learning is applied to MNR, leading to two new algorithms. In the first algorithm, a dictionary learned based on the analysis model is employed to form a regularization term, which can preserve image details while removing multiplicative noise. In the second algorithm, in order to further improve the recovery quality of smooth areas in images, a smoothness regularizer is introduced to the reconstruction formulation. This regularizer can be seen as an enhanced Total Variation (TV) term with an additional parameter controlling the level of smoothness. To address the optimization problem of this model, the Alternating Direction Method of Multipliers (ADMM) is adapted and a relaxation technique is developed to allow variables to be updated flexibly. Experimental results show the superior performance of the proposed algorithms as compared with three sparsity or TV based algorithms for a range of noise levels.
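The analysis-model assumption, that the product of an analysis dictionary Ω and the signal is sparse, can be seen concretely with the finite-difference operator, arguably the simplest analysis dictionary: for a piecewise-constant signal, Ωx is nonzero only at the jumps. The signal below is a made-up illustration.

```python
import numpy as np

# Piecewise-constant signal with two jumps
n = 100
x = np.concatenate([np.full(30, 2.0), np.full(40, -1.0), np.full(30, 0.5)])
# First-difference analysis operator: (Omega x)[i] = x[i+1] - x[i]
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
z = Omega @ x
n_nonzero = int(np.count_nonzero(z))     # sparse: nonzero only at the two jumps
```

Learning Ω from data, instead of fixing it to a difference operator as here, is exactly the ADL problem the thesis addresses; the Total Variation regularizer mentioned above corresponds to penalising the l1 norm of this same product.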
APA, Harvard, Vancouver, ISO, and other styles
18

Sundaramoorthy, Gopalakrishnan. "Improved techniques for bispectral reconstruction of signals /." Online version of print, 1990. http://hdl.handle.net/1850/11456.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Cheng, Jiayi. "Investigating signal denoising and iterative reconstruction algorithms in photoacoustic tomography." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62689.

Full text
Abstract:
Photoacoustic tomography (PAT) is a promising biomedical imaging modality that achieves strong optical contrast and high ultrasound resolution. The technique is based on the photoacoustic (PA) effect: the tissue is illuminated by a nanosecond pulsed laser and generates acoustic waves through thermoelastic expansion. By detecting the PA waves, the initial pressure distribution, which corresponds to the optical absorption map, can be obtained by a reconstruction algorithm. In a data acquisition system based on a linear array transducer, the PA signals are contaminated with various types of noise. The reconstruction also suffers from artifacts and missing structures due to the limited detection view. We aim to reduce the effect of noise by a denoising preprocessing step. A PAT system with a linear array transducer and a parallel data acquisition system (DAQ) exhibits prominent band-shaped noise due to signal interference. The band-shaped noise is treated as a low-rank matrix and the pure PA signal as a sparse matrix. The robust principal component analysis (RPCA) algorithm is applied to extract the pure PA signal from the noise-contaminated PA measurement. The RPCA approach is applied to experimental data from different samples; its denoising results are compared with several other methods, and RPCA is shown to outperform them. This demonstrates that RPCA is promising for reducing background noise in PA image reconstruction. We also aim to improve iterative reconstruction. The variance-reduced stochastic gradient descent (VR-SGD) algorithm is implemented for PAT reconstruction. A new forward projection matrix is also developed to match the measurement data more accurately. Using different evaluation criteria, such as peak signal-to-noise ratio (PSNR), relative root-mean-square of reconstruction error (RRMSE) and line-profile comparisons, the reconstructions from various iterative algorithms are compared. 
The advantages of VR-SGD are demonstrated on both simulated and experimental data. Our results indicate that VR-SGD, in combination with the accurate projection matrix, can lead to improved reconstruction in a small number of iterations. RPCA denoising and VR-SGD iterative reconstruction have been implemented in PAT, and our results show that both are promising approaches to improving image reconstruction quality in PAT.
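The low-rank plus sparse split behind the RPCA denoising step can be sketched as follows. This is an illustrative inexact augmented Lagrangian implementation of principal component pursuit with common default parameters, not the thesis code:

```python
import numpy as np

def rpca(M, lam=None, n_iter=100):
    """Principal component pursuit via an inexact augmented Lagrangian:
    split M into a low-rank part L and a sparse part S with M ~ L + S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / np.abs(M).sum()
    mu_bar = mu * 1e7
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(n_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # sparse update: elementwise soft thresholding
        S = shrink(M - L + Y / mu, lam / mu)
        # dual update and penalty growth
        Y += mu * (M - L - S)
        mu = min(mu * 1.2, mu_bar)
    return L, S
```

On a matrix built as a rank-one component plus a few isolated spikes, the two parts separate cleanly, which is the mechanism used above to split band-shaped noise from the PA signal.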
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
20

Lopes, David Manuel Baptista. "Signal reconstruction from partial or modified linear time frequency representations." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Carrión, García Alicia. "Signal Modality Characterization: from Phase Space Reconstruction to Real Applications." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/106366.

Full text
Abstract:
The characterization of the modality of a signal is a new concept and the subject of recent research. Its main purpose is to identify changes in the nature of a real signal. The term 'nature of a signal' refers to the underlying model that generates the signal, viewed through two main characteristics: determinism and linearity. In this thesis, the modality of a signal is used for the advanced processing of acoustic signals, in particular in non-destructive testing of non-homogeneous materials such as concrete. The problem of characterizing the modality begins with the correct reconstruction of the phase space (Chapter 2). This new domain allows the different states of a signal to be identified; these are recurrent or not depending on whether the signal is deterministic or random. In the field of non-destructive testing based on ultrasound, the material is excited with a purely deterministic signal; the nature of the received signal, however, depends on the internal structure of the material. This working hypothesis allows us to propose the degree of determinism as a complementary alternative to the usual ultrasound parameters, such as attenuation and speed. The level of determinism has been found to be proportional to the level of porosity in cementitious materials (Chapter 3). It also allows characterizing the level of damage of mortar specimens subjected to different kinds of damaging processes: external sulphate attack and loading processes (Chapter 4). The study of the non-linearity or complexity of a time series is initially approached blindly (without information about the input signal) through hypothesis tests: generating surrogate data and applying a statistical test. Significant progress has been made in adapting this approach to non-stationary data, a common feature of real non-linear signals. 
The main results in this regard have been achieved in characterizing the complexity of oscillatory signals of limited duration (Chapter 5). The concept of signal modality has also been used to perform a detailed study of the non-linear phenomenon of acoustic impact spectroscopy. This analysis has made it possible to understand the variables involved and thus to propose a mathematical model that characterizes the phenomenon. The understanding of the phenomenon and the model have allowed a new processing algorithm to be proposed, equivalent to the usual NIRAS technique but optimal in its application. This processing alternative may bring significant advances, especially in industrial applications where time and effort are variables to be optimized (Chapter 6). This thesis demonstrates that the characterization of the modality of a signal not only offers an alternative for characterizing complicated real phenomena, but also opens a new research perspective within the field of signal processing. The measure of determinism and the FANSIRAS algorithm have shown that the modality of a signal is an interesting tool for future research into the characterization of cementitious materials.
Carrión García, A. (2018). Signal Modality Characterization: from Phase Space Reconstruction to Real Applications [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/106366
TESIS
APA, Harvard, Vancouver, ISO, and other styles
22

姚佩雯 and Pui-man Yiu. "Multiplier-less sinusoidal transformations and their applications to perfect reconstruction filter banks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31228045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Koppers, Simon [Verfasser], Dorit [Akademischer Betreuer] Merhof, and Thomas [Akademischer Betreuer] Schultz. "Signal enhancement and signal reconstruction for diffusion imaging using deep learning / Simon Koppers ; Dorit Merhof, Thomas Schultz." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1218727691/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Nekkanti, Veera Venkata Satyanarayana, and Kaushik Sai Srinivas Nalajala. "Super Resolution Image Reconstruction for Indian Remote Sensing Satellite (Cartosat-1)." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Rundgren, Emil. "Automatic Volume Estimation of Timber from Multi-View Stereo 3D Reconstruction." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142513.

Full text
Abstract:
The ability to automatically estimate the volume of timber is becoming increasingly important within the timber industry. The large number of timber trucks arriving each day at Swedish timber terminals reinforces the need for volume estimation performed in real time, on the go, as the trucks arrive. This thesis investigates whether volumetric integration of disparity maps acquired from a Multi-View Stereo (MVS) system is a suitable approach for automatic volume estimation of timber loads. As real-time execution is preferred, efforts were made to provide a scalable method. The proposed method was quantitatively evaluated on datasets containing two geometric objects of known volume. A qualitative comparison to manual volume estimates of timber loads was also made on datasets recorded at a Swedish timber terminal. The proposed method is shown to be both accurate and precise under specific circumstances. However, robustness to varying weather conditions is poor, although a more thorough evaluation of this aspect remains to be performed. The method is also parallelizable, which means that future efforts can significantly decrease execution time.
APA, Harvard, Vancouver, ISO, and other styles
26

Nurdan, Kıvanç. "Data acquisition, event building and signal reconstruction for Compton camera imaging." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=979142636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Cena, Bernard Maria. "Reconstruction for visualisation of discrete data fields using wavelet signal processing." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0014.

Full text
Abstract:
The reconstruction of a function and its derivative from a set of measured samples is a fundamental operation in visualisation. Multiresolution techniques, such as wavelet signal processing, are instrumental in improving the performance and algorithm design for data analysis, filtering and processing. This dissertation explores the possibilities of combining traditional multiresolution analysis and processing features of wavelets with the design of appropriate filters for reconstruction of sampled data. On the one hand, a multiresolution system allows data feature detection, analysis and filtering. Wavelets have already been proven successful in these tasks. On the other hand, a choice of discrete filter which converges to a continuous basis function under iteration permits efficient and accurate function representation by providing a “bridge” from the discrete to the continuous. A function representation method capable of both multiresolution analysis and accurate reconstruction of the underlying measured function would make a valuable tool for scientific visualisation. The aim of this dissertation is not to try to outperform existing filters designed specifically for reconstruction of sampled functions. The goal is to design a wavelet filter family which, while retaining properties necessary to perform multiresolution analysis, possesses features to enable the wavelets to be used as efficient and accurate “building blocks” for function representation. The application to visualisation is used as a means of practical demonstration of the results. Wavelet and visualisation filter design is analysed in the first part of this dissertation and a list of wavelet filter design criteria for visualisation is collated. Candidate wavelet filters are constructed based on a parameter space search of the BC-spline family and direct solution of equations describing filter properties. 
Further, a biorthogonal wavelet filter family is constructed based on point and average interpolating subdivision and using the lifting scheme. The main feature of these filters is their ability to reconstruct arbitrary degree piecewise polynomial functions and their derivatives using measured samples as direct input into a wavelet transform. The lifting scheme provides an intuitive, interval-adapted, time-domain filter and transform construction method. A generalised factorisation for arbitrary primal and dual order point and average interpolating filters is a result of the lifting construction. The proposed visualisation filter family is analysed quantitatively and qualitatively in the final part of the dissertation. Results from wavelet theory are used in the analysis which allow comparisons among wavelet filter families and between wavelets and filters designed specifically for reconstruction for visualisation. Lastly, the performance of the constructed wavelet filters is demonstrated in the visualisation context. One-dimensional signals are used to illustrate reconstruction performance of the wavelet filter family from noiseless and noisy samples in comparison to other wavelet filters and dedicated visualisation filters. The proposed wavelet filters converge to basis functions capable of reproducing functions that can be represented locally by arbitrary order piecewise polynomials. They are interpolating, smooth and provide asymptotically optimal reconstruction in the case when samples are used directly as wavelet coefficients. The reconstruction performance of the proposed wavelet filter family approaches that of continuous spatial domain filters designed specifically for reconstruction for visualisation. This is achieved in addition to retaining multiresolution analysis and processing properties of wavelets.
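The lifting scheme mentioned above (split the samples, predict the odd ones from the even ones, update the evens to preserve averages) can be sketched with the simplest case, the Haar wavelet; the dissertation builds point and average interpolating filters of arbitrary order on the same pattern. The code below is an illustrative sketch, not the dissertation's filters:

```python
import numpy as np

def haar_lift(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: odd samples from even ones (details)
    s = even + d / 2        # update: even samples become pairwise means
    return s, d

def haar_unlift(s, d):
    """Invert by running the lifting steps backwards."""
    even = s - d / 2
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 3.0])
s, d = haar_lift(x)
assert np.allclose(haar_unlift(s, d), x)   # perfect reconstruction
```

Because each lifting step is inverted by simply reversing it, perfect reconstruction holds by construction, which is what makes lifting an intuitive, interval-adapted, time-domain construction method.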
APA, Harvard, Vancouver, ISO, and other styles
28

Seifi, Mozhdeh. "Signal processing methods for fast and accurate reconstruction of digital holograms." Phd thesis, Université Jean Monnet - Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-01004605.

Full text
Abstract:
Techniques for fast, 3D, quantitative microscopy are of great interest in many fields. In this context, in-line digital holography has significant potential due to its relatively simple setup (lensless imaging), its three-dimensional character and its temporal resolution. The goal of this thesis is to improve existing hologram reconstruction techniques through an "inverse problems" approach. For applications involving objects with parametric shapes, a greedy algorithm has previously been proposed which solves the (inherently ill-posed) inverse problem of reconstruction by maximizing the likelihood between a model of holographic patterns and the measured data. The first contribution of this thesis is to reduce the computational cost of this algorithm using a multi-resolution approach (FAST algorithm). As a second contribution, a "matching pursuit" type of pattern recognition approach is proposed for hologram reconstruction of volumes containing parametric objects, or non-parametric objects drawn from a few shape classes. This method finds the set of diffraction patterns closest to the measured data using a diffraction pattern dictionary. The size of the dictionary is reduced by employing a truncated singular value decomposition, yielding a low-cost algorithm. The third contribution of this thesis was carried out in collaboration with the Laboratory of Fluid Mechanics and Acoustics of Lyon (LMFA): the greedy algorithm is used in a real application, the reconstruction and tracking of free-falling, evaporating ether droplets. In all the proposed methods, special attention has been paid to improving the accuracy of reconstruction as well as to reducing the computational cost and the number of parameters to be tuned by the user, so that the proposed algorithms can be used with little or no supervision. A Matlab® toolbox (accessible online) has been developed as part of this thesis.
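The matching-pursuit step can be sketched generically. The toy below uses a trivial identity dictionary rather than the diffraction-pattern dictionary (compressed by truncated SVD) used in the thesis, so it only illustrates the greedy selection loop:

```python
import numpy as np

def matching_pursuit(y, D, n_atoms=3):
    """Greedily select the dictionary atom (column of D) most correlated
    with the residual, subtract its contribution, and repeat."""
    D = D / np.linalg.norm(D, axis=0)     # unit-norm atoms
    r = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ r)))   # best-matching atom
        c = float(D[:, k] @ r)
        coeffs[k] += c
        r -= c * D[:, k]
    return coeffs, r

# toy check: with an orthonormal dictionary the sparse code is recovered
y = np.array([0.0, 2.0, 0.0, 0.0, 5.0, 0.0])
coeffs, r = matching_pursuit(y, np.eye(6))
print(coeffs)   # -> [0. 2. 0. 0. 5. 0.]
```

With correlated atoms, as in a real diffraction-pattern dictionary, the loop is the same; only the inner products change.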
APA, Harvard, Vancouver, ISO, and other styles
29

Malioutov, Dmitry M. 1981. "A sparse signal reconstruction perspective for source localization with sensor arrays." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87445.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 167-172).
by Dmitry M. Malioutov.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
30

Ghannam, Fouzia. "Reconstruction de signal par convolution inverse ; application à un problème thermique." Poitiers, 2000. http://www.theses.fr/2000POIT2280.

Full text
Abstract:
The research presented in this thesis concerns the reconstruction of unobservable physical quantities from measurements of other variables related to them by a known mathematical model. In the linear case, this reconstruction, given the impulse response of the system, is a numerical deconvolution. It is well known that direct deconvolution is an ill-posed inverse problem, manifested as an amplification of the perturbations affecting the output measurements. An acceptable solution is obtained by modifying the numerical conditioning of the problem; the Tikhonov regularization procedure is certainly the best known technique for solving inverse problems. The approach treated in this thesis consists of substituting an inverse convolution for the matrix inversion required by the Tikhonov technique, through the explicit determination of a non-causal inverse impulse response. The latter is obtained by a matrix inversion, but of much smaller dimension than that of the initial problem; moreover, a recursive inversion technique can be used to advantage. The simplicity and speed of this computation method made it possible to optimize the regularization coefficient using a quadratic criterion expressing the necessary trade-off between filtering out the perturbation and degrading the input through the regularization procedure. A suboptimal solution is also proposed: it simplifies the optimization of the criterion while providing an acceptable value of the regularization coefficient. This methodology was then applied to a physical system based on the heat conduction equation. A simulation study illustrated and tested the properties of the proposed methodology in an academic setting. 
Finally, an experimental study on a laboratory pilot validated the complete procedure, combining output-error identification with reconstruction of the excitation by inverse convolution.
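The trade-off described above, between direct deconvolution amplifying measurement noise and Tikhonov regularization stabilizing the inversion, can be reproduced in a few lines. This is an illustrative sketch with an assumed Gaussian blur kernel, not the thesis's thermal model or its inverse impulse response method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.zeros(n); x[30:60] = 1.0                          # unknown input (a step)
k = np.arange(15)
h = np.exp(-0.5 * ((k - 7) / 2.5) ** 2); h /= h.sum()    # smoothing impulse response
H = np.array([[h[i - j] if 0 <= i - j < h.size else 0.0
               for j in range(n)] for i in range(n)])    # Toeplitz convolution operator
y = H @ x + 0.01 * rng.standard_normal(n)                # noisy output measurements

# direct inversion: ill-posed, the noise is massively amplified
x_direct = np.linalg.solve(H, y)

# Tikhonov regularization: minimise ||Hx - y||^2 + alpha * ||x||^2
alpha = 1e-2
x_tik = np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

err_direct = np.linalg.norm(x_direct - x)
err_tik = np.linalg.norm(x_tik - x)
print(err_tik < err_direct)   # -> True
```

The regularization coefficient `alpha` plays exactly the role discussed in the abstract: too small and the noise dominates, too large and the reconstructed input is over-smoothed.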
APA, Harvard, Vancouver, ISO, and other styles
31

Jidling, Carl. "Tailoring Gaussian processes for tomographic reconstruction." Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-394093.

Full text
Abstract:
A probabilistic model reasons about physical quantities as random variables that can be estimated from measured data. The Gaussian process is a respected member of this family, being a flexible non-parametric method that has proven strong capabilities in modelling a wide range of nonlinear functions. This thesis focuses on advanced Gaussian process techniques; the contributions consist of practical methodologies primarily intended for inverse tomographic applications. In our most theoretical formulation, we propose a constructive procedure for building a customised covariance function given any set of linear constraints. These are explicitly incorporated in the prior distribution and thereby guaranteed to be fulfilled by the prediction. One such construction is employed for strain field reconstruction, to which end we successfully introduce the Gaussian process framework. A particularly well-suited spectral approximation method is used to obtain a significant reduction of the computational load. The formulation has seen several subsequent extensions, represented in this thesis by a generalisation that includes boundary information and uses variational inference to overcome the challenge posed by a nonlinear measurement model. We also consider X-ray computed tomography, a field of high importance primarily due to its central role in medical treatments. We use the Gaussian process to provide an alternative interpretation of traditional algorithms and demonstrate promising experimental results. Moreover, we turn our focus to deep kernel learning, a special construction in which the expressiveness of a standard covariance function is increased through a neural network input transformation. We develop a method that makes this approach computationally feasible for integral measurements, and the results indicate a high potential for computed tomography problems.
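For readers new to the framework, a minimal Gaussian process regression sketch with a standard squared-exponential covariance is given below; the thesis builds constrained covariances and spectral approximations on top of exactly this machinery. The kernel parameters and data are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    """Squared-exponential covariance k(a, b) = sf^2 exp(-(a-b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# a handful of near-noiseless observations of a latent function
X = np.array([-1.5, -0.8, 0.1, 0.9, 1.6])
y = np.sin(2 * X)
Xs = np.linspace(-2, 2, 50)                       # prediction grid

noise = 1e-4
K = rbf(X, X) + noise * np.eye(X.size)            # train covariance + noise
Ks = rbf(Xs, X)                                   # test/train cross-covariance
mean = Ks @ np.linalg.solve(K, y)                 # posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T) # posterior covariance
```

Linear constraints enter this picture by replacing `rbf` with a covariance constructed so that every sample (and hence the posterior mean) satisfies the constraints, which is the construction the thesis formalises.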
APA, Harvard, Vancouver, ISO, and other styles
32

Yngesjö, Tim. "3D Reconstruction from Satellite Imagery Using Deep Learning." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176622.

Full text
Abstract:
Learning-based multi-view stereo (MVS) has shown promising results in the domain of general 3D reconstruction. However, no work before this thesis has applied learning-based MVS to urban 3D reconstruction from satellite images. In this thesis, learning-based MVS is used to infer depth maps from satellite images. Models are trained on both synthetic and real satellite images of Las Vegas, with ground truth data from a high-resolution aerial-based 3D model. This thesis also evaluates different methods for reconstructing digital surface models (DSM) and compares them to existing satellite-based 3D models at Maxar Technologies. The DSMs are created either by post-processing point clouds obtained from predicted depth maps or by an end-to-end approach in which the depth map for an orthographic satellite image is predicted.  This thesis concludes that learning-based MVS can be used to predict accurate depth maps. Models trained on synthetic data yielded relatively good results, but not nearly as good as models trained on real satellite images. The trained models also generalize relatively well to cities not present in training. This thesis also concludes that the reconstructed DSMs achieve better quantitative results than the existing 3D model in Las Vegas, and similar results for the test sets from other cities. Compared to ground truth, the best-performing method achieved L1 and L2 errors 14 % and 29 % lower, respectively, than Maxar's current 3D model. The method that uses a point cloud as an intermediate step achieves better quantitative results than the end-to-end system. Very promising qualitative results are achieved with the proposed methods, especially when utilizing an end-to-end approach.
APA, Harvard, Vancouver, ISO, and other styles
33

Davis, Carlos Clifford Jr. "Iterative algorithms for the reconstruction of multidimensional signals from their projections." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Taji, Bahareh. "Reconstruction of ECG Signals Acquired with Conductive Textile Electrodes." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26303.

Full text
Abstract:
Physicians’ understanding of bio-signals, measured using medical instruments, forms the foundation of their decisions and diagnoses, as they rely strongly on what the instruments show. It is therefore critical to ensure that an instrument's readings reflect what is actually happening in the patient's body, so that the detected signal is the real one, or at least as close to the real in-body signal as possible, and carries all of the appropriate information. This is such an important issue that physicians sometimes resort to invasive measurements in order to obtain the real bio-signal. Recovering the in-body signal from what a measurement device shows is called “signal purification” or “reconstruction,” and can be done only when we have adequate information about the interface between the body and the monitoring device. In this research, we first present a device that we developed for electrocardiogram (ECG) acquisition and transfer to a PC. In order to evaluate the performance of the device, we use it to measure ECG with conductive textile as our ECG electrode. We then evaluate ECG signals captured by different electrodes, specifically traditional gel Ag/AgCl and dry gold-plated electrodes, and compare the results. Next, we propose a method to reconstruct the ECG signal from the signal detected by our device, with respect to the interface characteristics and their relation to the detected ECG. The interface in this study is the skin-electrode interface for conductive textiles. In the last stage of this work, we explore the effects of pressure on skin-electrode interface impedance and its parametric variation.
APA, Harvard, Vancouver, ISO, and other styles
35

Knapp, Bettina, and Lars Kaderali. "Reconstruction of Cellular Signal Transduction Networks Using Perturbation Assays and Linear Programming." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-127239.

Full text
Abstract:
Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance over state-of-the-art methods, in particular on larger networks. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4+ T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.
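The linear-programming formulation can be illustrated generically: an l1 (sparsity) objective over signed edge weights becomes a linear program after splitting each weight into positive and negative parts (w = w_plus - w_minus). The toy system below, with an assumed design matrix `A` mapping edge weights to perturbation responses, is hypothetical and is not the authors' exact model:

```python
import numpy as np
from scipy.optimize import linprog

# toy perturbation-response system: b = A @ w_true, with w_true sparse
A = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
w_true = np.array([0.0, 2.0, 0.0, -1.0])
b = A @ w_true

# minimise sum(w_plus + w_minus) = ||w||_1  s.t.  A (w_plus - w_minus) = b
n = A.shape[1]
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
w_hat = res.x[:n] - res.x[n:]
print(np.round(w_hat, 6))   # recovers w_true up to solver tolerance
```

Because both the objective and the constraints are linear, off-the-shelf LP solvers handle the problem efficiently even when the number of candidate edges is large, which is the scalability argument made in the abstract.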
APA, Harvard, Vancouver, ISO, and other styles
36

Feng, Z. "A signal processing method for the acoustic image reconstruction of planar objects." Thesis, University of Portsmouth, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234728.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Samarasinghe, Kasun M. "Sparse Signal Reconstruction Modeling for MEG Source Localization Using Non-convex Regularizers." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439304367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Knapp, Bettina, and Lars Kaderali. "Reconstruction of Cellular Signal Transduction Networks Using Perturbation Assays and Linear Programming." Public Library of Science, 2013. https://tud.qucosa.de/id/qucosa%3A27289.

Full text
Abstract:
Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance, in particular on larger networks, over state-of-the-art methods. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4+ T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.
APA, Harvard, Vancouver, ISO, and other styles
39

Tappenden, Rachael Elizabeth Helen. "Development & Implementation of Algorithms for Fast Image Reconstruction." Thesis, University of Canterbury. Mathematics and Statistics, 2011. http://hdl.handle.net/10092/5998.

Full text
Abstract:
Signal and image processing is important in a wide range of areas, including medical and astronomical imaging, and speech and acoustic signal processing. There is often a need for the reconstruction of these objects to be very fast, as they have some cost (perhaps a monetary cost, although often it is a time cost) attached to them. This work considers the development of algorithms that allow these signals and images to be reconstructed quickly and without perceptual quality loss. The main problem considered here is that of reducing the amount of time needed for images to be reconstructed, by decreasing the amount of data necessary for a high quality image to be produced. In addressing this problem two basic ideas are considered. The first is a subset selection problem where the aim is to extract a subset of data, of a predetermined size, from a much larger data set. To do this we first need some metric with which to measure how 'good' (or how close to 'best') a data subset is. Then, using this metric, we seek an algorithm that selects an appropriate data subset from which an accurate image can be reconstructed. Current algorithms use a criterion based upon the trace of a matrix. In this work we derive a simpler criterion based upon the determinant of a matrix. We construct two new algorithms based upon this new criterion and provide numerical results to demonstrate their accuracy and efficiency. A row exchange strategy is also described, which takes a given subset and performs interchanges to improve the quality of the selected subset. The second idea is, given a reduced set of data, how can we quickly reconstruct an accurate signal or image? Compressed sensing provides a mathematical framework that explains that if a signal or image is known to be sparse relative to some basis, then it may be accurately reconstructed from a reduced set of data measurements. The reconstruction process can be posed as a convex optimization problem.
We introduce an algorithm that aims to solve the corresponding problem and accurately reconstruct the desired signal or image. The algorithm is based upon the Barzilai-Borwein algorithm and tailored specifically to the compressed sensing framework. Numerical experiments show that the algorithm is competitive with currently used algorithms. Following the success of compressed sensing for sparse signal reconstruction, we consider whether it is possible to reconstruct other signals with certain structures from reduced data sets. Specifically, signals that are a combination of a piecewise constant part and a sparse component are considered. A reconstruction process for signals of this type is detailed and numerical results are presented.
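The Barzilai-Borwein idea mentioned above can be sketched for the l1-regularized least-squares form of the compressed sensing problem. The code below is a generic illustration with a monotone fallback safeguard, not the thesis's tailored algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bb_ista(A, b, lam, n_iter=200):
    """Sketch of a proximal-gradient solver for
    min 0.5*||Ax - b||^2 + lam*||x||_1 with a Barzilai-Borwein step size
    (in the spirit of, not identical to, the thesis algorithm)."""
    def obj(x):
        return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    alpha = 1.0 / L
    for _ in range(n_iter):
        x_new = soft_threshold(x - alpha * g, alpha * lam)
        if obj(x_new) > obj(x):            # safeguard: fall back to a monotone 1/L step
            x_new = soft_threshold(x - g / L, lam / L)
        g_new = A.T @ (A @ x_new - b)
        s, yv = x_new - x, g_new - g
        sy = s @ yv
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0 / L  # BB step length
        x, g = x_new, g_new
    return x

# Synthetic check: recover a 2-sparse vector from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -2.0]
b = A @ x_true
x_hat = bb_ista(A, b, lam=0.1)
```

The BB step approximates the local curvature from successive iterates and gradients, which is what gives these methods their speed advantage over a fixed 1/L step.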
APA, Harvard, Vancouver, ISO, and other styles
40

Johnson, David Graham. "Complex target reconstruction using near-field synthetic aperture radar." Thesis, The University of Sydney, 2009. http://hdl.handle.net/2123/18351.

Full text
Abstract:
This thesis describes the development of a prototype imaging radar system intended for discriminating between rocks of varying size under the severe environmental conditions typically present on a mine site. Synthetic aperture radar (SAR) systems allow high-resolution measurements to be made using the motion of the sensing platform to provide at least one of the sensing degrees of freedom. They therefore provide a means of reducing the number and cost of mechanically actuated components, which is particularly beneficial in a harsh environment such as a mine. A 3-D near-field imaging radar system has been developed utilising the highest available component bandwidth of 2-18 GHz. This has allowed two-target range discrimination performance of better than 20 mm to be obtained over a single swept-frequency measurement within a custom-built anechoic test chamber. Bistatic antennas in an inverse-SAR configuration have been used to demonstrate the concept of a multistatic spherical-SAR system, which with a single transmitter has advantages in both cost and complexity over the equivalent multiple-transmit/receive configuration. The Fourier-domain point-target focussing templates for this bistatic configuration have then been derived using the method of stationary phase, based on the earlier work of Fortuny. Further algorithms have then been developed for the calculation of templates corresponding to spheres of a particular radius, for both monostatic and bistatic configurations. Full 3-D reconstruction of complex-target topography has then been achieved through a novel sphere-summation process, with extensive simulated and experimental results obtained and analysed for both a set of spheres and a more realistic scenario consisting of a pile of rocks of varying shape and size.
APA, Harvard, Vancouver, ISO, and other styles
41

Ostrovskii, Dmitrii. "Reconstruction adaptative des signaux par optimisation convexe." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM004/document.

Full text
Abstract:
Nous considérons le problème de débruitage d'un signal ou d'une image observés dans le bruit gaussien. Dans ce problème, les estimateurs linéaires classiques sont quasi-optimaux quand l'ensemble des signaux, qui doit être convexe et compact, est connu a priori. Si cet ensemble n'est pas spécifié, la conception d'un estimateur adaptatif qui « ne connaît pas » la structure cachée du signal reste un problème difficile. Dans cette thèse, nous étudions une nouvelle famille d'estimateurs des signaux satisfaisant certaines propriétés d'invariance dans le temps. De tels signaux sont caractérisés par leur structure harmonique, qui est généralement inconnue dans la pratique. Nous proposons de nouveaux estimateurs capables d'exploiter la structure harmonique inconnue du signal à reconstruire. Nous démontrons que ces estimateurs satisfont diverses « inégalités d'oracle », et nous proposons une implémentation algorithmique numériquement efficace de ces estimateurs, basée sur des algorithmes d'optimisation du premier ordre. Nous évaluons ces estimateurs sur des données synthétiques ainsi que sur des signaux et des images réels.
We consider the problem of denoising a signal observed in Gaussian noise. In this problem, classical linear estimators are quasi-optimal provided that the set of possible signals is convex, compact, and known a priori. However, when the set is unspecified, designing an estimator which does not 'know' the underlying structure of a signal, yet has favorable theoretical guarantees of statistical performance, remains a challenging problem. In this thesis, we study a new family of estimators for statistical recovery of signals satisfying certain time-invariance properties. Such signals are characterized by their harmonic structure, which is usually unknown in practice. We propose new estimators capable of exploiting the unknown harmonic structure of the signal to be reconstructed. We demonstrate that these estimators admit theoretical performance guarantees, in the form of oracle inequalities, in a variety of settings. We provide efficient algorithmic implementations of these estimators via first-order optimization algorithms with non-Euclidean geometry, and evaluate them on synthetic data as well as on real-world signals and images.
APA, Harvard, Vancouver, ISO, and other styles
42

Vernhes, Jean-Adrien. "Échantillonnage Non Uniforme : Application aux filtrages et aux conversions CAN/CNA (Convertisseurs Analogique-Numérique et Numérique/Analogique) dans les télécommunications par satellite." Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/15601/1/JAVernhes.pdf.

Full text
Abstract:
The theory of uniform signal sampling, developed in particular by C. Shannon, is the foundation of digital signal processing. Since then, many studies have been devoted to nonuniform sampling. On the one hand, nonuniform sampling makes it possible to model the imperfections of uniform sampling devices. On the other hand, sampling may be made deliberately nonuniform in order to benefit from particular properties, notably relaxed conditions on the choice of the mean sampling frequency. Most of this work remains theoretical, adopting simplified sampling schemes and signal models. Yet in many current application domains, such as satellite communications, analog-to-digital conversion operates under strong constraints on the bandwidths involved, owing in particular to the very high frequencies used. These operating conditions accentuate the imperfections of the electronic devices performing the sampling and dictate the choice of specific signal models and sampling schemes. The general objective of this thesis is to identify sampling models suited to this applicative setting. These apply to random bandpass signals, a classical model in telecommunications. They must take into account technological and economic factors as well as on-board complexity constraints, and possibly integrate functionalities specific to telecommunications. The first contribution of this thesis is to develop nonuniform sampling formulas that transfer into the digital domain functionalities that are difficult to implement in the analog domain at the frequencies considered.
The second contribution is to characterize and compensate the synchronization errors of particular nonuniform sampling devices, namely time-interleaved analog-to-digital converters, via supervised or blind methods.
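As background to the nonuniform-sampling theme of this thesis, a minimal least-squares reconstruction of a bandlimited signal from nonuniform samples can be sketched as follows. This is a textbook illustration, not the thesis's satellite-specific schemes; the signal is synthesized from the same sinc basis, so recovery is exact up to numerical precision.

```python
import numpy as np

def sinc_matrix(t, centers, T):
    """Evaluate the sinc basis sinc((t - c)/T) at times t for each center c."""
    return np.sinc((t[:, None] - centers[None, :]) / T)

def reconstruct_nonuniform(t_samples, x_samples, t_grid, centers, T):
    """Least-squares fit of a bandlimited sinc expansion to nonuniform samples."""
    A = sinc_matrix(t_samples, centers, T)
    coeffs, *_ = np.linalg.lstsq(A, x_samples, rcond=None)
    return sinc_matrix(t_grid, centers, T) @ coeffs

# Synthetic check: draw a signal from the sinc basis, sample it at 60
# nonuniform (random) instants, and reconstruct it on a regular grid.
B = 5.0                            # assumed bandwidth (Hz)
T = 1.0 / (2.0 * B)                # Nyquist spacing
centers = np.arange(0.0, 1.0 + T / 2, T)
rng = np.random.default_rng(1)
t_samples = rng.uniform(0.0, 1.0, 60)
c_true = rng.standard_normal(centers.size)
x_samples = sinc_matrix(t_samples, centers, T) @ c_true
t_grid = np.linspace(0.0, 1.0, 201)
x_hat = reconstruct_nonuniform(t_samples, x_samples, t_grid, centers, T)
x_ref = sinc_matrix(t_grid, centers, T) @ c_true
```

With more samples than basis coefficients, the least-squares fit is overdetermined, which is what gives nonuniform schemes their tolerance to irregular sample positions.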
APA, Harvard, Vancouver, ISO, and other styles
43

Choi, Hyun. "Jitter measurement of high-speed digital signals using low-cost signal acquisition hardware and associated algorithms." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34661.

Full text
Abstract:
This dissertation proposes new methods for measuring jitter of high-speed digital signals. The proposed techniques are twofold. First, a low-speed jitter measurement environment is realized by using a jitter expansion sensor. This sensor uses a low-frequency reference signal as compared to high-frequency reference signals required in standard high-speed signal jitter measurement instruments. The jitter expansion sensor generates a low-speed signal at the output, which contains jitter content of the original high-speed digital signal. The low-speed sensor output signal can be easily acquired with a low-speed digitizer and then analyzed for jitter. The proposed low-speed jitter measurement environment using the jitter expansion sensor enhances the reliability of current jitter measurement approaches since low-speed signals used as a reference signal and a sensor output signal can be generated and applied to measurement systems with reduced additive noise. The second approach is direct digitization without using a sensor, in which a high-speed digital signal with jitter is incoherently sub-sampled and then reconstructed in the discrete-time domain by using digital signal reconstruction algorithms. The core idea of this technique is to remove the hardware required in standard sampling-based jitter measurement instruments for time/phase synchronization by adopting incoherent sub-sampling as compared to coherent sub-sampling and to reduce the need for a high-speed digitizer by sub-sampling a periodic signal over its many realizations. In the proposed digitization technique, the signal reconstruction algorithms are used as a substitute for time/phase synchronization hardware. When the reconstructed signal is analyzed for jitter in digital post-processing, a self-reference signal is extracted from the reconstructed signal by using wavelet denoising methods. This digitally generated self-reference signal alleviates the need for external analog reference signals. 
The self-reference signal is used as a timing reference when timing dislocations of the reconstructed signal are measured in the discrete-time domain. Various types of jitter of the original high-speed reference signals can be estimated using the proposed jitter analysis algorithms.
APA, Harvard, Vancouver, ISO, and other styles
44

Habool, Al-Shamery Maitham. "Reconstruction of multiple point sources by employing a modified Gerchberg-Saxton iterative algorithm." Thesis, University of Sussex, 2018. http://sro.sussex.ac.uk/id/eprint/79826/.

Full text
Abstract:
Digital holography has been developed and used in many applications. It is a technique by which a wavefront can be recorded and then reconstructed, often even in the absence of the original object. In this project, we use digital holography methods in which the original object amplitude and phase are recorded numerically, allowing these data to be downloaded to a spatial light modulator (SLM). This provides digital holography with capabilities that are not available using optical holographic methods. The digitally reconstructed image can be refocused to different depths depending on the reconstruction distance. This remarkable aspect of digital holography is useful in many applications, and one of the most beneficial is the study of biological cells. In this research, in-line and off-axis point-source digital holography with numerical reconstruction has been studied. The point-source hologram can be used in many biological applications. As the original object we use the binary amplitude Fresnel zone plate, which is made of rings with alternating opaque and transparent transmittance. The in-line hologram of a spherical wave of wavelength λ emanating from the point source is employed first in the project. We subsequently employ an off-axis point source, in which the original point-source object is translated away from its on-axis location. Firstly, we create the binary amplitude Fresnel zone plate (FZP), which is considered the hologram of the point source. We develop a phase-only digital hologram calculation technique for the single point-source object, using a modified Gerchberg-Saxton algorithm (MGSA) instead of the non-iterative algorithm employed in classical analogue holography. The first complex amplitude distribution, i(x, y), is the result of the Fourier transform of the point-source phase combined with a random phase.
This complex field distribution is the input of the iteration process. Secondly, we propagate this light field by using the Fourier transform method. Next, we apply the first constraint by modifying the amplitude distribution, that is, by replacing it with the measured modulus and keeping the phase distribution unchanged. We use the root mean square error (RMSE) criterion between the reconstructed field and the target field to control the iteration process. The RMSE decreases at each iteration, giving rise to an error reduction in the reconstructed wavefront. We then extend this method to the reconstruction of multiple point sources. The overall aim of this thesis has thus been to create an algorithm that is able to reconstruct multi-point-source objects from only their modulus. The method could then be used for biological microscopy applications in which it is necessary to determine the position of a fluorescing source within a volume of biological tissue.
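The iteration described above (propagate, impose the measured modulus, back-propagate, impose the phase-only constraint, and track an RMSE-type error) can be sketched as follows. This is an illustrative Gerchberg-Saxton loop for a multi-point target, not the thesis's exact MGSA:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=100, seed=0):
    """Find a phase-only field whose Fourier modulus approximates
    target_amp, alternating between the two constraints (sketch)."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, target_amp.shape))
    errors = []
    for _ in range(n_iter):
        far = np.fft.fft2(field)                          # propagate to image plane
        errors.append(np.sqrt(np.mean((np.abs(far) - target_amp) ** 2)))
        far = target_amp * np.exp(1j * np.angle(far))     # impose measured modulus
        field = np.exp(1j * np.angle(np.fft.ifft2(far)))  # impose phase-only constraint
    return field, errors

# Two point sources as the target image; scale the target so its energy
# matches what a phase-only hologram can deliver (Parseval budget).
target = np.zeros((32, 32))
target[5, 7] = 1.0
target[20, 12] = 1.0
target *= (32 * 32) / np.linalg.norm(target)
holo, errs = gerchberg_saxton(target)
```

The recorded error sequence plays the role of the RMSE criterion in the abstract: it is used to monitor convergence of the alternating projections.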
APA, Harvard, Vancouver, ISO, and other styles
45

Davis, Philip. "Quantifying the Gains of Compressive Sensing for Telemetering Applications." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595775.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
In this paper we study a new streaming Compressive Sensing (CS) technique that aims to replace high speed Analog to Digital Converters (ADC) for certain classes of signals and reduce the artifacts that arise from block processing when conventional CS is applied to continuous signals. We compare the performance of both streaming and block processing methods on several types of signals and quantify the signal reconstruction quality when packet loss is applied to the transmitted sampled data.
APA, Harvard, Vancouver, ISO, and other styles
46

Chan, Chi-wing. "Design of 1-D and 2-D perfect reconstruction filter banks /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20717908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Victorin, Amalia. "Multi-taper method for spectral analysis and signal reconstruction of solar wind data." Thesis, KTH, Rymd- och plasmafysik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91824.

Full text
Abstract:
Fluctuations in solar wind characteristics such as speed, temperature, magnetic field strength, and density are associated with pulsations in the magnetosphere. Coherent magnetohydrodynamic waves in the solar wind may sometimes be a direct source of periodic pulsations in the frequency interval 1 to 7 mHz in the magnetosphere. In studies of the solar wind and the way its variation affects the magnetosphere, the significance of different frequency components and their signal form are of interest. Spectral analysis and signal reconstruction are important tools in these studies, and in this report the Multi-Taper Method (MTM) of spectral analysis is compared to the "classic" method, using the Hanning window and Fourier transformation. The MTM-SSA toolkit, developed by the Department of Atmospheric Science at the University of California, is used to ascertain whether the MTM might be suitable. The advantages of the MTM are reduced information loss in analysed data sequences and statistical support in the analysis. Besides the compared methods of spectral analysis, an attempt has been made to test the validity of the adiabatic law, assumed as the relation between the thermal pressure and the density in the solar wind plasma. It was unfortunately difficult to estimate the gamma parameter of this relation, possibly due to the turbulent behaviour of the solar wind.
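A minimal multi-taper PSD estimate, averaging periodograms over orthogonal DPSS (Slepian) tapers, can be sketched as follows. The SciPy-based code is a generic illustration, not the MTM-SSA toolkit used in the thesis:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=4.0, K=7):
    """Multi-taper PSD: average periodograms over K orthogonal DPSS tapers."""
    n = len(x)
    tapers = dpss(n, NW, Kmax=K)                     # shape (K, n), unit-energy tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return np.fft.rfftfreq(n, 1.0 / fs), spectra.mean(axis=0) / fs

# Synthetic check: a sinusoid at 0.1 cycles/sample should dominate the PSD.
t = np.arange(512)
x = np.sin(2.0 * np.pi * 0.1 * t)
freqs, psd = multitaper_psd(x, fs=1.0)
```

Averaging over several orthogonal tapers is what gives the MTM its reduced variance and reduced information loss compared with a single Hanning-windowed periodogram, at the cost of a controlled broadening of spectral peaks by roughly NW/n in normalized frequency.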
APA, Harvard, Vancouver, ISO, and other styles
48

Vuppamandla, Kalyana. "Real-time implementation of signal reconstruction algorithm for time-based a/d converters." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lethmate, Ralf. "Novel radial scan strategies and image reconstruction in MRI." Lyon 1, 2001. http://www.theses.fr/2001LYO10272.

Full text
Abstract:
Recently, there has been a strong revival of interest in radial sampling techniques in Magnetic Resonance Imaging (MRI). These make it possible to image objects with very short transverse relaxation times and are not very sensitive to motion. In this work we propose 1) new 2D/3D radial sampling strategies and 2) advanced MRI image reconstruction algorithms, such as gridding techniques using original density compensation, Bayesian methods, and BURS. These algorithms constitute a considerable advance in the MRI field, since image reconstruction becomes possible from any sampling pattern. To increase signal intensity, it is advantageous to sample as soon as the gradients ramp up; the sample positions then differ from the ideal positions of the distributions used (2D Projection Reconstruction (PR-2D) and linogram). Image reconstruction, however, requires precise knowledge of these positions, which can be estimated through a preliminary experiment or an approach based on the Gabor transform. In 3D imaging, we propose five isotropic equidistributions, which we compare to the PR-3D technique, which suffers from excessive oversampling at the poles. We have emphasized image quality, ease of implementation on the scanner, and acquisition times, which can thus be reduced by 30%. To our knowledge, these equidistributions had never been applied to MRI before. We also propose a new 3D dynamic imaging method, promising for angiography, perfusion imaging, etc. It is based on these equidistributions and uses a new "spherical keyhole" approach. While adding the temporal dimension, the acquisition time remains identical to that of a classical 3D radial acquisition. Results are presented for pseudo-data and real data.
APA, Harvard, Vancouver, ISO, and other styles
50

Poisson, J. B. "Reconstruction de trajectoires de cibles mobiles en imagerie RSO circulaire aéroportée." Phd thesis, Ecole nationale supérieure des telecommunications - ENST, 2013. http://tel.archives-ouvertes.fr/tel-01002732.

Full text
Abstract:
Airborne circular SAR imaging provides a wealth of information about the imaged areas and about moving targets. Objects can be observed from several angles, and the continuous illumination of a scene makes it possible to generate several successive images of the same area. The objective of this thesis is to develop a method for reconstructing the trajectories of moving targets in single-channel circular SAR imaging, and to study the performance of the proposed method. We first measure the apparent coordinates of the moving targets in the SAR images and their defocus parameter. This yields information about the targets' motion, notably velocity and acceleration. We then use these measurements to define a system of nonlinear equations linking the real trajectories of the moving targets to their apparent trajectories. Through a mathematical and numerical analysis of the stability of this system, we show that only a moving-target model with constant velocity allows the trajectories to be reconstructed accurately, provided the angular excursion is sufficient. We then study the influence of image resolution on trajectory-reconstruction performance, by theoretically computing the measurement accuracies and the resulting reconstruction accuracies. We demonstrate the theoretical existence of an optimal azimuth resolution, depending on the radiometry of the targets and on the validity of the models studied. Finally, we validate the developed method on two real data sets acquired in X-band by ONERA's SETHI and RAMSES NG sensors, and confirm the theoretical performance analyses of this method.
APA, Harvard, Vancouver, ISO, and other styles