Dissertations / Theses on the topic 'Imaging inverse problems'

1

Leung, Wun Ying Valerie. "Inverse problems in astronomical and general imaging." Thesis, University of Canterbury. Electrical and Computer Engineering, 2002. http://hdl.handle.net/10092/7513.

Abstract:
The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, while the physical processes contain an infinite amount of information, only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. The process of interpolation is also examined from the point of view of superresolution, as the two processes can be viewed as complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse-problem techniques devised for it seek either to estimate or to compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. The bispectrum technique is another post-processing method, which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel.
However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. An additional advantage of this approach is that, since speckle images are recorded in a narrow band while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilises elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the reduction of valuable observing time. In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object.
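For orientation, the centre-of-gravity spot estimator criticized above is only a few lines; a minimal NumPy sketch (illustrative, not code from the thesis; `spot` is a hypothetical array holding one Shack-Hartmann subaperture image) makes its noise sensitivity plain, since every pixel, noise included, contributes with a weight set by its position:

    import numpy as np

    def centroid(spot):
        # Centre-of-gravity spot location for one subaperture image.
        # Noise in far-away pixels enters with a large lever arm, which is
        # why this estimator is suboptimal compared with model-based fits.
        spot = np.asarray(spot, dtype=float)
        ys, xs = np.indices(spot.shape)
        total = spot.sum()
        return (ys * spot).sum() / total, (xs * spot).sum() / total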
2

Szasz, Teodora. "Advanced beamforming techniques in ultrasound imaging and the associated inverse problems." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30221/document.

Abstract:
Ultrasound (US) imaging allows non-invasive, ultra-high frame rate imaging procedures at reduced cost. Cardiac, abdominal, fetal, and breast imaging are some of the applications where it is extensively used as a diagnostic tool. In a classical US scanning process, short acoustic pulses are transmitted through the region of interest of the human body. The backscattered echo signals are then beamformed to create radiofrequency (RF) lines. Beamforming (BF) plays a key role in US image formation, influencing the resolution and the contrast of the final image. The objective of this thesis is to model BF as an inverse problem, relating the raw channel data to the signals to be recovered. The proposed BF framework improves the contrast and the spatial resolution of US images compared with existing BF methods. To begin with, we investigated the existing BF methods in medical US imaging. We briefly review the most common BF techniques, starting with the standard delay-and-sum method and moving to the best-known adaptive BF techniques, such as minimum variance BF. Afterwards, we investigated the use of sparse priors in creating original two-dimensional beamforming methods for ultrasound imaging. The proposed approaches detect the strong reflectors in the scanned medium based on the well-known Bayesian information criterion used in statistical modeling. Furthermore, we propose a new way of addressing BF in US imaging, by formulating it as a linear inverse problem relating the reflected echoes to the signal to be recovered. Our approach offers flexibility in the choice of statistical assumptions on the signal to be beamformed and is robust to a reduced number of pulse emissions. Finally, we investigated the use of the non-Gaussianity properties of the RF signals in the BF process, by assuming alpha-stable statistics of US images.
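As a reference point for the inverse-problem formulation, the standard delay-and-sum baseline can be sketched for a single image point under an assumed plane-wave transmit (a simplified illustration; the array geometry, sampling parameters and names are hypothetical, not code from the thesis):

    import numpy as np

    def delay_and_sum(channel_data, element_x, fs, c, x, z):
        # channel_data: (n_elements, n_samples) raw RF data from one shot
        # element_x:    (n_elements,) lateral element positions [m]
        # fs: sampling frequency [Hz]; c: speed of sound [m/s]
        # Plane-wave transmit at normal incidence: transmit delay = z / c.
        element_x = np.asarray(element_x, dtype=float)
        rx = np.sqrt(z**2 + (x - element_x) ** 2) / c    # receive paths
        idx = np.round((z / c + rx) * fs).astype(int)    # total times of flight
        idx = np.clip(idx, 0, channel_data.shape[1] - 1)
        return channel_data[np.arange(len(element_x)), idx].sum()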
3

Gregson, James. "Applications of inverse problems in fluids and imaging." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54081.

Abstract:
Three applications of inverse problems relating to fluid imaging and image deblurring are presented. The first two, tomographic reconstruction of dye concentration fields from multi-view video and deblurring of photographs, are addressed by a stochastic optimization scheme that allows a wide variety of priors to be incorporated into the reconstruction process within a straightforward framework. The third, estimation of fluid velocities from volumetric dye concentration fields, highlights a previously unexplored connection between fluid simulation and proximal algorithms from convex optimization. This connection allows several classical imaging inverse problems to be investigated in the context of fluids, including optical flow, denoising and deconvolution. The connection also allows inverse problems to be incorporated into fluid simulation for the purposes of physically-based regularization of optical flow and for stylistic modifications of fluid captures. Through both methods and all three applications the importance of incorporating domain-specific priors into inverse problems for fluids and imaging is highlighted.
Faculty of Science, Department of Computer Science (Graduate).
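The proximal algorithms underlying this connection are built from operators of the form prox_tf(v) = argmin_x f(x) + ||x - v||^2/(2t); a generic proximal-gradient (ISTA) loop with the soft-thresholding prox, sketched below, shows the basic machinery under simplifying assumptions (a dense matrix A and an l1 prior, not the thesis's fluid operators):

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, n_iter=200):
        # Solves min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - step * grad, step * lam)
        return x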
4

Lecharlier, Loïc. "Blind inverse imaging with positivity constraints." Doctoral thesis, Université Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209240.

Abstract:
In imaging inverse problems, the operator or matrix describing the image formation system is generally assumed to be known; equivalently, for a linear system, its impulse response is assumed known. However, this is not a realistic assumption for many practical applications in which this operator is in fact unknown, or known only approximately. One then faces a so-called "blind" inversion problem. In the case of translation-invariant systems, one speaks of "blind deconvolution", since both the original image or object and the impulse response must be estimated from the single observed image, which results from a convolution and is affected by measurement errors. This problem is notoriously difficult, and to mitigate the ambiguities and numerical instabilities inherent in this type of inversion, additional information or constraints must be used, such as positivity, which has proved to be a powerful stabilizing lever in non-blind imaging problems. The thesis proposes new blind inversion algorithms in a discrete or discretized setting, assuming that the unknown image, the matrix to be inverted and the data are positive. The problem is formulated as a (non-convex) optimization problem in which the data-fidelity term to be minimized, modelling either Poisson-type data (Kullback-Leibler divergence) or data corrupted by Gaussian noise (least squares), is augmented by penalty terms on the unknowns of the problem. The optimization strategy consists of alternating multiplicative updates of the image to be reconstructed and of the matrix to be inverted, derived from the minimization of surrogate cost functions valid in the positive case. The fairly general framework allows several types of penalties to be used, including the (smoothed) total variation of the image. An optional normalization of the impulse response or of the matrix is also provided at each iteration. Convergence results for these algorithms are established in the thesis, both for the decrease of the cost functions and for the convergence of the sequence of iterates towards a stationary point. The proposed methodology is successfully validated by numerical simulations on different applications such as blind deconvolution of astronomical images, non-negative matrix factorization for hyperspectral imaging, and deconvolution of densities in statistics.
Doctorate in Sciences
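A generic form of the alternating multiplicative updates described above (NMF-style surrogate minimization of the Kullback-Leibler data term under positivity; a hedged sketch that omits the thesis's penalty terms) could read:

    import numpy as np

    def blind_multiplicative_kl(Y, K, X, n_iter=100, eps=1e-12):
        # Alternately updates X (image) and K (operator) so that Y ~ K @ X,
        # with Y, K, X >= 0, decreasing the Kullback-Leibler data term.
        ones = np.ones_like(Y)
        for _ in range(n_iter):
            X *= (K.T @ (Y / (K @ X + eps))) / (K.T @ ones + eps)  # image step
            K *= ((Y / (K @ X + eps)) @ X.T) / (ones @ X.T + eps)  # operator step
            K /= K.sum(axis=0, keepdims=True) + eps                # normalization
        return K, X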
5

Zhang, Wenlong. "Forward and Inverse Problems Under Uncertainty." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE024/document.

Abstract:
This thesis contains two different subjects. In the first part, two cases are considered: the thin plate spline smoother model and elliptic boundary equations with uncertain boundary data. In this part, stochastic convergence of the finite element methods is proved for each problem. In the second part, we provide a mathematical analysis of the linearized inverse problem in multifrequency electrical impedance tomography. We present a mathematical and numerical framework for a procedure of imaging the anisotropic electrical conductivity tensor using a novel technique called Diffusion Tensor Magneto-acoustography, and propose an optimal control approach for reconstructing the cross-property factor relating the diffusion tensor to the anisotropic electrical conductivity tensor. We prove convergence and Lipschitz-type stability of the algorithm and present numerical examples to illustrate its accuracy. The cell model for electropermeabilization is demonstrated. We study effective parameters in a homogenization model, and demonstrate numerically the sensitivity of these effective parameters to the critical microscopic parameters governing electropermeabilization.
6

Zhu, Sha. "A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging." PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00844748.

Abstract:
Synthetic Aperture Radar (SAR) imaging is a well-known technique in the domains of remote sensing, aerospace surveillance, geography and mapping. To obtain high-resolution images under noise, it becomes very important to take into account the characteristics of the targets in the observed scene, the different measurement uncertainties and the modeling errors. Conventional imaging methods are based on i) over-simplified scene models, ii) a simplified linear forward model (the mathematical relations between the transmitted signals, the received signals and the targets) and iii) a very simple inverse fast Fourier transform (IFFT) for the inversion, resulting in low-resolution, noisy images with unsuppressed speckle and high sidelobe artifacts. In this thesis, we propose a Bayesian approach to SAR imaging, which overcomes many drawbacks of the classical methods and yields higher resolution, more stable images and more accurate parameter estimation for target recognition. The proposed unifying approach is used for inverse problems in mono-, bi- and multi-static SAR imaging, as well as for micromotion target imaging. Appropriate priors for modeling different target scenes, in terms of target feature enhancement during imaging, are proposed. Fast and effective estimation methods with simple and hierarchical priors are developed. The problem of hyperparameter estimation is also handled within this Bayesian framework. Results on synthetic, experimental and real data demonstrate the effectiveness of the proposed approach.
7

Alfowzan, Mohammed Fowzan. "Solutions to Space-Time Inverse Problems." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621791.

Abstract:
Two inverse problems are investigated in this dissertation, taking into account both spatial and temporal aspects. The first problem addresses the underdetermined image reconstruction problem for dynamic SPECT. The quality of the reconstructed image is often limited by having fewer observations than the number of voxels. The proposed algorithms make use of the generalized α-divergence function to improve estimation performance. The first algorithm is based on an alternating minimization framework that minimizes a regularized α-divergence objective function. We demonstrate that selecting an adaptive α policy, depending on the time evolution of the voxels, gives better performance than a fixed α assignment. The second algorithm is based on Newton's method. A regularized approach has been taken to avoid stability issues. Newton's method is generally computationally demanding due to the complexity associated with inverting the Hessian matrix. A fast Newton-based method is proposed using majorization-minimization techniques that diagonalize the Hessian matrix. In dynamically evolving systems, the prediction matrix plays an important role in the estimation process, and an estimation technique for the prediction matrix using the α-divergence function is proposed. The simulation results show that our algorithms provide better performance than techniques based on the Kullback-Leibler distance. The second problem is the recovery of data transmitted over free-space optical communication channels using orbital angular momentum (OAM). In the presence of atmospheric turbulence, crosstalk occurs among the OAM optical modes, resulting in an error floor at a relatively high bit error rate. The modulation format considered for this problem is Q-ary pulse position modulation (PPM). We propose and evaluate three joint detection strategies to overcome the OAM crosstalk problem: i) maximum likelihood sequence estimation (MLSE), ii) Q-PPM factor graph detection, and iii) branch-and-bound detection. We compare the complexity and the bit-error-rate performance of these strategies in realistic scenarios.
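For reference, the generalized α-divergence between nonnegative arrays p and q is commonly written (here in the Amari/Cichocki normalization; the dissertation's exact form may differ) as

    D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)} \sum_i \left[ \alpha\, p_i + (1-\alpha)\, q_i - p_i^{\alpha} q_i^{1-\alpha} \right], \qquad \alpha \neq 0, 1,

which recovers the Kullback-Leibler distance KL(p||q) in the limit α → 1 and KL(q||p) as α → 0, consistent with the comparison against Kullback-Leibler-based techniques reported above.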
8

Rückert, Nadja. "Studies on two specific inverse problems from imaging and finance." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-91587.

Abstract:
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson distributed data, in particular the reconstruction of images, as well as with the identification of the volatility surface from observed option prices. In Part I we examine the choice of the regularization parameter when reconstructing an image, which is disturbed by Poisson noise, with Tikhonov-type regularization. This type of regularization is a generalization of the classical Tikhonov regularization in the Banach space setting and often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine the methods for choosing the regularization parameter numerically on the basis of two test images and real PET data. In Part II we consider the estimation of the volatility function from observed call option prices with the explicit formula which has been derived by Dupire using the Black-Scholes partial differential equation. The option prices are only available as discrete noisy observations so that the main difficulty is the ill-posedness of the numerical differentiation. Finite difference schemes, as regularization by discretization of the inverse and ill-posed problem, do not overcome these difficulties when they are used to evaluate the partial derivatives. Therefore we construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite difference schemes and the new algorithm for synthetic and real option prices.
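The explicit Dupire formula referred to here expresses the local volatility directly through derivatives of the call-price surface C(K, T); in the zero-dividend form, with r the short rate,

    \sigma^2(K, T) = \frac{\partial C / \partial T + r K \, \partial C / \partial K}{\tfrac{1}{2} K^2 \, \partial^2 C / \partial K^2},

so the second strike derivative in the denominator is precisely the numerical differentiation of discrete, noisy option prices whose ill-posedness Part II addresses.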
9

Som, Subhojit. "Topics in Sparse Inverse Problems and Electron Paramagnetic Resonance Imaging." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1282135281.

10

Zamanian, Sam Ahmad. "Hierarchical Bayesian approaches to seismic imaging and other geophysical inverse problems." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92970.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
In many geophysical inverse problems, smoothness assumptions on the underlying geologic model are utilized to mitigate the effects of poor data coverage and observational noise and to improve the quality of the inferred model parameters. In the context of Bayesian inference, these smoothness assumptions take the form of a prior distribution on the model parameters. Conventionally, the regularization parameters defining these assumptions are fixed independently from the data or tuned in an ad hoc manner. However, it is often the case that the smoothness properties of the true earth model are not known a priori, and furthermore, these properties may vary spatially. In the seismic imaging problem, for example, where the objective is to estimate the earth's reflectivity, the reflectivity model is smooth along a particular reflector but exhibits a sharp contrast in the direction orthogonal to the reflector. In such cases, defining a prior using predefined smoothness assumptions may result in posterior estimates of the model that incorrectly smooth out these sharp contrasts. In this thesis, we explore the application of Bayesian inference to different geophysical inverse problems and seek to address issues related to smoothing by appealing to the hierarchical Bayesian framework. We capture the smoothness properties of the prior distribution on the model by defining a Markov random field (MRF) on the set of model parameters and assigning weights to the edges of the underlying graph; we refer to these parameters as the edge strengths of the MRF. We investigate two cases where the smoothing is specified a priori and introduce a method for estimating the edge strengths of the MRF. In the first part of this thesis, we apply a Bayesian inference framework (where the edge strengths of the MRF are predetermined) to the problem of characterizing the fractured nature of a reservoir from seismic data. Our methodology combines different features of the seismic data, particularly P-wave reflection amplitudes and scattering attributes, to allow for estimation of fracture properties under a larger physical regime than would be attainable using only one of these data types. Through this application, we demonstrate the capability of our parameterization of the prior distribution with edge strengths to both enforce smoothness in the estimates of the fracture properties and capture a priori information about geological features in the model (such as a discontinuity that may arise in the presence of a fault). We solve the inference problem via loopy belief propagation to approximate the posterior marginal distributions of the fracture properties, as well as their maximum a posteriori (MAP) and Bayes least squares estimates. In the second part of the thesis, we investigate how the parameters defining the prior distribution are connected to the model covariance and address the question of how to optimize these parameters in the context of the seismic imaging problem. We formulate the seismic imaging problem within the hierarchical Bayesian setting, where the edge strengths are treated as random variables to be inferred from the data, and provide a framework for computing the marginal MAP estimate of the edge strengths by application of the expectation-maximization (E-M) algorithm. We validate our methodology on synthetic datasets arising from 2-D models. The images we obtain after inferring the edge strengths exhibit the desired spatially-varying smoothness properties and yield sharper, more coherent reflectors. 
In the final part of the thesis, we shift our focus and consider the problem of time-lapse seismic processing, where the objective is to detect changes in the subsurface over a period of time using repeated seismic surveys. We focus on the realistic case where the surveys are taken with differing acquisition geometries. In such situations, conventional methods for processing time-lapse data involve inverting surveys separately and subtracting the inversion models to estimate the change in model parameters; however, such methods often perform poorly as they do not correctly account for differing model uncertainty between surveys due to differences in illumination and observational noise. Applying the machinery explored in the previous chapters, we formulate the time-lapse processing problem within the hierarchical Bayesian setting and present a framework for computing the marginal MAP estimate of the time-lapse change model using the E-M algorithm. The results of our inference framework are validated on synthetic data from a 2-D time-lapse seismic imaging example, where the hierarchical Bayesian estimates significantly outperform conventional time-lapse inversion results.
11

Bhandari, Ayush. "Inverse problems in time-of-flight imaging : theory, algorithms and applications." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95867.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014.
Time-of-flight (ToF) cameras utilize a combination of phase and amplitude information to return real-time, three-dimensional information about a scene in the form of depth images. Such cameras have a number of scientific and consumer-oriented applications. In this work, we formalize a mathematical framework that leads to a unifying perspective on tackling inverse problems that arise in the ToF imaging context. Starting from first principles, we discuss the implications of time- and frequency-domain sensing of a scene. From a linear systems perspective, this amounts to an operator sampling problem, where the operator depends on the physical parameters of the scene or the bio-sample being investigated. Having presented some examples of inverse problems, we discuss detailed solutions that benefit from scene-based priors such as sparsity and rank constraints. Our theory is corroborated by experiments performed using ToF/Kinect cameras. Applications of this work include multi-bounce light decomposition, ultrafast imaging and fluorophore lifetime estimation.
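For a single modulation frequency, the phase-to-depth principle behind such cameras reduces to a textbook relation (a sketch of the standard continuous-wave ToF formula, not code from the thesis):

    import numpy as np

    C = 299_792_458.0  # speed of light [m/s]

    def tof_depth(phase, f_mod):
        # phase: measured phase delay [rad]; f_mod: modulation frequency [Hz].
        # Depth is half the round-trip path, hence the factor 4*pi; the
        # range is ambiguous beyond c / (2 * f_mod).
        return C * np.asarray(phase) / (4.0 * np.pi * f_mod)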
12

Yin, Ke. "New algorithms for solving inverse source problems in imaging techniques with applications in fluorescence tomography." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48945.

Abstract:
This thesis is devoted to solving the inverse source problem arising in image reconstruction problems. In general, the solution is non-unique and the problem is severely ill-posed. Therefore, small perturbations, such as noise in the data and modeling error in the forward problem, cause huge errors in the computations. In practice, the most widely used methods tackle the problem with Tikhonov-type regularization, which minimizes a cost function combining a regularization term and a data-fitting term. However, because the two tasks, regularization and data fitting, are coupled together in Tikhonov regularization, they are difficult to solve, even when each task can be solved efficiently on its own. We propose a method that overcomes the major difficulties, namely the non-uniqueness of the solution and noisy data fitting, separately. First we find a particular solution, called the orthogonal solution, that satisfies the data-fitting term. Then we add to it a correction function in the kernel space, so that the final solution fulfills the regularization and other physical requirements. The key idea is that the correction function in the kernel has no impact on the data fitting, and the regularization is imposed on a smaller space. Moreover, no parameter is needed to balance the data-fitting and regularization terms. As a case study, we apply the proposed method to Fluorescence Tomography (FT), an emerging imaging technique well known for its ill-posedness and the low image resolution of existing reconstruction techniques. We demonstrate by theory and examples that the proposed algorithm can drastically improve the computation speed and the image resolution over existing methods.
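The split into a data-fitting particular solution and a kernel-space correction can be prototyped with the pseudoinverse on a small dense problem (a toy NumPy sketch; the thesis targets large tomographic operators, and `z` stands for whatever candidate the regularization produces):

    import numpy as np

    def split_solution(A, b, z):
        # x0 = A^+ b is the minimum-norm solution fitting the data;
        # P = I - A^+ A projects z onto the null space of A, so
        # x = x0 + P z leaves A @ x, and hence the data fit, unchanged.
        A_pinv = np.linalg.pinv(A)
        x0 = A_pinv @ b
        P = np.eye(A.shape[1]) - A_pinv @ A
        return x0 + P @ z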
13

Wintz, Timothée. "Super-resolution in wave imaging." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE052/document.

Abstract:
Different modalities in wave imaging each present limitations in terms of resolution or contrast. In this work, we present a mathematical model of the ultrafast ultrasound imaging modality together with reconstruction methods which can improve contrast and resolution in ultrasonic imaging. We introduce two methods which improve contrast and locate blood vessels below the diffraction limit while simultaneously estimating the blood velocity. We also present a reconstruction method in electrical impedance tomography which allows the reconstruction of microscopic parameters from multi-frequency measurements using the theory of homogenization.
14

Hugelier, Siewert. "Approaches to inverse problems in chemical imaging : applications in super-resolution and spectral unmixing." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10144/document.

Abstract:
Besides chemical information, chemical imaging also offers insight into the spatial distribution of the samples. Within this thesis, we distinguish between two different types of images: spatial-temporal images (super-resolution fluorescence microscopy) and spatial-spectral images (unmixing). In early super-resolution fluorescence microscopy, a low number of fluorophores were active per image. Currently, the field is evolving towards high-density imaging, which requires new ways of analysis. We propose SPIDER, an image deconvolution approach with multiple penalties. These penalties directly translate the properties of the blinking emitters used in super-resolution fluorescence microscopy imaging. SPIDER allows investigating highly dynamic structural and morphological changes in biological samples with a high fluorophore density. We applied the method to live-cell imaging of a HEK-293T cell labeled with DAKAP-Dronpa and demonstrated a spatial resolution down to 55 nm and a time sampling of 0.5 s. Unmixing hyperspectral images with MCR-ALS provides spatial and spectral information about the individual contributions in the mixture. Because the pixel neighborhood is lost when the hyperspectral data cube is unfolded into a two-way matrix, spatial information cannot be added as a constraint during the analysis. We therefore propose an alternative approach in which an additional refolding/unfolding step is performed in each iteration. This data manipulation allows global spatial features to be added to the palette of MCR-ALS constraints. From this idea, we also developed several constraints and show their application on experimental data.
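Computationally, the per-iteration refolding/unfolding step proposed above is a reshape between the (pixels x channels) matrix and the image cube; a schematic iteration (with hypothetical `spatial_constraint` and `update_spectra` helpers standing in for the actual MCR-ALS constraint and spectral steps) could read:

    import numpy as np

    def mcr_als_spatial(cube, S, n_iter, spatial_constraint, update_spectra):
        # cube: (ny, nx, n_channels) hyperspectral image; S: (n_channels, k).
        ny, nx, nch = cube.shape
        D = cube.reshape(ny * nx, nch)              # unfold to a two-way matrix
        for _ in range(n_iter):
            C = D @ S @ np.linalg.inv(S.T @ S)      # least-squares concentrations
            C = spatial_constraint(C.reshape(ny, nx, -1)).reshape(ny * nx, -1)
            S = update_spectra(D, C)                # e.g. least squares + positivity
        return C.reshape(ny, nx, -1), S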
15

Wei, Hsin-Yu. "Magnetic induction tomography for medical and industrial imaging : hardware and software development." Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558901.

Abstract:
The main topics of this dissertation are the hardware and software developments in magnetic induction tomography imaging techniques. In the hardware sections, all the tomography systems developed by the author are presented and discussed in detail. The developed systems can be divided into two categories according to the property of the target imaging materials: high-conductivity materials and low-conductivity materials. Each system has its own suitable application, and each is thus tested under different circumstances. In terms of the software development, the forward and inverse problems have been studied, including eddy current problem modelling, the derivation of sensitivity map formulae, and the equations of iterative and non-iterative inverse solvers. The Biot-Savart theory was implemented in the 'two-potential' method used in the eddy current model in order to improve the system's flexibility. Many different magnetic induction tomography schemes are proposed for the first time in this field of research, their aim being to improve the spatial and temporal resolution of the final reconstructed images. These novel schemes usually involve some modifications of the system hardware and of the forward/inverse calculations. For example, the rotational scheme can improve the ill-posedness and edge detectability of the system; the volumetric scheme can provide extra spatial resolution in the axial direction; and the temporal scheme can improve the temporal resolution by using the correlation between consecutive datasets. Volumetric imaging requires an intensive amount of extra computational resources. To overcome the issue of memory constraints when solving large-scale inverse problems, a matrix-free method was proposed, also for the first time in magnetic induction tomography. All the proposed algorithms are verified by experimental data obtained from suitable tomography systems developed by the author. Although magnetic induction tomography is a new imaging technique, it is believed to be well developed for real-life applications, and several potential applications are suggested, including an initial proof-of-concept study for a challenging low-conductivity two-phase flow imaging process. In this thesis, a range of contributions has been made in the field of magnetic induction tomography, which will help carry magnetic induction tomography research further.
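The matrix-free idea mentioned for the large-scale inverse problem can be illustrated with SciPy's LinearOperator: the regularized normal matrix J^T J + lambda I is never assembled, only its action on a vector (a generic sketch; `jvp` and `jtvp` are hypothetical callables for the sensitivity and adjoint actions, not the author's implementation):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def solve_matrix_free(jvp, jtvp, b, n_params, lam=1e-3):
        # Solves (J^T J + lam * I) x = J^T b with conjugate gradients,
        # using only Jacobian-vector products.
        def matvec(v):
            return jtvp(jvp(v)) + lam * v

        A = LinearOperator((n_params, n_params), matvec=matvec, dtype=float)
        x, info = cg(A, jtvp(b))
        return x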
16

Guastavino, Sabrina. "Learning and inverse problems: from theory to solar physics applications." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/998315.

Abstract:
The problem of approximating a function from a set of discrete measurements has been extensively studied since the seventies. Our theoretical analysis proposes a formalization of the function approximation problem which allows dealing with inverse problems and supervised kernel learning as two sides of the same coin. The proposed formalization takes into account arbitrary noisy data (deterministically or statistically defined) and arbitrary loss functions (possibly seen as a log-likelihood), handling both direct and indirect measurements. The core idea of this part relies on the analogy between statistical learning and inverse problems. One of the main pieces of evidence for the connection between these two areas is that regularization methods, usually developed for ill-posed inverse problems, can be used for solving learning problems. Furthermore, the spectral regularization convergence rate analyses provided in the two areas share the same source conditions but are carried out with either an increasing number of samples in learning theory or a decreasing noise level in inverse problems. Even more generally, regularization via sparsity-enhancing methods is widely used in both areas, and well-known ℓ1-penalized methods can be applied for solving both learning and inverse problems. In the first part of the thesis, we analyze this connection at three levels: (1) at an infinite-dimensional level, we define an abstract function approximation problem from which the two problems can be derived; (2) at a discrete level, we provide a unified formulation according to a suitable definition of sampling; and (3) at a convergence rates level, we provide a comparison between the convergence rates given in the two areas, by quantifying the relation between the noise level and the number of samples. In the second part of the thesis, we focus on a specific class of problems where measurements are distributed according to a Poisson law. We provide a data-driven, asymptotically unbiased, and globally quadratic approximation of the Kullback-Leibler divergence, and we propose Lasso-type methods for solving sparse Poisson regression problems: PRiL, for Poisson Reweighted Lasso, and an adaptive version of this method, APRiL, for Adaptive Poisson Reweighted Lasso, proving consistency properties in estimation and variable selection, respectively. Finally, we consider two problems in solar physics: 1) forecasting solar flares (a learning application) and 2) the desaturation problem for solar flare images (an inverse problem application). The first application concerns the prediction of solar storms using images of the magnetic field on the sun, in particular physics-based features extracted from active regions in data provided by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). The second application concerns the reconstruction of Extreme Ultra-Violet (EUV) solar flare images recorded by a second instrument on board SDO, the Atmospheric Imaging Assembly (AIA). We propose a novel sparsity-enhancing method, SE-DESAT, to reconstruct images affected by saturation and diffraction without using any a priori estimate of the background solar activity.
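A hedged sketch of the reweighting idea behind such Lasso-type methods (a generic adaptive-ℓ1 iteration, not the PRiL/APRiL derivation of the thesis): solve an ℓ1-penalized problem, then re-solve with penalty weights shrunk on the coordinates that came out large.

    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def reweighted_lasso(A, y, lam, n_outer=5, n_inner=200, eps=1e-3):
        # min_x 0.5 * ||A x - y||^2 + lam * sum_i w_i |x_i|, with the weights
        # refreshed from the previous solution (Candes-Wakin-Boyd style).
        n = A.shape[1]
        w = np.ones(n)
        x = np.zeros(n)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(n_outer):
            for _ in range(n_inner):                 # inner ISTA loop
                x = soft(x - step * (A.T @ (A @ x - y)), step * lam * w)
            w = 1.0 / (np.abs(x) + eps)              # large coefficients -> small penalty
        return x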
17

Burvall, Anna. "Axicon imaging by scalar diffraction theory." Doctoral thesis, KTH, Microelectronics and Information Technology, IMIT, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3736.

Abstract:

Axicons are optical elements that produce Bessel beams, i.e., long and narrow focal lines along the optical axis. The narrow focus makes them useful in e.g. alignment, harmonic generation, and atom trapping, and they are also used to increase the longitudinal range of applications such as triangulation, light sectioning, and optical coherence tomography. In this thesis, axicons are designed and characterized for different kinds of illumination, using the stationary-phase and the communication-modes methods.

The inverse problem of axicon design for partially coherent light is addressed. A design relation, applicable to Schell-model sources, is derived from the Fresnel diffraction integral, simplified by the method of stationary phase. This approach both clarifies the old design method for coherent light, which was derived using energy conservation in ray bundles, and extends it to the domain of partial coherence. The design rule applies to light from such multimode emitters as light-emitting diodes, excimer lasers and some laser diodes, which can be represented as Gaussian Schell-model sources.

Characterization of axicons in coherent, oblique illumination is performed using the method of stationary phase. It is shown that in inclined illumination the focal shape changes from the narrow Bessel distribution to a broad asteroid-shaped focus. It is proven that an axicon of elliptical shape will compensate for this deformation. These results, which are all confirmed both numerically and experimentally, open possibilities for using axicons in scanning optical systems to increase resolution and depth range.

Axicons are normally manufactured as refractive cones or as circular diffractive gratings. They can also be constructed from ordinary spherical surfaces, using the spherical aberration to create the long focal line. In this dissertation, a simple lens axicon consisting of a cemented doublet is designed, manufactured, and tested. The advantage of the lens axicon is that it is easily manufactured.

The longitudinal resolution of the axicon varies. The method of communication modes, earlier used for analysis of information content for e.g. line or square apertures, is applied to the axicon geometry and yields an expression for the longitudinal resolution. The method, which is based on a bi-orthogonal expansion of the Green function in the Fresnel diffraction integral, also gives the number of degrees of freedom, or the number of information channels available, for the axicon geometry.

Keywords: axicons, diffractive optics, coherence, asymptotic methods, communication modes, information content, inverse problems

18

Alberti, Giovanni S. "On local constraints and regularity of PDE in electromagnetics : applications to hybrid imaging inverse problems." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:1b30b3b7-29b1-410d-ae30-bd0a87c9720b.

Abstract:
The first contribution of this thesis is a new regularity theorem for time-harmonic Maxwell's equations with less than Lipschitz complex anisotropic coefficients. By using the L^p theory for elliptic equations, it is possible to prove H^1 and Hölder regularity results, provided that the coefficients are W^{1,p} for some p > 3. This improves previous regularity results, where the assumption W^{1,∞} for the coefficients was believed to be optimal. The method can be easily extended to the case of bi-anisotropic materials, for which a separate approach turns out to be unnecessary. The second focus of this work is the boundary control of the Helmholtz and Maxwell equations to enforce local constraints inside the domain. More precisely, we look for suitable boundary conditions such that the corresponding solutions and their derivatives satisfy certain local non-zero constraints. Complex geometric optics solutions can be used to construct such illuminations, but are impractical for several reasons. We propose a constructive approach to this problem based on the use of multiple frequencies. The suitable boundary conditions are explicitly constructed and give the desired constraints, provided that a finite number of frequencies, given a priori, are chosen in a fixed range. This method is based on the holomorphicity of the solutions with respect to the frequency and on the regularity theory for the PDE under consideration. This theory finds applications to several hybrid imaging inverse problems, where the unknown coefficients have to be imaged from internal measurements. In order to perform the reconstruction, we often need to find suitable boundary conditions such that the corresponding solutions satisfy certain non-zero constraints, depending on the particular problem under consideration. The multiple frequency approach introduced in this thesis represents a valid alternative to the use of complex geometric optics solutions to construct such boundary conditions. Several examples are discussed.
19

Cao, Xiande. "Volume and Surface Integral Equations for Solving Forward and Inverse Scattering Problems." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/65.

Abstract:
In this dissertation, a hybrid volume and surface integral equation (VSIE) is used to solve scattering problems. It is implemented with the RWG basis on the surface and the edge basis in the volume. Numerical results show the correctness of the hybrid VSIE in inhomogeneous media. The MLFMM method is also implemented for the new VSIEs. Furthermore, a synthetic aperture radar imaging method is used for 2D microwave imaging of complex objects. With the mono-static and bi-static interpolation scheme, a 2D FFT is applied for the imaging, with the data simulated by the VSIE method. A background-cancelling scheme is then applied to improve the imaging quality for the targets of interest. Numerical results show the feasibility of applying background cancelling to wider applications.
20

Kim, Yong Yook. "Inverse Problems In Structural Damage Identification, Structural Optimization, And Optical Medical Imaging Using Artificial Neural Networks." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/11111.

Abstract:
The objective of this work was to employ artificial neural networks (NN) to solve inverse problems in different engineering fields, overcoming various obstacles in applying NN to different problems and benefiting from the experience of solving different types of inverse problems. The inverse problems investigated are: 1) damage detection in structures, 2) detection of an anomaly in a light-diffusive medium, such as human tissue using optical imaging, 3) structural optimization of fiber optic sensor design. All of these problems require solving highly complex inverse problems and the treatments benefit from employing neural networks which have strength in generalization, pattern recognition, and fault tolerance. Moreover, the neural networks for the three problems are similar, and a method found suitable for solving one type of problem can be applied for solving other types of problems. Solution of inverse problems using neural networks consists of two parts. The first is repeatedly solving the direct problem, obtaining the response of a system for known parameters and constructing the set of the solutions to be used as training sets for NN. The next step is training neural networks so that the trained neural networks can produce a set of parameters of interest for the response of the system. Mainly feed-forward backpropagation NN were used in this work. One of the obstacles in applying artificial neural networks is the need for solving the direct problem repeatedly and generating a large enough number of training sets. To reduce the time required in solving the direct problems of structural dynamics and photon transport in opaque tissue, the finite element method was used. To solve transient problems, which include some of the problems addressed here, and are computationally intensive, the modal superposition and the modal acceleration methods were employed. The need for generating a large enough number of training sets required by NN was fulfilled by automatically generating the training sets using a script program in the MATLAB environment. This program automatically generated finite element models with different parameters, and the program also included scripts that combined the whole solution processes in different engineering packages for the direct problem and the inverse problem using neural networks. Another obstacle in applying artificial neural networks in solving inverse problems is that the dimension and the size of the training sets required for the NN can be too large to use NN effectively with the available computational resources. To overcome this obstacle, Principal Component Analysis is used to reduce the dimension of the inputs for the NN without excessively impairing the integrity of the data. Orthogonal Arrays were also used to select a smaller number of training sets that can efficiently represent the given system.
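The PCA step used above to shrink the dimension of the NN inputs is straightforward to reproduce; a minimal NumPy sketch (with `X` a hypothetical matrix of direct-problem responses, one training sample per row):

    import numpy as np

    def pca_reduce(X, n_components):
        # Project rows of X (n_samples, n_features) onto the top principal
        # directions, retaining most of the variance with far fewer inputs.
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:n_components]               # principal directions
        return Xc @ components.T, components, mean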
Ph. D.
22

Nicu, Ana-Maria. "Approximation and representation of functions on the sphere : applications to inverse problems in geodesy and medical imaging." Nice, 2012. http://www.theses.fr/2012NICE4007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work concerns the approximation and representation of functions on the sphere, with applications to source-localization inverse problems in geodesy and medical imaging. The thesis is structured in six chapters as follows. Chapter 1 introduces the geodesy and M/EEG inverse problems: the goal is to recover a density inside the ball (the Earth, or the human brain) from partially known data measured on its surface. Chapter 2 gives the mathematical background used throughout the thesis. The resolution of the inverse problem involves two steps: a data transmission problem and a density recovery problem. In practice, the data are only available on part of the sphere, such as a spherical cap: the northern hemisphere of the head (M/EEG) or a continent (geodesy). For this purpose, Chapter 3 gives an efficient method, set up using Gauss-Legendre quadrature, for building an appropriate Slepian basis with good properties on the region of interest, on which the data are expressed. The data transmission problem (Chapter 4) consists in estimating the data (a spherical harmonic expansion) over the whole sphere from noisy measurements expressed in the Slepian basis. The density recovery step is detailed in Chapter 5, where three density models (monopolar, dipolar, and inclusions) are studied; it is solved by a best quadratic rational approximation method on planar sections of the sphere. Properties of the density and of the operator linking it to the generated potential are also given. Chapter 6 examines Chapters 3, 4 and 5 from a numerical point of view, presenting numerical tests that illustrate source-localization results for geodesy and M/EEG problems when only partial data on the sphere are available.
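To illustrate why partial-sphere data make the expansion estimate delicate, here is a small sketch (illustrative only; real spherical harmonics built from SciPy's complex ones) that fits spherical-harmonic coefficients by least squares from samples confined to a polar cap. The resulting ill-conditioning is what motivates the Slepian basis.

import numpy as np
from scipy.special import sph_harm

def real_sh(m, l, theta, phi):
    # Real-valued spherical harmonics from scipy's complex convention.
    if m > 0:
        return np.sqrt(2) * (-1.0) ** m * sph_harm(m, l, theta, phi).real
    if m < 0:
        return np.sqrt(2) * (-1.0) ** m * sph_harm(-m, l, theta, phi).imag
    return sph_harm(0, l, theta, phi).real

rng = np.random.default_rng(0)
L = 4
theta = rng.uniform(0, 2 * np.pi, 400)      # azimuth, full circle
phi = rng.uniform(0, np.pi / 3, 400)        # colatitude restricted to a cap
basis = np.column_stack([real_sh(m, l, theta, phi)
                         for l in range(L + 1) for m in range(-l, l + 1)])
c_true = rng.normal(size=basis.shape[1])
data = basis @ c_true + 0.01 * rng.normal(size=400)
c_hat, *_ = np.linalg.lstsq(basis, data, rcond=None)
# The condition number quantifies the cap-data ill-conditioning that
# Slepian functions are designed to mitigate.
print(np.linalg.cond(basis), np.linalg.norm(c_hat - c_true))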
23

Hart, Vern Philip II. "The Application of Tomographic Reconstruction Techniques to Ill-Conditioned Inverse Problems in Atmospheric Science and Biomedical Imaging." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A methodology is presented for creating tomographic reconstructions from various projection data, and the relevance of the results to applications in atmospheric science and biomedical imaging is analyzed. The fundamental differences between transform and iterative methods are described, and the properties of the imaging configurations are addressed. The presented results are particularly suited to highly ill-conditioned inverse problems in which the imaging data are restricted as a result of poor angular coverage, limited detector arrays, or insufficient access to an imaging region. The class of reconstruction algorithms commonly used in sparse tomography, the algebraic reconstruction techniques, is presented, analyzed, and compared. These algorithms are iterative in nature, and their accuracy depends significantly on the initialization of the algorithm, the so-called initial guess. A considerable amount of research was conducted into novel initialization techniques as a means of improving the accuracy. The main body of this dissertation comprises three smaller papers, which describe the application of the presented methods to atmospheric and medical imaging modalities. The first paper details the measurement of mesospheric airglow emissions at two camera sites operated by Utah State University; reconstructions of vertical airglow emission profiles are presented, including three-dimensional models of the layer formed using a novel fanning technique. The second paper describes the application of the method to the imaging of polar mesospheric clouds (PMCs) by NASA's Aeronomy of Ice in the Mesosphere (AIM) satellite; the contrasting elements of straight-line and diffusive tomography are also discussed in the context of ill-conditioned imaging problems. A number of developing modalities in medical tomography use near-infrared light, which interacts strongly with biological tissue and results in significant optical scattering; in order to perform tomography on the diffused signal, simulations describing the sporadic photon migration must be incorporated into the algorithm. The third paper presents a novel Monte Carlo technique derived from the optical scattering solution for spheroidal particles designed to mimic mitochondria and deformed cell nuclei, and simulated results of optical diffusion are presented. The potential for improving existing imaging modalities through continued development of sparse tomography and optical scattering methods is discussed.
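A minimal sketch of the algebraic reconstruction technique (Kaczmarz sweeps) discussed above, showing the role of the initial guess; the toy 4-ray, 3-pixel system is an illustrative assumption, not data from the dissertation.

import numpy as np

def art(A, b, x0, n_iters=50, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz): sweep over the rows
    of the projection matrix A, correcting x so each ray-sum matches b.
    The starting point x0 (the 'initial guess') strongly influences the
    result when the problem is ill-conditioned."""
    x = x0.astype(float).copy()
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy sparse-view problem: 4 rays through a 3-pixel object.
A = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
x_true = np.array([1., 2., 3.])
b = A @ x_true
print(art(A, b, x0=np.zeros(3)))       # ~[1, 2, 3]
print(art(A, b, x0=np.full(3, 2.0)))   # also converges from another start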
24

Veras, Johann. "Electrical Conductivity Imaging via Boundary Value Problems for the 1-Laplacian." Doctoral diss., University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6377.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We study an inverse problem which seeks to image the internal conductivity map of a body from one measurement of boundary and interior data. In our study the interior data is the magnitude of the current density induced by electrodes. Access to interior measurements has been made possible since the work of M. Joy et al. in the early 1990s and couples two physical principles: electromagnetics and magnetic resonance. In 2007, Nachman et al. showed that it is possible to recover the conductivity from the magnitude of one current density field inside the body. The method, now known as Current Density Impedance Imaging, is based on solving boundary value problems for the 1-Laplacian in an appropriate Riemannian metric space. We consider two types of methods, based on level sets and on a variational approach, which aim to solve the specific boundary value problems associated with the 1-Laplacian. We address the Cauchy and Dirichlet problems with full and partial data, as well as the Complete Electrode Model (CEM). The latter model is known to describe most accurately the voltage potential distribution in a conductive body, while taking into account the transition of current from the electrode to the body. For the CEM the problem is non-unique; we characterize the non-uniqueness and explain which additional measurements fix the solution. Multiple numerical schemes for each of the methods are implemented to demonstrate the computational feasibility.
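For reference, the variational problem behind Current Density Impedance Imaging, as described in the literature cited above, can be transcribed as

\min_{u \,:\, u|_{\partial\Omega} = f} \; \int_{\Omega} |J(x)|\,|\nabla u(x)|\,\mathrm{d}x ,
\qquad
\nabla \cdot \left( |J|\, \frac{\nabla u}{|\nabla u|} \right) = 0 \quad \text{in } \Omega ,

where |J| is the measured interior current-density magnitude and f the boundary voltage; the level-set and variational methods in the thesis are two ways of handling this degenerate (1-Laplacian) operator.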
Ph.D.
Doctorate
Mathematics
Sciences
Mathematics
25

Paleo, Pierre. "Méthodes itératives pour la reconstruction tomographique régularisée." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT070/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, tomographic imaging techniques have diversified across many applications. However, experimental constraints often lead to limited data, for example in fast scans or in medical imaging where the radiation dose is a primary concern. The data limitation may appear as a low signal-to-noise ratio, few views, or a missing angular wedge. Artifacts, on the other hand, are detrimental to reconstruction quality. In these contexts, the standard techniques show their limitations. In this work, we explore how regularized reconstruction methods can handle these challenges. These methods treat the reconstruction as an inverse problem, and the solution is generally computed by an optimization procedure. Implementing regularized reconstruction methods entails both designing an appropriate regularization and choosing the best optimization algorithm for the resulting problem. On the modelling side, we consider three types of regularizers in a unified mathematical framework, along with their efficient implementation: total variation, wavelets, and dictionary-based reconstruction. On the algorithmic side, we study which state-of-the-art convex optimization algorithms are best suited to the problem and to the target parallel architecture (GPU), and we propose a new optimization algorithm with an increased convergence speed. We then show how the regularized reconstruction models can be extended to take the usual artifacts into account, namely ring artifacts and local tomography artifacts; notably, a novel quasi-exact local tomography reconstruction algorithm is proposed.
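As a minimal instance of the total-variation piece of such a framework (identity forward operator, i.e. denoising; a CPU sketch, not the thesis's GPU code), a few Chambolle-Pock primal-dual iterations in NumPy:

import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of the forward-difference gradient above.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(y, lam=0.1, n_iters=200):
    """Chambolle-Pock iterations for min_x 0.5*||x - y||^2 + lam*TV(x)."""
    x = y.copy(); x_bar = y.copy()
    px = np.zeros_like(y); py = np.zeros_like(y)
    tau = sigma = 1.0 / np.sqrt(8.0)       # tau*sigma*||grad||^2 <= 1
    for _ in range(n_iters):
        gx, gy = grad(x_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py) / lam)   # project |p| <= lam
        px /= norm; py /= norm
        x_old = x
        x = (x + tau * div(px, py) + tau * y) / (1.0 + tau)
        x_bar = 2 * x - x_old
    return x

noisy = np.eye(32) * 5 + np.random.default_rng(0).normal(0, 0.5, (32, 32))
print(np.abs(grad(tv_denoise(noisy, lam=0.3))[0]).mean())  # piecewise-flat output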
26

Lu, Wei. "Hough transforms for shape identification and applications in medical image processing /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3115568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Fromenteze, Thomas. "Développement d'une technique de compression passive appliquée à l'imagerie microonde." Thesis, Limoges, 2015. http://www.theses.fr/2015LIMO0061/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work concerns the development of a compression technique applied to the simplification of imaging systems in the microwave domain. The approach relies on passive devices able to compress the transmitted and received waves, allowing a reduction of the number of active modules required by certain radar architectures. The principle exploits the modal diversity present in the developed components, making it compatible with ultra-wide bandwidths. Several proofs of concept are carried out with the different devices studied in this work, allowing the technique to be adapted to a wide range of architecture and bandwidth specifications.
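The principle — hardware compresses, software decompresses — can be sketched in a few lines; the all-pass random transfer function below is an illustrative stand-in for the calibrated response of a real passive compressor.

import numpy as np

rng = np.random.default_rng(1)
n = 512

# Known transfer function of the passive compressor: an all-pass,
# frequency-diverse filter that spreads short pulses into long codes.
H = np.exp(2j * np.pi * rng.random(n))

# Two point echoes, measured through the device (freq-domain product).
scene = np.zeros(n)
scene[100], scene[230] = 1.0, 0.6
y = np.fft.ifft(H * np.fft.fft(scene))        # long, noise-like record

# Computational decompression: divide out the calibrated response.
scene_hat = np.fft.ifft(np.fft.fft(y) / H).real
print(np.argsort(scene_hat)[-2:])             # -> echoes at samples 230 and 100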
28

Zeitler, Armin. "Investigation of mm-wave imaging and radar systems." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00832647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the last decade, microwave and millimeter-wave systems have gained importance in civil and security applications. Owing to the increasing maturity and availability of circuits and components, these systems are becoming more compact while being less expensive. Furthermore, quantitative imaging has been conducted at lower frequencies using computationally intensive inverse-problem algorithms. Due to the ill-posed character of the inverse problem, these algorithms are in general very sensitive to noise: the key to their successful application to experimental data is the precision of the measurement system. Only a few research teams investigate systems for imaging in the W-band. In this manuscript such a system is presented, designed to provide scattered-field data to quantitative reconstruction algorithms. The manuscript is divided into six chapters. Chapter 2 describes the theory for computing numerically the scattered fields of known objects. In Chapter 3, the W-band measurement setup in the anechoic chamber is presented and preliminary measurement results are analyzed; relying on these results, the error sources are studied and corrected by post-processing. The final results are used for the qualitative reconstruction of all three targets of interest and for the quantitative imaging of the small cylinder. The reconstructed images are compared in detail in Chapter 4. Close-range imaging is investigated in Chapter 5 using a vector network analyzer and a radar system, motivated by a prospective application: the detection of foreign object debris (FOD) on airport runways. Conclusions are drawn in Chapter 6 and some future investigations are discussed.
29

Guerrero, prado Patricio. "Reconstruction tridimensionnelle des objets plats du patrimoine à partir du signal de diffusion inélastique." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV035/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Three-dimensional characterization of flat ancient material objects has remained challenging for conventional X-ray tomography methods because of their anisotropic morphology and flattened geometry. To overcome the limitations of such methodologies, an imaging modality based on Compton scattering is studied in this work. Classical X-ray tomography treats Compton-scattered data as noise in the image formation process, whereas in Compton scattering tomography the conditions are set so that the inelastically scattered (Compton) data become the principal image-contrasting agent. Under these conditions, relative rotations between the sample and the imaging setup are no longer needed. Mathematically, the problem is addressed by means of the conical Radon transform. A model of the direct problem is presented in which the output of the system is the spectral image obtained from an input object; the inverse problem is then to estimate the three-dimensional distribution of the electronic density of the input object from this spectral image. The feasibility of the methodology is supported by numerical simulations.
30

Henriksson, Tommy. "CONTRIBUTION TO QUANTITATIVE MICROWAVE IMAGING TECHNIQUES FOR BIOMEDICAL APPLICATIONS." Doctoral thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-5882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation presents a contribution to quantitative microwave imaging for breast tumor detection. The study, made in the framework of a jointly supervised Ph.D. thesis between University Paris-SUD 11 (France) and Mälardalen University (Sweden), has been conducted on two experimental microwave imaging setups: the existing 2.45 GHz planar camera (France) and the multi-frequency flexible robotic system under development (Sweden). In this context, a flexible 2D scalar numerical tool based on a Newton-Kantorovich (NK) scheme has been developed. Quantitative microwave imaging is a three-dimensional vectorial nonlinear inverse scattering problem, in which the complex permittivity of an object is reconstructed from the measured scattered field produced by the object. The NK scheme is used to deal with the nonlinearity and the ill-posed nature of this problem. A TM polarization and a two-dimensional medium configuration have been considered in order to avoid its vectorial aspect. The solution is found iteratively by minimizing the square norm of the error with respect to the scattered-field data. The convergence of such an iterative process requires at least two conditions. First, an efficient calibration of the experimental system has to be associated with the minimization of model errors. Second, the mean-square difference of the scattered field introduced by the presence of the tumor has to be large enough, relative to the sensitivity of the imaging system. The existing planar camera associated with the flexible 2D scalar NK code is considered as an experimental platform for quantitative breast imaging. A preliminary numerical study shows that the multi-view planar system is quite efficient for realistic breast tumor phantoms, given its characteristics (frequency, planar geometry and water as a coupling medium), as long as realistic noisy data are considered. Furthermore, a multi-incidence planar system, more appropriate in terms of antenna-array arrangement, is proposed and its concept numerically validated. On the experimental side, a new fluid mixture for the realization of a narrow-band cylindrical breast phantom and a deep investigation of the calibration process and model-error minimization are presented. This leads to the first quantitative reconstruction of a realistic breast phantom using multi-view data from the planar camera. Next, both the qualitative and quantitative reconstruction of 3D inclusions in the cylindrical breast phantom, using data from the whole retina, are shown and discussed. Finally, the work is extended towards the flexible robotic system.
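The core of a Newton-Kantorovich/Gauss-Newton loop of the kind described above fits in a few lines; the toy forward model and Jacobian below are illustrative assumptions, not the electromagnetic operator of the thesis.

import numpy as np

def newton_kantorovich(forward, jacobian, y_meas, x0, n_iters=20, reg=1e-6):
    """Iteratively linearize the nonlinear scattering operator and solve a
    regularized least-squares update, minimizing ||forward(x) - y||^2."""
    x = x0.copy()
    for _ in range(n_iters):
        r = y_meas - forward(x)               # residual on the receivers
        J = jacobian(x)                       # Frechet derivative at x
        JtJ = J.T @ J + reg * np.eye(x.size)  # small damping: ill-posedness
        x = x + np.linalg.solve(JtJ, J.T @ r)
    return x

# Toy nonlinear forward model standing in for the scattered field.
def forward(x):  return np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**2, x[0]*x[1]])
def jacobian(x): return np.array([[2*x[0], 1.0], [np.cos(x[0]), 2*x[1]], [x[1], x[0]]])

x_true = np.array([0.8, 1.3])
x_rec = newton_kantorovich(forward, jacobian, forward(x_true), np.array([1.0, 1.0]))
print(x_rec)   # ~ [0.8, 1.3]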
A dissertation prepared through an international convention for a joint supervision thesis with Université Paris-SUD 11, France
Microwaves in biomedicine
31

Rückert, Nadja [Verfasser], Bernd [Akademischer Betreuer] Hofmann, Bernd [Gutachter] Hofmann, and Christine [Gutachter] Böckmann. "Studies on two specific inverse problems from imaging and finance / Nadja Rückert ; Gutachter: Bernd Hofmann, Christine Böckmann ; Betreuer: Bernd Hofmann." Chemnitz : Universitätsbibliothek Chemnitz, 2012. http://d-nb.info/1214244068/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Gunnarsson, Tommy. "MICROWAVE IMAGING OF BIOLOGICAL TISSUES: applied toward breast tumor detection." Licentiate thesis, Västerås : Department of Computer Science and Electronics, Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Bendjador, Hanna. "Correction d'aberrations et quantification de vitesse du son en imagerie ultrasonore ultrarapide." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLS011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Echography relies on the transmission of ultrasound signals through biological tissues and the processing of the backscattered echoes. Ultrafast ultrasound imaging gives access to physiological events at more than 10,000 frames per second and has enabled high-end techniques such as organ elasticity imaging and highly sensitive quantification of blood flow. During its propagation through complex or heterogeneous media, however, the acoustic wavefront may suffer strong distortions, degrading both the image quality and the ensuing quantitative measurements. Correcting such aberrations is the central goal of the research conducted during this PhD. By studying the statistical properties of interference between scatterers, a matrix formalism has been developed to optimize the angular coherence of the backscattered echoes. Importantly, it allows, for the first time at ultrafast frame rates, both correcting the images and quantifying the speed of sound locally. The speed of sound proves to be a novel biomarker in the example of hepatic steatosis, and possibly for separating the white and grey matter of the brain. The proposed phase-correction method should also be a useful contribution to motion correction in 3D tomography and vascular imaging, offering new horizons to ultrasound imaging.
34

Wörmann, Julian [Verfasser], Martin [Akademischer Betreuer] Kleinsteuber, Martin [Gutachter] Kleinsteuber, and Walter [Gutachter] Stechele. "Structured Co-sparse Analysis Operator Learning for Inverse Problems in Imaging / Julian Wörmann ; Gutachter: Martin Kleinsteuber, Walter Stechele ; Betreuer: Martin Kleinsteuber." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1205069437/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Salahieh, Basel, Jeffrey J. Rodriguez, Sean Stetson, and Rongguang Liang. "Single-image full-focus reconstruction using depth-based deconvolution." SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, 2016. http://hdl.handle.net/10150/624372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In contrast with traditional extended depth-of-field approaches, we propose a depth-based deconvolution technique that accounts for the depth-variant nature of the point spread function of an ordinary fixed-focus camera. The developed technique brings a single blurred image into focus at different depth planes, which can be stitched together based on a depth map to output a full-focus image. Strategies to suppress the deconvolution's ringing artifacts are implemented on three levels: block tiling to eliminate boundary artifacts, reference maps to reduce ringing initiated by sharp edges, and depth-based masking to mitigate artifacts raised by neighboring depth-transition surfaces. The performance is validated numerically for planar and multidepth objects. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
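A sketch of the per-depth deconvolution and stitching idea (Wiener deconvolution as a simple frequency-domain choice; PSFs assumed centered and image-sized; all names illustrative, not the paper's implementation):

import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution: H* / (|H|^2 + NSR).
    psf: same shape as the image, centered."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

def full_focus(image, psf_stack, depth_map):
    """Deconvolve one blurred image with each depth plane's PSF, then pick
    each pixel from the plane indicated by the depth map."""
    planes = [wiener_deconv(image, psf) for psf in psf_stack]
    out = np.zeros_like(image)
    for d, plane in enumerate(planes):
        out[depth_map == d] = plane[depth_map == d]
    return out

The paper's ringing-suppression strategies (block tiling, reference maps, depth-based masking) would operate on top of this basic pipeline.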
36

Dekdouk, Bachir. "Image reconstruction of low conductivity material distribution using magnetic induction tomography." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/image-reconstruction-of-low-conductivity-material-distribution-using-magnetic-induction-tomography(44d6769d-59b1-44c2-a01e-835f8916f69c).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magnetic induction tomography (MIT) is a non-invasive, soft-field imaging modality that has the potential to map the electrical conductivity (σ) distribution inside an object under investigation. In MIT, a number of exciter and receiver coils are distributed around the periphery of the object. A primary magnetic field is emitted by each exciter and interacts with the object, inducing eddy currents that in turn create a secondary field; the latter is coupled to the receiver coils, where voltages are induced. An image reconstruction algorithm is then used to infer the conductivity map of the object. In this thesis, the application of MIT to volumetric imaging of objects with low-conductivity materials (< 5 S/m) and dimensions < 1 m is investigated. In particular, two low-conductivity applications are addressed: imaging cerebral stroke and imaging the saline water in multiphase flows. In low-conductivity applications, the measured signals are small and the spatial sensitivity is critically compromised, making the associated inverse problem severely nonlinear and ill-posed. The main contribution of this study is to investigate three nonlinear optimisation techniques for solving the MIT inverse problem. The first two methods, the regularised Levenberg-Marquardt method and the trust-region Powell's Dog Leg method, employ damping and trust-region strategies respectively; the third is a modification of the Gauss-Newton method utilising a damping regularisation technique. Improved convergence and stability of the inverse solution were observed with these methods compared to the standard Gauss-Newton method. For such nonlinear treatment, re-evaluation of the forward problem is also required; the forward problem is solved numerically using the impedance method, and a weakly coupled field approximation is employed to reduce the computation time and memory requirements. For treating the ill-posedness, different regularisation methods are investigated. Results show that the subspace regularisation technique is suitable for absolute imaging of the stroke in a real head model with synthetic data. Tikhonov-based smoothing and edge-preserving regularisation methods also produced successful results in simulations of oil/water flows. In a practical setup, however, large geometrical and positioning errors remain a major problem, and only difference imaging was viable to achieve a reasonable reconstruction.
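A compact sketch of the damping strategy shared by the regularised Levenberg-Marquardt method mentioned above; the toy exponential forward model is an illustrative assumption, and the adaptive update of the damping factor is the point.

import numpy as np

def levenberg_marquardt(forward, jacobian, y, x0, lam=1e-2, n_iters=30):
    """Damped Gauss-Newton: the damping lam is increased when a step
    raises the misfit and decreased when it lowers it."""
    x = x0.copy()
    cost = np.sum((forward(x) - y) ** 2)
    for _ in range(n_iters):
        J = jacobian(x)
        r = y - forward(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        x_new = x + step
        cost_new = np.sum((forward(x_new) - y) ** 2)
        if cost_new < cost:          # accept and trust the model more
            x, cost, lam = x_new, cost_new, lam * 0.5
        else:                        # reject and damp harder
            lam *= 4.0
    return x

t = np.linspace(0, 1, 20)
fwd = lambda x: x[0] * np.exp(-x[1] * t)
jac = lambda x: np.stack([np.exp(-x[1] * t), -x[0] * t * np.exp(-x[1] * t)], axis=1)
print(levenberg_marquardt(fwd, jac, fwd(np.array([2.0, 3.0])), np.array([1.0, 1.0])))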
37

Mom, Kannara. "Deep learning based phase retrieval for X-ray phase contrast imaging." Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The development of highly coherent X-ray sources, such as third-generation synchrotron radiation facilities, has significantly contributed to the advancement of phase contrast imaging. The high degree of coherence of these sources enables efficient implementation of phase contrast techniques and can increase sensitivity by several orders of magnitude. This novel imaging technique has found applications in a wide range of fields, including materials science, paleontology, bone research, medicine, and biology. It enables the imaging of samples with low-absorption constituents, where traditional absorption-based methods may fail to provide sufficient contrast. Several phase-sensitive imaging techniques have been developed; among them, propagation-based imaging requires no equipment other than the source, the object, and the detector. Although the intensity can be measured at one or several propagation distances, the phase information is lost and must be estimated from those diffraction patterns, a process called phase retrieval. Phase retrieval in this context is a nonlinear, ill-posed inverse problem. Various classical methods have been proposed to retrieve the phase, either by linearizing the problem to obtain an analytical solution or by iterative algorithms. The main purpose of this thesis was to study what new deep learning approaches could bring to this phase retrieval problem; various deep learning algorithms have been proposed and evaluated to address it. In the first part of this work, we show how neural networks can be used to reconstruct directly from measurement data, without model information; the architecture of the Mixed-Scale Dense Network (MS-D Net) is introduced, combining dilated convolutions and dense connections. In the second part, we propose a nonlinear primal-dual algorithm for the retrieval of phase shift and absorption from a single X-ray in-line phase contrast image, and show that choosing different regularizers for absorption and phase can improve the reconstructions. In the third part, we propose to integrate neural networks into an existing optimization scheme using so-called unrolling approaches, in order to give the convolutional neural networks a specific role in the reconstruction. The performance of these algorithms is evaluated using simulated noisy data as well as images acquired at NanoMAX (MAX IV, Lund, Sweden).
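For orientation, the classical alternating-projection baseline against which such iterative phase-retrieval schemes are usually understood (an error-reduction / Gerchberg-Saxton flavour, far-field version for brevity; the thesis itself works with Fresnel propagation and learned operators) looks like this:

import numpy as np

def gerchberg_saxton(meas_amp, support, n_iters=300, seed=0):
    # Alternate between the measured Fourier magnitude and the
    # object-domain constraints (support and realness).
    x = np.random.default_rng(seed).random(meas_amp.shape) * support
    for _ in range(n_iters):
        X = np.fft.fft2(x)
        X = meas_amp * np.exp(1j * np.angle(X))   # keep measured magnitude
        x = np.real(np.fft.ifft2(X)) * support    # enforce support / reality
    return x

support = np.pad(np.ones((8, 8)), 12)             # 32x32 known-support mask
obj = support.copy()                               # toy object on its support
rec = gerchberg_saxton(np.abs(np.fft.fft2(obj)), support)
print(np.abs(rec - obj).max())   # typically small, up to trivial ambiguities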
38

Ygouf, Marie. "Nouvelle méthode de traitement d'images multispectrales fondée sur un modèle d'instrument pour la haut contraste : application à la détection d'exoplanètes." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00843202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ce travail de thèse porte sur l'imagerie multispectrale à haut contraste pour la détection et la caractérisation directe d'exoplanètes. Dans ce contexte, le développement de méthodes innovantes de traitement d'images est indispensable afin d'éliminer les tavelures quasi-statiques dans l'image finale qui restent à ce jour, la principale limitation pour le haut contraste. Bien que les aberrations résiduelles instrumentales soient à l'origine de ces tavelures, aucune méthode de réduction de données n'utilise de modèle de formation d'image coronographique qui prend ces aberrations comme paramètres. L'approche adoptée dans cette thèse comprend le développement, dans un cadre bayésien, d'une méthode d'inversion fondée sur un modèle analytique d'imagerie coronographique. Cette méthode estime conjointement les aberrations instrumentales et l'objet d'intérêt, à savoir les exoplanètes, afin de séparer correctement ces deux contributions. L'étape d'estimation des aberrations à partir des images plan focal (ou phase retrieval en anglais), est la plus difficile car le modèle de réponse instrumentale sur l'axe dont elle dépend est fortement non-linéaire. Le développement et l'étude d'un modèle approché d'imagerie coronographique plus simple se sont donc révélés très utiles pour la compréhension du problème et m'ont inspiré des stratégies de minimisation. J'ai finalement pu tester ma méthode et d'estimer ses performances en terme de robustesse et de détection d'exoplanètes. Pour cela, je l'ai appliquée sur des images simulées et j'ai notamment étudié l'effet des différents paramètres du modèle d'imagerie utilisé. J'ai ainsi démontré que cette nouvelle méthode, associée à un schéma d'optimisation fondé sur une bonne connaissance du problème, peut fonctionner de manière relativement robuste, en dépit des difficultés de l'étape de phase retrieval. En particulier, elle permet de détecter des exoplanètes dans le cas d'images simulées avec un niveau de détection conforme à l'objectif de l'instrument SPHERE. Ce travail débouche sur de nombreuses perspectives dont celle de démontrer l'utilité de cette méthode sur des images simulées avec des coronographes plus réalistes et sur des images réelles de l'instrument SPHERE. De plus, l'extension de la méthode pour la caractérisation des exoplanètes est relativement aisée, tout comme son extension à l'étude d'objets plus étendus tels que les disques circumstellaire. Enfin, les résultats de ces études apporteront des enseignements importants pour le développement des futurs instruments. En particulier, les Extremely Large Telescopes soulèvent d'ores et déjà des défis techniques pour la nouvelle génération d'imageurs de planètes. Ces challenges pourront très probablement être relevés en partie grâce à des méthodes de traitement d'image fondées sur un modèle direct d'imagerie.
39

Zahran, Saeed. "Source localization and connectivity analysis of uterine activity." Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Electrohysterographic imaging (EHGI) allows a noninvasive reconstruction of the electrical potential on the uterus surface from the electrical potential measured on the body surface and anatomical data of the torso. EHGI provides very valuable information about the condition of the uterus, since it yields a refined spatial description of the pathway and magnitude of the electrical waves on the uterine surface, which may assist a variety of clinical interventions. The scientific algorithms behind any EHGI tool preprocess the anatomical data of the patient to provide a computational mesh, filter noisy measurements of the electrical potential, and solve an inverse problem. EHGI is a new and powerful diagnostic technique, and this non-invasive technology is of growing interest to medical industries; its success would be considered a breakthrough in uterine diagnosis. However, in many cases the quality of the reconstructed electrical potential is not accurate enough. The difficulty comes from the fact that the inverse problem in uterine electrohysterography is well known to be mathematically ill-posed. Different methods based on Tikhonov regularization have been used to regularize the problem. We conducted our analysis using a realistic uterus model and aimed at identifying the spatial extent of the sources.
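The Tikhonov step at the heart of such reconstructions is a one-line linear solve; the toy transfer matrix below, with a steeply decaying singular-value spectrum, mimics the ill-posedness of the surface-to-source problem (all names illustrative).

import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(0)
U, _, Vt = np.linalg.svd(rng.normal(size=(40, 30)), full_matrices=False)
A = U @ np.diag(np.logspace(0, -8, 30)) @ Vt     # ill-conditioned operator
x_true = np.zeros(30); x_true[5] = 1.0           # focal source
b = A @ x_true + 1e-6 * rng.normal(size=40)      # noisy surface data

for lam in [1e-12, 1e-8, 1e-4]:
    x_hat = tikhonov(A, b, lam)
    # Error is typically smallest at an intermediate lam
    # (the usual bias-variance trade-off of regularization).
    print(lam, np.linalg.norm(x_hat - x_true))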
40

Shilling, Richard Zethward. "A multi-stack framework in magnetic resonance imaging." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magnetic resonance imaging (MRI) is the preferred imaging modality for visualization of intracranial soft tissues. Surgical planning, and increasingly surgical navigation, use high resolution 3-D patient-specific structural maps of the brain. However, the process of MRI is a multi-parameter tomographic technique where high resolution imagery competes against high contrast and reasonable acquisition times. Resolution enhancement techniques based on super-resolution are particularly well suited in solving the problems of resolution when high contrast with reasonable times for MRI acquisitions are needed. Super-resolution is the concept of reconstructing a high resolution image from a set of low-resolution images taken at different viewpoints or foci. The MRI encoding techniques that produce high resolution imagery are often sub-optimal for the desired contrast needed for visualization of some structures in the brain. A novel super-resolution reconstruction framework for MRI is proposed in this thesis. Its purpose is to produce images of both high resolution and high contrast desirable for image-guided minimally invasive brain surgery. The input data are multiple 2-D multi-slice Inversion Recovery MRI scans acquired at orientations with regular angular spacing rotated around a common axis. Inspired by the computed tomography domain, the reconstruction is a 3-D volume of isotropic high resolution, where the inversion process resembles a projection reconstruction problem. Iterative algorithms for reconstruction are based on the projection onto convex sets formalism. Results demonstrate resolution enhancement in simulated phantom studies, and in ex- and in-vivo human brain scans, carried out on clinical scanners. In addition, a novel motion correction method is applied to volume registration using an iterative technique in which super-resolution reconstruction is estimated in a given iteration following motion correction in the preceding iteration. A comparison study of our method with previously published methods in super-resolution shows favorable characteristics of the proposed approach.
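A minimal 1D sketch of the projection-onto-convex-sets idea used above: each low-resolution stack defines a data-consistency set, and the estimate is cyclically projected onto each. The decimation operators and offsets are illustrative assumptions; with the exact zero-fill adjoint of decimation, x + At(y - A(x)) is the exact projection onto the consistency set.

import numpy as np

def pocs_sr(lr_stacks, operators, shape, n_iters=50):
    """POCS reconstruction: cycle through the low-resolution stacks and
    project the high-resolution estimate onto each consistency set.
    operators[k] = (A, At): forward sampling and its adjoint."""
    x = np.zeros(shape)
    for _ in range(n_iters):
        for y, (A, At) in zip(lr_stacks, operators):
            x = x + At(y - A(x))      # restore consistency with stack k
    return x

n = 64
x_true = np.sin(2 * np.pi * np.arange(n) * 3 / n)

def zero_fill(r, offset):
    out = np.zeros(n); out[offset::2] = r; return out

A0 = lambda x: x[0::2]; At0 = lambda r: zero_fill(r, 0)
A1 = lambda x: x[1::2]; At1 = lambda r: zero_fill(r, 1)
x_rec = pocs_sr([A0(x_true), A1(x_true)], [(A0, At0), (A1, At1)], (n,))
# Here the two offset views tile all samples, so POCS converges in one sweep.
print(np.abs(x_rec - x_true).max())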
41

Abi, rizk Ralph. "High-resolution hyperspectral reconstruction by inversion of integral field spectroscopy measurements. Application to the MIRI-MRS infrared spectrometer of the James Webb Space Telescope." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with inverse-problem approaches to reconstruct a 3D spatio-spectral (2D+λ) image from a set of 2D infrared measurements provided by the Integral Field Spectrometer (IFS) of the Mid-Resolution Spectrometer (MRS) of the Mid-Infrared Instrument on board the James Webb Space Telescope. The reconstruction is challenging because the IFS involves complex components that degrade the measurements: (1) the responses of the components are not perfect and introduce a wavelength-dependent spatial and spectral blurring, (2) the instrument makes several observations of the input with several spatial and spectral fields of view (such as spectral channels and parallel slits), and (3) the output measurements are projected onto multiple 2D detectors and sampled with heterogeneous step sizes. The 3D image reconstruction is an ill-posed problem, mainly because of the spatio-spectral blurring and insufficient spatial sampling. To compensate for the loss of spatial information, the MRS allows multiple observations of the same scene obtained by shifting the telescope pointing, leading to a multi-frame super-resolution (SR) problem. We propose an SR reconstruction algorithm that jointly processes the spatial and spectral information of the degraded 2D measurements in two main steps. First, we design a forward model that describes the response of the IFS as a series of mathematical operators and establishes a relationship between the measurements and the unknown 3D input image. Next, the forward model is used to reconstruct the unknown input, based on a regularized least-squares approach with a convex edge-preserving regularization; we rely on fast half-quadratic approaches based on the Geman and Reynolds formulation to solve the problem. The proposed algorithm mainly includes a fusion step for measurements from different spatio-spectral observations with different blurs and samplings, a multi-frame super-resolution step exploiting the different pointings of the instrument, and a deconvolution step to reduce the blurring. Another forward model for the same instrument is also developed in our work, assuming that the 3D input image lives in a low-dimensional subspace and can be modeled as a linear combination of known spectral components weighted by unknown mixing coefficients, known as the Linear Mixing Model (LMM). We then rely on the Majorize-Minimize Memory Gradient (3MG) optimization algorithm to estimate the unknown mixing coefficients. The subspace approximation reduces the number of unknowns, so the signal-to-noise ratio increases; in addition, the LMM formulation with known spectral components preserves the complex spectral information of the reconstructed 3D image. The proposed reconstruction is tested on several synthetic hyperspectral images with different spatial and spectral distributions. Our algorithm shows a clear deconvolution and a significant improvement of the spatial and spectral resolutions of the reconstructed images compared with state-of-the-art algorithms, particularly around edges.
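The subspace (linear mixing) model can be sketched as follows; with the instrument response reduced to the identity for illustration, estimating the mixing coefficients is a per-pixel linear solve. The full problem composes this model with the blurring and sampling operators and uses 3MG instead; all sizes and names here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_bands, n_comp, ny, nx = 100, 3, 16, 16

# Known spectral components (assumed given in the subspace model).
S = rng.random((n_bands, n_comp))

# Unknown per-pixel mixing coefficients: the only quantities to estimate.
coeffs_true = rng.random((n_comp, ny * nx))
cube = S @ coeffs_true                          # (bands, pixels) data cube

# With a trivial (identity) instrument response, the least-squares
# estimate of the coefficients reduces to a per-pixel linear solve.
coeffs_hat = np.linalg.lstsq(S, cube, rcond=None)[0]
print(np.abs(coeffs_hat - coeffs_true).max())   # ~1e-13: exact recovery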
42

Nilsson, Lovisa. "Data-Driven Methods for Sonar Imaging." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Reconstruction of sonar images is an inverse problem, which is normally solved with model-based methods. These methods may introduce undesired artifacts called angular and range leakage into the reconstruction. In this thesis, a method called Learned Primal-Dual Reconstruction, which combines a data-driven and a model-based approach, is used to investigate the use of data-driven methods for reconstruction within sonar imaging. The method uses primal and dual variables inspired by classical optimization methods where parts are replaced by convolutional neural networks to iteratively find a solution to the reconstruction problem. The network is trained and validated with synthetic data on eight models with different architectures and training parameters. The models are evaluated on measurement data and the results are compared with those from a purely model-based method. Reconstructions performed on synthetic data, where a ground truth image is available, show that it is possible to achieve reconstructions with the data-driven method that have less leakage than reconstructions from the model-based method. For reconstructions performed on measurement data where no ground truth is available, some variants of the learned model achieve a good result with less leakage.
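A single unrolled iteration in the spirit of the Learned Primal-Dual method can be sketched in PyTorch; the blur standing in for the sonar forward operator, the channel counts, and all names are illustrative assumptions, not the thesis's architecture.

import torch
import torch.nn as nn

class PrimalDualIteration(nn.Module):
    """One unrolled iteration: small CNNs replace the proximal operators
    of a classical primal-dual scheme (cf. Learned Primal-Dual)."""
    def __init__(self, n_primal=5, n_dual=5):
        super().__init__()
        self.dual_net = nn.Sequential(
            nn.Conv2d(n_dual + 2, 32, 3, padding=1), nn.PReLU(),
            nn.Conv2d(32, n_dual, 3, padding=1))
        self.primal_net = nn.Sequential(
            nn.Conv2d(n_primal + 1, 32, 3, padding=1), nn.PReLU(),
            nn.Conv2d(32, n_primal, 3, padding=1))

    def forward(self, primal, dual, A, At, y):
        Ax = A(primal[:, :1])                    # current data prediction
        dual = dual + self.dual_net(torch.cat([dual, Ax, y], dim=1))
        Atd = At(dual[:, :1])                    # backprojected dual
        primal = primal + self.primal_net(torch.cat([primal, Atd], dim=1))
        return primal, dual

# Toy usage: a fixed box blur as forward operator (data on the image grid).
kernel = torch.ones(1, 1, 5, 5) / 25.0
A = lambda x: nn.functional.conv2d(x, kernel, padding=2)
At = A                                           # symmetric kernel: self-adjoint
layer = PrimalDualIteration()
x = torch.zeros(1, 5, 32, 32); d = torch.zeros(1, 5, 32, 32)
y = A(torch.rand(1, 1, 32, 32))
x, d = layer(x, d, A, At, y)
print(x.shape)   # torch.Size([1, 5, 32, 32])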
43

Irakarama, Modeste. "Towards Reducing Structural Interpretation Uncertainties Using Seismic Data." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0060/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Subsurface structural models are routinely used for resource estimation, numerical simulations, and risk management; it is therefore important that subsurface models represent the geometry of geological objects accurately. The first step in building a subsurface model is usually to interpret structural features, such as faults and horizons, from a seismic image; the identified structural features are then used to build a subsurface model using interpolation methods. Subsurface models built this way therefore inherit interpretation uncertainties, since a single seismic image often supports multiple structural interpretations. In this manuscript, I study the problem of reducing interpretation uncertainties using seismic data. In particular, I study the problem of using seismic data to determine which structural models are more likely than others in an ensemble of geologically plausible structural models; I refer to this as "appraising structural models using seismic data". I introduce and formalize this problem and propose to solve it by generating synthetic data for each structural interpretation and then computing misfit values for each interpretation, which allows us to rank the different structural interpretations. The main challenge of appraising structural models using seismic data is to propose appropriate data misfit functions. I derive a set of conditions that have to be satisfied by the data misfit function for a successful appraisal of structural models. I argue that since it is not possible to satisfy these conditions using vertical seismic profile (VSP) data, it is not possible to appraise structural interpretations using VSP data in the most general case; the conditions can, in principle, be satisfied for surface seismic data. In practice, however, it remains a challenge to propose and compute data misfit functions that satisfy those conditions. I conclude the manuscript by highlighting practical issues of appraising structural interpretations using surface seismic data. I propose a general data misfit function made of two main components: (1) a residual operator that computes data residuals, and (2) a projection operator that projects the data residuals from the data space into the image domain. This misfit function is therefore localized in space, as it outputs misfit values in the image domain. However, I am still unable to propose a practical implementation of this misfit function that satisfies the conditions imposed for a successful appraisal of structural interpretations; this is a subject for further research.
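The two-component misfit proposed above can be sketched generically; the row-sum "data" and uniform back-spreading projection below are toy stand-ins chosen only to show residuals being mapped back into the image domain.

import numpy as np

def localized_misfit(d_obs, d_syn, project):
    """Structural-appraisal misfit: a residual operator in the data space
    followed by a projection operator mapping the residuals back into the
    image domain, so misfit values are localized in space."""
    residual = d_syn - d_obs            # data-space residuals
    return project(residual) ** 2       # image-domain, per-cell misfit

# Toy stand-in: 'data' are row sums of a 2D model; the projection simply
# spreads each residual uniformly back along its row (adjoint of row-sum).
model_true = np.zeros((8, 8)); model_true[3, 4] = 1.0
model_interp = np.zeros((8, 8)); model_interp[5, 4] = 1.0  # wrong horizon depth
d_obs = model_true.sum(axis=1)
d_syn = model_interp.sum(axis=1)
project = lambda r: np.repeat(r[:, None], 8, axis=1) / 8.0
m = localized_misfit(d_obs, d_syn, project)
print(np.unravel_index(m.argmax(), m.shape))   # highlights a mismatched row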
44

Momey, Fabien. "Reconstruction en tomographie dynamique par approche inverse sans compensation de mouvement." Phd thesis, Université Jean Monnet - Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-00842572.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Tomography is the discipline that seeks to reconstruct a physical quantity over its volume from the indirect information of integrated projections of the object taken at different viewing angles. One of its most widespread applications, and the setting of this thesis, is medical X-ray CT imaging. The motion inherent to any living being, typically respiratory motion and heartbeat, poses serious problems for a classical reconstruction. It is therefore imperative to take it into account, i.e. to reconstruct the imaged subject as a spatio-temporal sequence describing its "anatomical evolution" over time: this is dynamic tomography. Designing a reconstruction method specific to this problem is a major challenge in radiotherapy, where precise localization of the tumour over time is a prerequisite for irradiating cancer cells while best protecting the surrounding healthy tissue. Usual reconstruction methods increase the number of acquired projections, allowing independent reconstructions of several phases of the time-sampled sequence. Others compensate for the motion directly within the reconstruction, modelling it as a deformation field estimated from a previously acquired data set. In this thesis we propose a new approach: building on inverse problems theory, we free dynamic reconstruction from the need for additional data, as well as from the explicit estimation of the motion, which also consumes extra information. We reconstruct the dynamic sequence from the current set of projections alone, with the continuity and periodicity of the motion as the only prior assumptions. The inverse problem is then treated rigorously as the minimization of a data-fidelity term plus a regularization. Our contributions concern the design of a reconstruction method suited to extracting the information optimally given the sparsity of the data (a typical feature of the dynamic problem), in particular by using total variation (TV) as the regularization. We develop a new tomographic projection model, accurate and competitive in computation time, based on separable B-spline functions, which pushes back further the reconstruction limit imposed by data sparsity. These developments are then inserted into a coherent dynamic reconstruction scheme, notably applying an efficient spatio-temporal TV regularization. Our method thus makes optimal use of the only information available, and its implementation is remarkably simple. We first demonstrate the strength of our approach on 2-D+t reconstructions from numerically simulated data. The practical feasibility of the method is then established on 2-D and 3-D+t reconstructions from "real" physical data, acquired on a mechanical phantom and on a patient.
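As a schematic illustration of the variational formulation described in this abstract (a data-fidelity term plus a spatio-temporal TV regularization), the following NumPy sketch minimizes ||Ax - y||² + λ·TV(x) by plain subgradient descent. It is not the thesis algorithm: `A` and `At` are hypothetical projection/backprojection operators, standing in for the B-spline projector developed in the thesis.

import numpy as np

def tv_subgrad(x):
    # Subgradient of anisotropic TV over every axis (space and time):
    # d/dx_i sum|x_{i+1} - x_i| = sign(x_i - x_{i-1}) - sign(x_{i+1} - x_i).
    g = np.zeros_like(x)
    for ax in range(x.ndim):
        s = np.sign(np.diff(x, axis=ax))
        pad = [(0, 0)] * x.ndim
        pad[ax] = (1, 0)
        g += np.pad(s, pad)          # sign(x_i - x_{i-1})
        pad[ax] = (0, 1)
        g -= np.pad(s, pad)          # sign(x_{i+1} - x_i)
    return g

def reconstruct_dynamic(y, A, At, shape, lam=0.1, step=1e-3, n_iter=200):
    x = np.zeros(shape)                      # the 2-D+t or 3-D+t sequence
    for _ in range(n_iter):
        grad = 2.0 * At(A(x) - y)            # gradient of the data term
        x -= step * (grad + lam * tv_subgrad(x))
    return x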
45

Diouane, Youssef. "Globally convergent evolution strategies with application to Earth imaging problem in geophysics." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/12202/1/Diouane.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, there has been significant and growing interest in Derivative-Free Optimization (DFO). This field can be divided into two categories: deterministic and stochastic. Despite addressing the same problem domain, only a few interactions between the two DFO categories have been established in the existing literature. In this thesis, we attempt to bridge this gap by showing how ideas from deterministic DFO can improve the efficiency and the rigor of one of the most successful classes of stochastic algorithms, known as Evolution Strategies (ES's). We propose to equip a class of ES's with known techniques from deterministic DFO. The modified ES's rigorously achieve a form of global convergence under reasonable assumptions; by global convergence, we mean convergence to first-order stationary points independently of the starting point. The modified ES's are extended to handle general constrained optimization problems. Furthermore, we show how to significantly improve the numerical performance of ES's by incorporating a search step at the beginning of each iteration, in which we build a quadratic model using the points where the objective function has been previously evaluated. Motivated by the recent growth of high-performance computing resources and the parallel nature of ES's, an application of our modified ES's to an Earth imaging problem in geophysics is proposed. The results obtained show a substantial improvement in solving this problem.
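The following Python sketch illustrates the kind of safeguard this abstract alludes to: a simple (1, λ)-ES whose step size is only kept when the best offspring decreases the objective by a sufficient amount, a forcing-function idea borrowed from deterministic DFO. All names and constants are illustrative, not the thesis implementation.

import numpy as np

def safeguarded_es(f, x0, sigma=1.0, lam=10, c=1e-4, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iter):
        # Sample lam offspring around the current point.
        offspring = x + sigma * rng.standard_normal((lam, x.size))
        vals = np.array([f(z) for z in offspring])
        best = int(np.argmin(vals))
        if vals[best] <= fx - c * sigma**2:   # sufficient decrease: accept
            x, fx = offspring[best], vals[best]
            sigma *= 2.0                      # expand the step on success
        else:
            sigma *= 0.5                      # reject the step and contract
    return x, fx

# Example: x_min, f_min = safeguarded_es(lambda z: np.sum(z**2), np.ones(5))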
46

Seifi, Mozhdeh. "Signal processing methods for fast and accurate reconstruction of digital holograms." Phd thesis, Université Jean Monnet - Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-01004605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Techniques for fast, 3D, quantitative microscopy are of great interest in many fields. In this context, in-line digital holography has significant potential due to its relatively simple setup (lensless imaging), its three-dimensional character and its temporal resolution. The goal of this thesis is to improve existing hologram reconstruction techniques by employing an "inverse problems" approach. For applications involving objects with parametric shapes, a greedy algorithm has previously been proposed which solves the (inherently ill-posed) inverse problem of reconstruction by maximizing the likelihood between a model of the holographic patterns and the measured data. The first contribution of this thesis is to reduce the computational cost of this algorithm using a multi-resolution approach (FAST algorithm). For the second contribution, a "matching pursuit" type of pattern recognition approach is proposed for hologram reconstruction of volumes containing parametric objects, or non-parametric objects belonging to a few shape classes. This method finds the set of diffraction patterns closest to the measured data using a diffraction-pattern dictionary. The size of the dictionary is reduced by employing a truncated singular value decomposition, yielding a low-cost algorithm. The third contribution of this thesis was carried out in collaboration with the laboratory of fluid mechanics and acoustics of Lyon (LMFA): the greedy algorithm is used in a real application, the reconstruction and tracking of free-falling, evaporating ether droplets. In all the proposed methods, special attention has been paid to improving the accuracy of reconstruction as well as to reducing the computational cost and the number of parameters to be tuned by the user, so that the proposed algorithms can be used with little or no supervision. A Matlab® toolbox (accessible online) has been developed as part of this thesis.
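A minimal NumPy sketch of the second contribution described above, under simplifying assumptions: diffraction patterns are stored as columns of a dictionary `D` (here treated as given), compressed with a truncated SVD, and greedily matched to the recorded hologram. This is an illustration of the technique, not the thesis code.

import numpy as np

def compress_dictionary(D, rank):
    # Truncated SVD: keep the leading `rank` singular components of D,
    # reducing the cost of the correlations computed in the pursuit below.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U[:, :rank] * s[:rank] @ Vt[:rank]

def matching_pursuit(hologram, D, n_atoms=5):
    # Greedily peel off the dictionary atoms (simulated diffraction
    # patterns) that best match the hologram residual.
    Dn = D / np.linalg.norm(D, axis=0)    # unit-norm atoms
    residual = hologram.astype(float).copy()
    atoms, amplitudes = [], []
    for _ in range(n_atoms):
        corr = Dn.T @ residual            # correlation with every atom
        k = int(np.argmax(np.abs(corr)))  # best-matching pattern
        atoms.append(k)
        amplitudes.append(corr[k])
        residual = residual - corr[k] * Dn[:, k]
    return atoms, amplitudes, residual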
47

Cantalloube, Faustine. "Détection et caractérisation d'exoplanètes dans des images à grand contraste par la résolution de problème inverse." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAY017/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Direct imaging of exoplanets provides valuable information about the light they emit, their interactions with their host star environment and their nature. In order to image such objects, advanced data processing tools adapted to the instrument are needed. In particular, the presence of quasi-static speckles in the images, due to optical aberrations distorting the light from the observed star, prevents planetary signals from being distinguished. In this thesis, I present two innovative image processing methods, both based on an inverse problem approach, enabling the disentanglement of the quasi-static speckles from the planetary signals. My work consisted of improving these two algorithms so that they can process on-sky images. The first one, called ANDROMEDA, is an algorithm dedicated to point-source detection and characterization via a maximum likelihood approach. ANDROMEDA makes use of the temporal diversity provided by the image field rotation during the observation to recognize the deterministic signature of a rotating companion over the stellar halo. From the application of the original version to real data, I proposed and qualified improvements in order to deal with the non-stable large-scale structures due to the adaptive optics residuals and with the remaining level of correlated noise in the data. Once ANDROMEDA became operational on real data, I analyzed its performance and its sensitivity to the user parameters, proving the robustness of the algorithm. I also conducted a detailed comparison with the other algorithms widely used by the exoplanet imaging community today, showing that ANDROMEDA is a competitive method with practical advantages. In particular, it is the only method that allows a fully unsupervised detection. Through the numerous tests performed on different data sets, ANDROMEDA proved its reliability and efficiency in extracting companions in a rapid and systematic way (with only one user parameter to be tuned). From these applications, I identified several perspectives whose implementation could significantly improve the performance of the pipeline. The second algorithm, called MEDUSAE, consists in jointly estimating the aberrations (responsible for the speckle field) and the circumstellar objects by relying on a coronagraphic image formation model. MEDUSAE exploits the spectral diversity provided by multispectral data. In order to refine the inversion strategy and probe the most critical parameters, I applied MEDUSAE to a simulated data set generated with the model used in the inversion. To investigate further the impact of the discrepancy between the image model and real images, I then applied the method to realistic simulated images. Finally, I applied MEDUSAE to real data; from the preliminary results obtained, I identified the important inputs required by the method and proposed leads that could be followed to make this algorithm operational on on-sky data.
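The maximum likelihood step at the core of an ANDROMEDA-like detection map can be illustrated with a few lines of NumPy. This sketch assumes white Gaussian noise and a precomputed companion signature `p` (the PSF pattern traced out by the field rotation at a trial position, built by a hypothetical helper not shown here); it is a schematic, not the published pipeline.

import numpy as np

def ml_flux_and_snr(d, p, var=1.0):
    # Closed-form ML estimate of the companion flux at one trial position,
    # fitting the expected signature p to the differential data d.
    num = np.dot(p, d) / var
    den = np.dot(p, p) / var
    flux_hat = num / den          # estimated flux
    snr = num / np.sqrt(den)      # its signal-to-noise ratio
    return flux_hat, snr

# A detection map is obtained by evaluating `snr` over a grid of trial
# positions and thresholding it (e.g. at 5) for unsupervised detection.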
48

Ion, Valentina. "Nonlinear approaches for phase retrieval in the Fresnel region for hard X-ray imaging." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-01015814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The development of highly coherent X-ray sources offers new possibilities to image biological structures at different scales by exploiting the refraction of X-rays. The coherence properties of third-generation synchrotron radiation sources enable efficient implementations of phase contrast techniques. One of the first measurements of the intensity variations due to phase contrast was reported in 1995 at the European Synchrotron Radiation Facility (ESRF). Phase imaging coupled with tomographic acquisition allows three-dimensional imaging with an increased sensitivity compared to absorption CT. This technique is particularly attractive for imaging samples with low-absorption constituents. Phase contrast has many applications, ranging from material science, paleontology and bone research to medicine and biology. Several methods to achieve X-ray phase contrast have been proposed in recent years. In propagation-based phase contrast, the measurements are made at different sample-to-detector distances. While the intensity data can be acquired and recorded, the phase information of the signal has to be "retrieved" from the modulus data alone. Phase retrieval is thus an ill-posed nonlinear problem, and regularization techniques including a priori knowledge are necessary to obtain stable solutions. Several phase recovery methods have been developed in recent years; these approaches generally formulate the phase retrieval problem as a linear one, and nonlinear treatments have not been investigated much. The main purpose of this work was to propose and evaluate new algorithms, in particular taking into account the nonlinearity of the direct problem. In the first part of this work, we present a Landweber-type nonlinear iterative scheme to solve the propagation-based phase retrieval problem. This approach uses the analytic expression of the Fréchet derivative of the phase-intensity relationship and of its adjoint, which are presented in detail. We also study the effect of projection operators on the convergence properties of the method. In the second part of this thesis, we investigate the resolution of the linear inverse problem with an iterative thresholding algorithm in wavelet coordinates. The two former algorithms are then combined and compared with another nonlinear approach based on sparsity regularization and a fixed-point algorithm. The performance of these algorithms is evaluated on simulated data for different noise levels. Finally, the algorithms were adapted to process real data sets obtained in phase CT at the ESRF in Grenoble.
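A generic Landweber-type iteration of the kind described in this abstract takes the following shape. This Python sketch is schematic: `F` (Fresnel propagation to intensity) and `dF_adj` (the adjoint of the Fréchet derivative at the current iterate) are hypothetical callables that a real implementation would provide from the analytic expressions derived in the thesis.

import numpy as np

def landweber(F, dF_adj, y, phi0, tau=0.5, n_iter=100, project=None):
    # Solve F(phi) = y iteratively: phi <- phi - tau * F'(phi)^* (F(phi) - y).
    phi = phi0.copy()
    for _ in range(n_iter):
        r = F(phi) - y                  # intensity residual
        phi -= tau * dF_adj(phi, r)     # gradient-like update
        if project is not None:
            phi = project(phi)          # optional constraint projection
    return phi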
49

Seppecher, Laurent. "Modélisation de l'imagerie biomédicale hybride par perturbations mécaniques." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-01021279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we introduce and develop an original mathematical approach to so-called "hybrid" biomedical imaging techniques. The idea is to apply an ill-posed imaging method while perturbing the medium to be imaged with mechanical displacements. These displacements, governed by an elastic wave equation, perturb the measurements. Using these perturbed measurements, and taking advantage of the local character of the mechanical perturbations, it is possible to increase considerably the resolution of the underlying method. The direct problem is therefore a coupling of one PDE describing the propagation used by the underlying method with a second one describing the mechanical displacement fields. Throughout this thesis, we assume a mechanically homogeneous medium in order to guarantee the control and the geometry of the perturbing waves. From the perturbed measurements, an interpretation step builds an internal datum in the considered domain. This step generally requires the inversion of integral geometric operators of Radon type, in order to exploit the localizing character of the perturbations. From this internal datum, a reconstruction procedure for the sought physical parameter can be initiated. Chapter 1 deals with a coupling between microwaves and spherical perturbations. In chapters 2, 3 and 4, we study diffuse optical imaging, again coupled with spherical perturbations. Finally, in chapter 5, we give an original method for reconstructing the electrical conductivity through a coupling between magnetic fields and focused acoustic perturbations.
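The "interpretation step" above inverts Radon-type operators to turn perturbed measurements into an internal datum. As a loose stand-in, the following Python sketch (assuming scikit-image) uses the classical straight-line Radon transform and its inverse in place of the spherical-mean operators studied in the thesis; the phantom and sizes are illustrative only.

import numpy as np
from skimage.transform import radon, iradon

# Hypothetical internal quantity to be recovered (a simple block phantom).
truth = np.zeros((128, 128))
truth[48:80, 40:72] = 1.0
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
projections = radon(truth, theta=angles, circle=False)    # Radon-type data
internal_datum = iradon(projections, theta=angles, circle=False)  # inversion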
50

Hadj-Youcef, Mohamed Elamine. "Spatio spectral reconstruction from low resolution multispectral data : application to the Mid-Infrared instrument of the James Webb Space Telescope." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS326/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with an inverse problem in astronomy. The objective is to reconstruct a spatio-spectral object, having spatial and spectral distributions, from a set of low-resolution multispectral data taken by the imager MIRI (Mid-InfraRed Instrument) on board the next space telescope, the James Webb Space Telescope (JWST). The observed multispectral data suffer from a spatial blur that varies with wavelength, due to spatial convolution with a wavelength-dependent optical response (PSF). In addition, the multispectral data suffer from severe spectral degradations because of the spectral filtering and the integration by the detector over broad bands. The reconstruction of the original object is an ill-posed problem because of the severe lack of spectral information in the multispectral dataset. The difficulty then lies in choosing a representation of the object that allows the reconstruction of this spectral information. A common model used so far considers a spectrally shift-invariant PSF per band, which neglects the spectral variation of the PSF. This simplistic model is only suitable for instruments with a narrow spectral band, which is not the case for the imager of MIRI. Our approach consists of developing an inverse problem framework summarized in four steps: (1) designing an instrument model that reproduces the observed multispectral data, (2) proposing an adapted model to represent the sought object, (3) exploiting all the multispectral data jointly, and finally (4) developing a reconstruction method based on regularization, enforcing prior information on the solution. The reconstruction results obtained on nine simulated multispectral images of the JWST/MIRI imager show a significant increase in the spatial and spectral resolutions of the reconstructed object compared to conventional methods. The reconstructed object shows a clear denoising and deconvolution of the multispectral data. We obtained a relative error below 5% at 30 dB, and an execution time of 1 second for the l₂-norm algorithm and 20 seconds (with 50 iterations) for the l₂/l₁-norm algorithm; this is 10 times faster than the iterative solution computed by conjugate gradients.
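Step (4) above, in its l₂ (Tikhonov) form, can be sketched as minimizing ||Hx - y||² + μ||Dx||² jointly over all bands. The NumPy sketch below solves the normal equations with a generic conjugate-gradient loop; `H`/`Ht` (band-wise instrument model and adjoint) and `D`/`Dt` (regularization operator and adjoint) are hypothetical callables, and CG is used here only as a baseline, not as the faster solver reported in the thesis.

import numpy as np

def solve_l2(y, H, Ht, D, Dt, x0, mu=1e-2, n_iter=50):
    # Conjugate gradient on the normal equations (H^T H + mu D^T D) x = H^T y.
    A = lambda x: Ht(H(x)) + mu * Dt(D(x))
    x = x0.copy()
    r = Ht(y) - A(x)          # initial residual
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x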
