
Dissertations / Theses on the topic 'Inverse Problems in Imaging'



Consult the top 50 dissertations / theses for your research on the topic 'Inverse Problems in Imaging.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Leung, Wun Ying Valerie. "Inverse problems in astronomical and general imaging." Thesis, University of Canterbury. Electrical and Computer Engineering, 2002. http://hdl.handle.net/10092/7513.

Full text
Abstract:
The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes contain an infinite amount of information, while only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution of many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and numerous techniques exist for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. The process of interpolation is also examined from the point of view of superresolution, as the two processes can be viewed as complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most inverse-problem techniques devised for it seek either to estimate or to compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF also depends on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere; an investigation shows the feasibility of this approach. The bispectrum method is another post-processing technique, which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only the information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel.
However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. An additional advantage of this approach is that, since speckle images are recorded in a narrow band while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One method which utilises elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the loss of valuable observing time. In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object spectrum.
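As an editorial illustration of the spot-location step discussed above, here is a minimal sketch (not from the thesis) of the centre-of-gravity locator that the abstract identifies as noise-prone; the threshold and test values are assumptions for the demonstration.

```python
# Centre-of-gravity spot location for a Shack-Hartmann-style spot image.
import numpy as np

def centroid_spot(image, threshold=0.0):
    """Estimate a spot location as the intensity-weighted centre of gravity.

    Background pixels below `threshold` are zeroed first; residual noise
    still biases the estimate, which is the weakness the thesis addresses
    with a blind-deconvolution-based locator.
    """
    img = np.where(image > threshold, image, 0.0)
    total = img.sum()
    if total == 0:
        raise ValueError("no signal above threshold")
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total

# Usage: a noisy Gaussian spot centred at (12.3, 8.7) on a 32x32 grid.
rng = np.random.default_rng(0)
ys, xs = np.indices((32, 32))
spot = np.exp(-((ys - 12.3) ** 2 + (xs - 8.7) ** 2) / 4.0)
noisy = spot + 0.05 * rng.standard_normal(spot.shape)
print(centroid_spot(noisy, threshold=0.1))   # close to (12.3, 8.7)
```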
APA, Harvard, Vancouver, ISO, and other styles
2

Szasz, Teodora. "Advanced beamforming techniques in ultrasound imaging and the associated inverse problems." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30221/document.

Full text
Abstract:
Ultrasound (US) allows non-invasive and ultra-high frame rate imaging procedures at reduced costs. Cardiac, abdominal, fetal, and breast imaging are some of the applications where it is extensively used as a diagnostic tool. In a classical US scanning process, short acoustic pulses are transmitted through the region of interest of the human body. The backscattered echo signals are then beamformed to create radiofrequency (RF) lines. Beamforming (BF) plays a key role in US image formation, influencing the resolution and the contrast of the final image. The objective of this thesis is to model BF as an inverse problem, relating the raw channel data to the signals to be recovered. The proposed BF framework improves the contrast and the spatial resolution of US images compared with existing BF methods. To begin with, we investigated the existing BF methods in medical US imaging. We briefly review the most common BF techniques, starting with the standard delay-and-sum method and moving to the best-known adaptive BF techniques, such as minimum variance BF. Afterwards, we investigated the use of sparse priors in creating original two-dimensional beamforming methods for ultrasound imaging. The proposed approaches detect the strong reflectors of the scanned medium based on the well-known Bayesian Information Criterion used in statistical modeling. Furthermore, we propose a new way of addressing BF in US imaging, by formulating it as a linear inverse problem relating the reflected echoes to the signal to be recovered. Our approach offers flexibility in the choice of statistical assumptions on the signal to be beamformed and is robust to a reduced number of pulse emissions. At the end of this research, we investigated the use of the non-Gaussianity properties of the RF signals in the BF process, by assuming alpha-stable statistics of US images.
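For readers unfamiliar with the delay-and-sum baseline the thesis starts from, the following is a minimal sketch under assumed geometry (plane-wave transmit, linear array, illustrative sampling rate and sound speed); it is not the thesis' inverse-problem beamformer.

```python
# Delay-and-sum (DAS) beamforming of raw RF channel data.
import numpy as np

def das_beamform(channel_data, element_x, points, fs=40e6, c=1540.0):
    """Sum per-channel samples at the round-trip-corrected delays.

    channel_data: (n_elements, n_samples) RF data, assumed to start at the
    transmit instant of a plane wave travelling along +z.
    points: (n_points, 2) array of (x, z) image points in metres.
    """
    n_el, n_samp = channel_data.shape
    out = np.zeros(len(points))
    for k, (x, z) in enumerate(points):
        tx_delay = z / c                                  # plane-wave path
        rx_delay = np.sqrt((element_x - x) ** 2 + z ** 2) / c
        idx = np.round((tx_delay + rx_delay) * fs).astype(int)
        valid = idx < n_samp
        out[k] = channel_data[np.arange(n_el)[valid], idx[valid]].sum()
    return out

# Usage with synthetic data: 64 elements at 0.3 mm pitch, random RF.
rng = np.random.default_rng(1)
elem_x = (np.arange(64) - 31.5) * 0.3e-3
rf = rng.standard_normal((64, 2048))
pts = np.array([[0.0, 20e-3], [2e-3, 25e-3]])
print(das_beamform(rf, elem_x, pts))
```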
APA, Harvard, Vancouver, ISO, and other styles
3

Gregson, James. "Applications of inverse problems in fluids and imaging." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54081.

Full text
Abstract:
Three applications of inverse problems relating to fluid imaging and image deblurring are presented. The first two, tomographic reconstruction of dye concentration fields from multi-view video and deblurring of photographs, are addressed by a stochastic optimization scheme that allows a wide variety of priors to be incorporated into the reconstruction process within a straightforward framework. The third, estimation of fluid velocities from volumetric dye concentration fields, highlights a previously unexplored connection between fluid simulation and proximal algorithms from convex optimization. This connection allows several classical imaging inverse problems to be investigated in the context of fluids, including optical flow, denoising and deconvolution. The connection also allows inverse problems to be incorporated into fluid simulation for the purposes of physically-based regularization of optical flow and for stylistic modifications of fluid captures. Through both methods and all three applications the importance of incorporating domain-specific priors into inverse problems for fluids and imaging is highlighted.
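The proximal connection mentioned above can be illustrated with a short sketch: proximal-gradient (ISTA) deblurring with an L1 prior, where the prior enters only through its proximal operator. The Gaussian blur model, step size and sparsity weight are illustrative assumptions, not choices from the thesis.

```python
# Proximal-gradient (ISTA) deconvolution with a sparsity prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def ista_deblur(b, sigma=2.0, lam=1e-3, step=1.0, iters=100):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 with A = Gaussian blur.

    The gradient step uses A^T = A (Gaussian blur is self-adjoint); the
    proximal step for the L1 prior is soft-thresholding.
    """
    A = lambda x: gaussian_filter(x, sigma)
    x = np.zeros_like(b)
    for _ in range(iters):
        grad = A(A(x) - b)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Usage: recover a sparse spike image from its blurred observation.
truth = np.zeros((64, 64)); truth[20, 20] = truth[40, 45] = 1.0
b = gaussian_filter(truth, 2.0)
x_hat = ista_deblur(b)
print(float(x_hat[20, 20]), float(x_hat.max()))
```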
APA, Harvard, Vancouver, ISO, and other styles
4

Lecharlier, Loïc. "Blind inverse imaging with positivity constraints." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209240.

Full text
Abstract:
In inverse problems in imaging, the operator or matrix describing the image formation system is generally assumed to be known; equivalently, for a linear system, its impulse response is assumed known. However, this is not a realistic assumption for many practical applications in which this operator is in fact unknown, or only approximately known. One then faces a so-called "blind" inversion problem. In the case of translation-invariant systems, one speaks of "blind deconvolution", since both the original image or object and the impulse response must be estimated from the single observed image, which results from a convolution and is affected by measurement errors. This problem is notoriously difficult, and to overcome the ambiguities and numerical instabilities inherent in this type of inversion, additional information or constraints must be used, such as positivity, which has proved to be a powerful stabilizing lever in non-blind imaging problems. The thesis proposes new blind inversion algorithms in a discrete or discretized setting, assuming that the unknown image, the matrix to be inverted and the data are positive. The problem is formulated as a (non-convex) optimization problem in which the data-fidelity term to be minimized, modelling either Poisson-type data (Kullback-Leibler divergence) or data corrupted by Gaussian noise (least squares), is augmented by penalty terms on the unknowns of the problem. The optimization strategy consists of alternating multiplicative updates of the image to be reconstructed and of the matrix to be inverted, which result from the minimization of surrogate cost functions valid in the positive case. The fairly general framework allows several types of penalties to be used, including the (smoothed) total variation of the image. An optional normalization of the impulse response or of the matrix is also applied at each iteration. Convergence results for these algorithms are established in the thesis, both for the decrease of the cost functions and for the convergence of the sequence of iterates to a stationary point. The proposed methodology is successfully validated by numerical simulations on various applications, such as blind deconvolution of astronomical images, non-negative matrix factorization for hyperspectral imaging, and blind deconvolution of densities in statistics.
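A minimal sketch of the kind of positivity-preserving multiplicative update the abstract describes, here the textbook NMF-style update for the Kullback-Leibler fidelity without the thesis' penalty terms or convergence machinery; all sizes and initialisations are illustrative.

```python
# Alternating multiplicative updates for the positive blind problem b ~ A x
# under the Kullback-Leibler fidelity, with column normalization of A.
import numpy as np

def blind_multiplicative_kl(b, m, n, iters=500, eps=1e-12):
    """Estimate positive A (m x n) and x (n,) from positive data b (m,)."""
    rng = np.random.default_rng(0)
    A = rng.random((m, n)) + 0.1
    x = rng.random(n) + 0.1
    for _ in range(iters):
        ratio = b / (A @ x + eps)
        x *= (A.T @ ratio) / (A.sum(axis=0) + eps)    # KL update for x
        ratio = b / (A @ x + eps)
        A *= np.outer(ratio, x) / (x.sum() + eps)     # KL update for A
        A /= A.sum(axis=0, keepdims=True) + eps       # column normalization
    return A, x

# Usage: a small synthetic positive system.
rng = np.random.default_rng(3)
A_true = rng.random((20, 5)); A_true /= A_true.sum(axis=0)
x_true = rng.random(5) * 10
A_hat, x_hat = blind_multiplicative_kl(A_true @ x_true, 20, 5)
print(np.linalg.norm(A_hat @ x_hat - A_true @ x_true))  # small residual
```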
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Wenlong. "Forward and Inverse Problems Under Uncertainty." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE024/document.

Full text
Abstract:
This thesis contains two different subjects. In the first part, two cases are considered: one is the thin plate spline smoother model, and the other is elliptic boundary equations with uncertain boundary data. In this part, stochastic convergence of the finite element methods is proved for each problem. In the second part, we provide a mathematical analysis of the linearized inverse problem in multifrequency electrical impedance tomography. We present a mathematical and numerical framework for a procedure of imaging the anisotropic electrical conductivity tensor using a novel technique called Diffusion Tensor Magneto-acoustography, and propose an optimal control approach for reconstructing the cross-property factor relating the diffusion tensor to the anisotropic electrical conductivity tensor. We prove convergence and Lipschitz-type stability of the algorithm and present numerical examples to illustrate its accuracy. A cell model for electropermeabilization is demonstrated. We study effective parameters in a homogenization model, and demonstrate numerically the sensitivity of these effective parameters to the critical microscopic parameters governing electropermeabilization.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhu, Sha. "A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00844748.

Full text
Abstract:
Synthetic Aperture Radar (SAR) imaging is a well-known technique in the domains of remote sensing, aerospace surveillance, geography and mapping. To obtain high-resolution images under noise, it becomes very important to take into account the characteristics of the targets in the observed scene, the various measurement uncertainties and the modeling errors. Conventional imaging methods are based on i) over-simplified scene models, ii) a simplified linear forward model (the mathematical relations between the transmitted signals, the received signals and the targets) and iii) a very simplified Inverse Fast Fourier Transform (IFFT) for the inversion, resulting in low-resolution, noisy images with unsuppressed speckle and high sidelobe artifacts. In this thesis, we propose a Bayesian approach to SAR imaging, which overcomes many drawbacks of the classical methods and yields higher resolution, more stable images and more accurate parameter estimation for target recognition. The proposed unifying approach is used for inverse problems in mono-, bi- and multi-static SAR imaging, as well as for micromotion target imaging. Appropriate priors for modeling different target scenes, in terms of target feature enhancement during imaging, are proposed. Fast and effective estimation methods with simple and hierarchical priors are developed. The problem of hyperparameter estimation is also handled within this Bayesian framework. Results on synthetic, experimental and real data demonstrate the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
7

Alfowzan, Mohammed Fowzan, and Mohammed Fowzan Alfowzan. "Solutions to Space-Time Inverse Problems." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621791.

Full text
Abstract:
Two inverse problems are investigated in this dissertation, taking into account both the spatial and temporal aspects. The first problem addresses the underdetermined image reconstruction problem for dynamic SPECT. The quality of the reconstructed image is often limited by having fewer observations than the number of voxels. The proposed algorithms make use of the generalized α-divergence function to improve the estimation performance. The first algorithm is based on an alternating minimization framework to minimize a regularized α-divergence objective function. We demonstrate that selecting an adaptive α policy depending on the time evolution of the voxels gives better performance than a fixed α assignment. The second algorithm is based on Newton's method. A regularized approach has been taken to avoid stability issues. Newton's method is generally computationally demanding due to the complexity associated with inverting the Hessian matrix. A fast Newton-based method is proposed using majorization-minimization techniques that diagonalize the Hessian matrix. In dynamically evolving systems, the prediction matrix plays an important role in the estimation process. An estimation technique is proposed to estimate the prediction matrix using the α-divergence function. The simulation results show that our algorithms provide better performance than techniques based on the Kullback-Leibler distance. The second problem is the recovery of data transmitted over free-space optical communication channels using orbital angular momentum (OAM). In the presence of atmospheric turbulence, crosstalk occurs among OAM optical modes, resulting in an error floor at a relatively high bit error rate. The modulation format considered for the underlying problem is Q-ary pulse position modulation (PPM). We propose and evaluate three joint detection strategies to overcome the OAM crosstalk problem: i) maximum likelihood sequence estimation (MLSE); ii) Q-PPM factor graph detection; and iii) branch-and-bound detection. We compare the complexity and the bit-error-rate performance of these strategies in realistic scenarios.
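For reference, one common form of the generalized α-divergence (the Cichocki-Amari convention) is sketched below; the dissertation may use a different normalisation, so the constants should be treated as an assumption.

```python
# Generalized alpha-divergence with its KL limits at alpha -> 0 and 1.
import numpy as np

def alpha_divergence(p, q, alpha, eps=1e-12):
    """D_a(p||q) = sum[ p^a q^(1-a) - a p + (a-1) q ] / (a(a-1)).

    Reduces to generalized KL(p||q) as alpha -> 1 and KL(q||p) as alpha -> 0.
    """
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    if abs(alpha - 1.0) < 1e-8:
        return float(np.sum(p * np.log(p / q) - p + q))      # KL(p||q)
    if abs(alpha) < 1e-8:
        return float(np.sum(q * np.log(q / p) - q + p))      # KL(q||p)
    term = p**alpha * q**(1 - alpha) - alpha * p + (alpha - 1) * q
    return float(np.sum(term) / (alpha * (alpha - 1)))

p = np.array([0.4, 0.6]); q = np.array([0.5, 0.5])
print(alpha_divergence(p, q, 0.5), alpha_divergence(p, q, 1.0))
```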
APA, Harvard, Vancouver, ISO, and other styles
8

Rückert, Nadja. "Studies on two specific inverse problems from imaging and finance." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-91587.

Full text
Abstract:
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson distributed data, in particular for the reconstruction of images, as well as with the identification of the volatility surface from observed option prices. In Part I we examine the choice of the regularization parameter when reconstructing an image, disturbed by Poisson noise, with Tikhonov-type regularization. This type of regularization is a generalization of classical Tikhonov regularization to the Banach space setting and is often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine the methods for choosing the regularization parameter numerically on the basis of two test images and real PET data. In Part II we consider the estimation of the volatility function from observed call option prices via the explicit formula derived by Dupire from the Black-Scholes partial differential equation. The option prices are only available as discrete noisy observations, so the main difficulty is the ill-posedness of the numerical differentiation. Finite difference schemes, as regularization by discretization of the inverse and ill-posed problem, do not overcome these difficulties when they are used to evaluate the partial derivatives. We therefore construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite difference schemes and the new algorithm for synthetic and real option prices.
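The ill-posedness of numerical differentiation mentioned above is easy to demonstrate with a tiny sketch (illustrative noise level and step size): finite differences amplify observation noise by roughly 1/h.

```python
# Finite differences of noisy data: noise is amplified by ~1/h.
import numpy as np

h = 1e-2
x = np.arange(0.0, 1.0, h)
f = np.sin(x)                               # stand-in for smooth prices
noise = 1e-3 * np.random.default_rng(4).standard_normal(x.size)

d_clean = np.gradient(f, h)
d_noisy = np.gradient(f + noise, h)
print(np.abs(d_clean - np.cos(x)).max())    # small discretisation error
print(np.abs(d_noisy - np.cos(x)).max())    # noise blown up by ~1/h
```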
APA, Harvard, Vancouver, ISO, and other styles
9

Som, Subhojit. "Topics in Sparse Inverse Problems and Electron Paramagnetic Resonance Imaging." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1282135281.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zamanian, Sam Ahmad. "Hierarchical Bayesian approaches to seismic imaging and other geophysical inverse problems." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92970.

Full text
Abstract:
In many geophysical inverse problems, smoothness assumptions on the underlying geologic model are utilized to mitigate the effects of poor data coverage and observational noise and to improve the quality of the inferred model parameters. In the context of Bayesian inference, these smoothness assumptions take the form of a prior distribution on the model parameters. Conventionally, the regularization parameters defining these assumptions are fixed independently from the data or tuned in an ad hoc manner. However, it is often the case that the smoothness properties of the true earth model are not known a priori, and furthermore, these properties may vary spatially. In the seismic imaging problem, for example, where the objective is to estimate the earth's reflectivity, the reflectivity model is smooth along a particular reflector but exhibits a sharp contrast in the direction orthogonal to the reflector. In such cases, defining a prior using predefined smoothness assumptions may result in posterior estimates of the model that incorrectly smooth out these sharp contrasts. In this thesis, we explore the application of Bayesian inference to different geophysical inverse problems and seek to address issues related to smoothing by appealing to the hierarchical Bayesian framework. We capture the smoothness properties of the prior distribution on the model by defining a Markov random field (MRF) on the set of model parameters and assigning weights to the edges of the underlying graph; we refer to these parameters as the edge strengths of the MRF. We investigate two cases where the smoothing is specified a priori and introduce a method for estimating the edge strengths of the MRF. In the first part of this thesis, we apply a Bayesian inference framework (where the edge strengths of the MRF are predetermined) to the problem of characterizing the fractured nature of a reservoir from seismic data. Our methodology combines different features of the seismic data, particularly P-wave reflection amplitudes and scattering attributes, to allow for estimation of fracture properties under a larger physical regime than would be attainable using only one of these data types. Through this application, we demonstrate the capability of our parameterization of the prior distribution with edge strengths to both enforce smoothness in the estimates of the fracture properties and capture a priori information about geological features in the model (such as a discontinuity that may arise in the presence of a fault). We solve the inference problem via loopy belief propagation to approximate the posterior marginal distributions of the fracture properties, as well as their maximum a posteriori (MAP) and Bayes least squares estimates. In the second part of the thesis, we investigate how the parameters defining the prior distribution are connected to the model covariance and address the question of how to optimize these parameters in the context of the seismic imaging problem. We formulate the seismic imaging problem within the hierarchical Bayesian setting, where the edge strengths are treated as random variables to be inferred from the data, and provide a framework for computing the marginal MAP estimate of the edge strengths by application of the expectation-maximization (E-M) algorithm. We validate our methodology on synthetic datasets arising from 2-D models. The images we obtain after inferring the edge strengths exhibit the desired spatially-varying smoothness properties and yield sharper, more coherent reflectors. 
In the final part of the thesis, we shift our focus and consider the problem of time-lapse seismic processing, where the objective is to detect changes in the subsurface over a period of time using repeated seismic surveys. We focus on the realistic case where the surveys are taken with differing acquisition geometries. In such situations, conventional methods for processing time-lapse data involve inverting surveys separately and subtracting the inversion models to estimate the change in model parameters; however, such methods often perform poorly as they do not correctly account for differing model uncertainty between surveys due to differences in illumination and observational noise. Applying the machinery explored in the previous chapters, we formulate the time-lapse processing problem within the hierarchical Bayesian setting and present a framework for computing the marginal MAP estimate of the time-lapse change model using the E-M algorithm. The results of our inference framework are validated on synthetic data from a 2-D time-lapse seismic imaging example, where the hierarchical Bayesian estimates significantly outperform conventional time-lapse inversion results.
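A minimal sketch of the role the edge strengths play: they weight a graph Laplacian defining a Gaussian MRF prior, and lowering one edge weight lets the MAP estimate keep a sharp contrast. A 1-D chain stands in for the 2-D model grid, and all weights are illustrative assumptions.

```python
# Gaussian MRF prior with per-edge strengths; MAP = regularised solve.
import numpy as np

def map_with_edge_strengths(A, b, weights, noise_var=1e-2):
    """weights[i] couples parameters i and i+1; small weight = allowed jump."""
    n = A.shape[1]
    L = np.zeros((n, n))
    for i, w in enumerate(weights):            # weighted chain Laplacian
        L[i, i] += w; L[i + 1, i + 1] += w
        L[i, i + 1] -= w; L[i + 1, i] -= w
    return np.linalg.solve(A.T @ A / noise_var + L, A.T @ b / noise_var)

rng = np.random.default_rng(5)
x_true = np.r_[np.zeros(10), np.ones(10)]      # sharp contrast at index 10
A = rng.standard_normal((15, 20))              # underdetermined observations
b = A @ x_true + 0.1 * rng.standard_normal(15)
uniform = map_with_edge_strengths(A, b, np.full(19, 10.0))
adapted = map_with_edge_strengths(A, b, np.r_[np.full(9, 10.0), 0.01, np.full(9, 10.0)])
# Weakening the edge at the contrast typically recovers the jump better.
print(np.abs(uniform - x_true).max(), np.abs(adapted - x_true).max())
```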
APA, Harvard, Vancouver, ISO, and other styles
11

Bhandari, Ayush. "Inverse problems in time-of-flight imaging : theory, algorithms and applications." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95867.

Full text
Abstract:
Time-of-Flight (ToF) cameras utilize a combination of phase and amplitude information to return real-time, three-dimensional information of a scene in the form of depth images. Such cameras have a number of scientific and consumer-oriented applications. In this work, we formalize a mathematical framework that leads to a unifying perspective for tackling inverse problems that arise in the ToF imaging context. Starting from first principles, we discuss the implications of time- and frequency-domain sensing of a scene. From a linear systems perspective, this amounts to an operator sampling problem where the operator depends on the physical parameters of the scene or the bio-sample being investigated. Having presented some examples of inverse problems, we discuss detailed solutions that benefit from scene-based priors such as sparsity and rank constraints. Our theory is corroborated by experiments performed using ToF/Kinect cameras. Applications of this work include multi-bounce light decomposition, ultrafast imaging and fluorophore lifetime estimation.
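As an illustration of the phase/amplitude principle the abstract opens with, here is a minimal continuous-wave ToF sketch; the four-bucket sampling scheme and sign conventions vary between devices and are assumed here, not taken from the thesis.

```python
# Depth and amplitude from four correlation samples in continuous-wave ToF.
import numpy as np

C = 299792458.0                          # speed of light, m/s

def tof_depth(samples, f_mod=30e6):
    """Depth and amplitude from samples at 0/90/180/270 degree shifts."""
    c0, c1, c2, c3 = samples
    phase = np.arctan2(c1 - c3, c0 - c2) % (2 * np.pi)
    amplitude = 0.5 * np.hypot(c1 - c3, c0 - c2)
    return C * phase / (4 * np.pi * f_mod), amplitude   # round trip halved

# Usage: synthesise the four buckets for a target at 2.5 m.
true_depth = 2.5
phi = 4 * np.pi * 30e6 * true_depth / C
buckets = [np.cos(phi - k * np.pi / 2) for k in range(4)]
print(tof_depth(buckets))                # ~ (2.5, 1.0)
```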
APA, Harvard, Vancouver, ISO, and other styles
12

Yin, Ke. "New algorithms for solving inverse source problems in imaging techniques with applications in fluorescence tomography." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48945.

Full text
Abstract:
This thesis is devoted to solving the inverse source problem arising in image reconstruction problems. In general, the solution is non-unique and the problem is severely ill-posed. Therefore, small perturbations, such as noise in the data and modeling error in the forward problem, cause huge errors in the computations. In practice, the most widely used methods for tackling the problem are based on Tikhonov-type regularization, which minimizes a cost function combining a regularization term and a data fitting term. However, because the two tasks, namely regularization and data fitting, are coupled together in Tikhonov regularization, they are difficult to solve, even when each task can be solved efficiently on its own. We propose a method that overcomes the major difficulties, namely the non-uniqueness of the solution and noisy data fitting, separately. First we find a particular solution, called the orthogonal solution, that satisfies the data fitting term. Then we add to it a correction function in the kernel space so that the final solution fulfills the regularization and other physical requirements. The key idea is that the correction function in the kernel has no impact on the data fitting, and the regularization is imposed in a smaller space. Moreover, no parameter is needed to balance the data fitting and regularization terms. As a case study, we apply the proposed method to Fluorescence Tomography (FT), an emerging imaging technique well known for its ill-posedness and the low image resolution of existing reconstruction techniques. We demonstrate by theory and examples that the proposed algorithm can drastically improve the computation speed and the image resolution over existing methods.
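The two-step idea above has a clean finite-dimensional analogue, sketched below: the minimum-norm solution fixes the data fit, and a null-space correction improves regularity without touching it. The discrete-gradient objective is an illustrative stand-in for the thesis' regularization.

```python
# Data-fitting particular solution plus a correction in the kernel of A.
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 20))        # underdetermined forward model
b = rng.standard_normal(8)

A_pinv = np.linalg.pinv(A)
x0 = A_pinv @ b                         # particular, data-fitting part
P_ker = np.eye(20) - A_pinv @ A         # projector onto ker(A)

# Pick the kernel correction minimising a discrete-gradient seminorm.
D = np.diff(np.eye(20), axis=0)         # first-difference operator
z, *_ = np.linalg.lstsq(D @ P_ker, -D @ x0, rcond=None)
x = x0 + P_ker @ z

print(np.linalg.norm(A @ x - b), np.linalg.norm(A @ x0 - b))  # equal fits
print(np.linalg.norm(D @ x), np.linalg.norm(D @ x0))          # smoother x
```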
APA, Harvard, Vancouver, ISO, and other styles
13

Javanmard, Mehdi. "Inverse problem approach to ultrasound medical imaging." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0001/NQ31933.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Wintz, Timothée. "Super-resolution in wave imaging." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE052/document.

Full text
Abstract:
Different modalities in wave imaging each present limitations in terms of resolution or contrast. In this work, we present a mathematical model of the ultrafast ultrasound imaging modality and reconstruction methods which can improve contrast and resolution in ultrasonic imaging. We introduce two methods which improve contrast and locate blood vessels below the diffraction limit while simultaneously estimating the blood velocity. We also present a reconstruction method in electrical impedance tomography which allows the reconstruction of microscopic parameters from multi-frequency measurements using the theory of homogenization.
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Xiaobei. "Instrumentation and inverse problem solving for impedance imaging /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Hugelier, Siewert. "Approaches to inverse problems in chemical imaging : applications in super-resolution and spectral unmixing." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10144/document.

Full text
Abstract:
Besides chemical information, chemical imaging also offers insights into the spatial distribution of the samples. Within this thesis, we distinguish between two different types of images: spatial-temporal images (super-resolution fluorescence microscopy) and spatial-spectral images (unmixing). In early super-resolution fluorescence microscopy, a low number of fluorophores were active per image. Currently, the field is evolving towards high-density imaging, which requires new ways of analysis. We propose SPIDER, an image deconvolution approach with multiple penalties. These penalties directly translate the properties of the blinking emitters used in super-resolution fluorescence microscopy imaging. SPIDER allows investigating highly dynamic structural and morphological changes in biological samples with a high fluorophore density. We applied the method to live-cell imaging of a HEK-293T cell labeled with DAKAP-Dronpa and demonstrated a spatial resolution down to 55 nm and a time sampling of 0.5 s. Unmixing hyperspectral images with MCR-ALS provides spatial and spectral information on the individual contributions in the mixture. Because the pixel neighborhood is lost when the hyperspectral data cube is unfolded into a two-way matrix, spatial information cannot be added as a constraint during the analysis. We therefore propose an alternative approach in which an additional refolding/unfolding step is performed in each iteration. This data manipulation allows global spatial features to be added to the palette of MCR-ALS constraints. From this idea, we also developed several constraints and show their application to experimental data.
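The refolding/unfolding step is, computationally, just a reshape around the spatial operation; a minimal sketch follows, with a median filter as an illustrative stand-in for the thesis' spatial constraints.

```python
# Refold concentration maps to the image grid, filter, unfold for ALS.
import numpy as np
from scipy.ndimage import median_filter

ny, nx, k = 32, 32, 3                            # image grid, species
rng = np.random.default_rng(7)
C = np.abs(rng.standard_normal((ny * nx, k)))    # unfolded concentrations

def spatial_constraint(C_unfolded):
    maps = C_unfolded.reshape(ny, nx, k).copy()  # refold to image cube
    for j in range(k):                           # apply a spatial operation
        maps[:, :, j] = median_filter(maps[:, :, j], size=3)
    return maps.reshape(ny * nx, k)              # unfold for the ALS step

C_constrained = spatial_constraint(C)
print(C_constrained.shape)                       # (1024, 3)
```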
APA, Harvard, Vancouver, ISO, and other styles
17

Wei, Hsin-Yu. "Magnetic induction tomography for medical and industrial imaging : hardware and software development." Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558901.

Full text
Abstract:
The main topics of this dissertation are the hardware and software developments in magnetic induction tomography imaging techniques. In the hardware sections, all the tomography systems developed by the author are presented and discussed in detail. The developed systems can be divided into two categories according to the property of the target imaging materials: high-conductivity materials and low-conductivity materials. Each system has its own suitable application, and each is thus tested under different circumstances. In terms of the software development, the forward and inverse problems have been studied, including eddy current problem modeling, the derivation of sensitivity map formulae, and the equations of iterative and non-iterative inverse solvers. The Biot-Savart law was implemented in the 'two-potential' method used in the eddy current model in order to improve the system's flexibility. Several magnetic induction tomography schemes are proposed for the first time in this field of research, their aim being to improve the spatial and temporal resolution of the final reconstructed images. These novel schemes usually involve modifications of the system hardware and of the forward/inverse calculations. For example, the rotational scheme can improve the ill-posedness and edge detectability of the system; the volumetric scheme can provide extra spatial resolution in the axial direction; and the temporal scheme can improve the temporal resolution by using the correlation between consecutive datasets. Volumetric imaging requires an intensive amount of extra computational resources. To overcome the issue of memory constraints when solving large-scale inverse problems, a matrix-free method is proposed, also for the first time in magnetic induction tomography. All the proposed algorithms are verified with experimental data obtained from suitable tomography systems developed by the author. Although magnetic induction tomography is a new imaging technique, it is believed to be well developed for real-life applications, and several potential applications are suggested. An initial proof-of-concept study for a challenging low-conductivity two-phase flow imaging process is provided. In this thesis, a range of contributions have been made to the field of magnetic induction tomography, which will help carry the research further.
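The matrix-free idea can be sketched briefly: an iterative solver needs only products with the sensitivity matrix and its transpose, so rows can be recomputed on demand instead of stored. The row generator below is an illustrative stand-in for the MIT sensitivity computation.

```python
# Matrix-free damped least-squares solve: the Jacobian is never stored.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

m, n = 500, 20000                        # measurements x voxels

def jacobian_row(i):
    """Recompute one sensitivity row on demand (deterministic stand-in)."""
    return np.random.default_rng(i).standard_normal(n) / np.sqrt(n)

J = LinearOperator(
    (m, n),
    matvec=lambda x: np.array([jacobian_row(i) @ x for i in range(m)]),
    rmatvec=lambda y: sum(y[i] * jacobian_row(i) for i in range(m)),
)

b = np.ones(m)                           # measured perturbation data (toy)
x = lsqr(J, b, damp=1e-2, iter_lim=5)[0] # damped (Tikhonov-like) solve
print(x.shape)                           # (20000,), without an m-by-n matrix
```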
APA, Harvard, Vancouver, ISO, and other styles
18

Guastavino, Sabrina. "Learning and inverse problems: from theory to solar physics applications." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/998315.

Full text
Abstract:
The problem of approximating a function from a set of discrete measurements has been extensively studied since the seventies. Our theoretical analysis proposes a formalization of the function approximation problem which allows dealing with inverse problems and supervised kernel learning as two sides of the same coin. The proposed formalization takes into account arbitrary noisy data (deterministically or statistically defined) and arbitrary loss functions (possibly seen as a log-likelihood), handling both direct and indirect measurements. The core idea of this part relies on the analogy between statistical learning and inverse problems. One of the main evidences of the connection between these two areas is that regularization methods, usually developed for ill-posed inverse problems, can be used for solving learning problems. Furthermore, the spectral regularization convergence rate analyses provided in the two areas share the same source conditions but are carried out with either an increasing number of samples in learning theory or a decreasing noise level in inverse problems. Even more generally, regularization via sparsity-enhancing methods is widely used in both areas, and well-known $\ell_1$-penalized methods can be applied for solving both learning and inverse problems. In the first part of the Thesis, we analyze this connection at three levels: (1) at an infinite-dimensional level, we define an abstract function approximation problem from which the two problems can be derived; (2) at a discrete level, we provide a unified formulation according to a suitable definition of sampling; and (3) at a convergence rates level, we provide a comparison between the convergence rates given in the two areas, by quantifying the relation between the noise level and the number of samples. In the second part of the Thesis, we focus on a specific class of problems where measurements are distributed according to a Poisson law. We provide a data-driven, asymptotically unbiased, and globally quadratic approximation of the Kullback-Leibler divergence, and we propose Lasso-type methods for solving sparse Poisson regression problems, named PRiL for Poisson Reweighted Lasso, together with an adaptive version of this method, named APRiL for Adaptive Poisson Reweighted Lasso, proving consistency properties in estimation and variable selection, respectively. Finally we consider two problems in solar physics: 1) the problem of forecasting solar flares (a learning application) and 2) the desaturation problem for solar flare images (an inverse problem application). The first application concerns the prediction of solar storms using images of the magnetic field on the sun, in particular physics-based features extracted from active regions in data provided by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). The second application concerns the reconstruction problem for Extreme Ultra-Violet (EUV) solar flare images recorded by a second instrument on board SDO, the Atmospheric Imaging Assembly (AIA). We propose a novel sparsity-enhancing method, SE-DESAT, to reconstruct images affected by saturation and diffraction without using any a priori estimate of the background solar activity.
APA, Harvard, Vancouver, ISO, and other styles
19

Burvall, Anna. "Axicon imaging by scalar diffraction theory." Doctoral thesis, KTH, Microelectronics and Information Technology, IMIT, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3736.

Full text
Abstract:

Axicons are optical elements that produce Bessel beams, i.e., long and narrow focal lines along the optical axis. The narrow focus makes them useful in e.g. alignment, harmonic generation, and atom trapping, and they are also used to increase the longitudinal range of applications such as triangulation, light sectioning, and optical coherence tomography. In this thesis, axicons are designed and characterized for different kinds of illumination, using the stationary-phase and the communication-modes methods.

The inverse problem of axicon design for partially coherent light is addressed. A design relation, applicable to Schell-model sources, is derived from the Fresnel diffraction integral, simplified by the method of stationary phase. This approach both clarifies the old design method for coherent light, which was derived using energy conservation in ray bundles, and extends it to the domain of partial coherence. The design rule applies to light from such multimode emitters as light-emitting diodes, excimer lasers and some laser diodes, which can be represented as Gaussian Schell-model sources.

Characterization of axicons in coherent, oblique illumination is performed using the method of stationary phase. It is shown that in inclined illumination the focal shape changes from the narrow Bessel distribution to a broad asteroid-shaped focus. It is proven that an axicon of elliptical shape will compensate for this deformation. These results, which are all confirmed both numerically and experimentally, open possibilities for using axicons in scanning optical systems to increase resolution and depth range.

Axicons are normally manufactured as refractive cones or as circular diffractive gratings. They can also be constructed from ordinary spherical surfaces, using the spherical aberration to create the long focal line. In this dissertation, a simple lens axicon consisting of a cemented doublet is designed, manufactured, and tested. The advantage of the lens axicon is that it is easily manufactured.

The longitudinal resolution of the axicon varies. The method of communication modes, earlier used for analysis of information content for e.g. line or square apertures, is applied to the axicon geometry and yields an expression for the longitudinal resolution. The method, which is based on a bi-orthogonal expansion of the Green function in the Fresnel diffraction integral, also gives the number of degrees of freedom, or the number of information channels available, for the axicon geometry.

Keywords: axicons, diffractive optics, coherence, asymptotic methods, communication modes, information content, inverse problems

APA, Harvard, Vancouver, ISO, and other styles
20

Camargo, Erick Darío León Bueno de. "Desenvolvimento de algoritmo de imagens absolutas de tomografia por impedância elétrica para uso clínico." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-26062014-205827/.

Full text
Abstract:
Electrical Impedance Tomography is a non-invasive imaging technique that can be used in clinical applications to infer living tissue impeditivity from boundary electrical measurements. Mathematically this is a non-linear, ill-posed inverse problem. Usually a spatial high-pass Gaussian filter is used as a regularization method for solving the inverse problem. The main objective of this work is to propose the use of physiological and anatomical priors on the tissue resistivity distribution within the thorax, also known as an anatomical atlas, in conjunction with the Gaussian filter as regularization methods. The proposed methodology employs the finite element method and the Gauss-Newton algorithm in order to reconstruct three-dimensional resistivity images. The Approximation Error Theory is used to reduce discretization effects and mesh size errors. Electrical impedance tomography data and computed tomography images of physiological pulmonary changes collected in vivo in a swine were used to validate the proposed method. The images obtained are compatible with atelectasis, pneumothorax, pleural effusion and different ventilation pressures during mechanical ventilation. The results show that image reconstruction from swine data with clinically significant information is feasible when both the Gaussian filter and the anatomical atlas are used as regularization methods.
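A minimal, generic sketch of a regularised Gauss-Newton step of the kind used here, with two stacked regularisers standing in for the Gaussian high-pass filter and the anatomical atlas; the linear toy forward model and all weights are illustrative assumptions, not the thesis' FEM model.

```python
# One regularised Gauss-Newton update for resistivities rho.
import numpy as np

def gauss_newton_step(rho, forward, jacobian, v_meas, R1, R2, lam1, lam2, rho_atlas):
    J = jacobian(rho)
    r = v_meas - forward(rho)
    H = J.T @ J + lam1 * R1.T @ R1 + lam2 * R2.T @ R2
    g = J.T @ r - lam1 * R1.T @ (R1 @ rho) - lam2 * R2.T @ (R2 @ (rho - rho_atlas))
    return rho + np.linalg.solve(H, g)

# Usage on a linear toy model (so the Jacobian is constant).
rng = np.random.default_rng(8)
n = 30
A = rng.standard_normal((20, n))
rho_true = 1.0 + rng.random(n)
v = A @ rho_true + 0.01 * rng.standard_normal(20)
R_hp = np.eye(n) - np.full((n, n), 1.0 / n)     # crude high-pass stand-in
rho = np.ones(n)
for _ in range(5):
    rho = gauss_newton_step(rho, lambda r: A @ r, lambda r: A, v,
                            R_hp, np.eye(n), 1e-2, 1e-2, np.ones(n))
print(np.linalg.norm(A @ rho - v))               # small data residual
```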
APA, Harvard, Vancouver, ISO, and other styles
21

Alberti, Giovanni S. "On local constraints and regularity of PDE in electromagnetics : applications to hybrid imaging inverse problems." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:1b30b3b7-29b1-410d-ae30-bd0a87c9720b.

Full text
Abstract:
The first contribution of this thesis is a new regularity theorem for time-harmonic Maxwell's equations with less than Lipschitz, complex, anisotropic coefficients. By using the Lp theory for elliptic equations, it is possible to prove H1 and Hölder regularity results, provided that the coefficients are W1,p for some p > 3. This improves previous regularity results, where the assumption W1,∞ for the coefficients was believed to be optimal. The method can easily be extended to the case of bi-anisotropic materials, for which a separate approach turns out to be unnecessary. The second focus of this work is the boundary control of the Helmholtz and Maxwell equations to enforce local constraints inside the domain. More precisely, we look for suitable boundary conditions such that the corresponding solutions and their derivatives satisfy certain local non-zero constraints. Complex geometric optics solutions can be used to construct such illuminations, but are impractical for several reasons. We propose a constructive approach to this problem based on the use of multiple frequencies. The suitable boundary conditions are explicitly constructed and give the desired constraints, provided that a finite number of frequencies, given a priori, are chosen in a fixed range. This method is based on the holomorphicity of the solutions with respect to the frequency and on the regularity theory for the PDE under consideration. This theory finds applications in several hybrid imaging inverse problems, where the unknown coefficients have to be imaged from internal measurements. In order to perform the reconstruction, we often need to find suitable boundary conditions such that the corresponding solutions satisfy certain non-zero constraints, depending on the particular problem under consideration. The multiple frequency approach introduced in this thesis represents a valid alternative to the use of complex geometric optics solutions to construct such boundary conditions. Several examples are discussed.
APA, Harvard, Vancouver, ISO, and other styles
22

Cao, Xiande. "Volume and Surface Integral Equations for Solving Forward and Inverse Scattering Problems." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/65.

Full text
Abstract:
In this dissertation, a hybrid volume and surface integral equation (VSIE) is used to solve scattering problems. It is implemented with RWG basis functions on the surface and edge basis functions in the volume. Numerical results show the correctness of the hybrid VSIE in inhomogeneous media. The MLFMM method is also implemented for the new VSIEs. Furthermore, a synthetic aperture radar imaging method is used for 2D microwave imaging of complex objects. With the mono-static and bi-static interpolation scheme, a 2D FFT is applied for the imaging, with the data simulated by the VSIE method. We then apply a background cancelling scheme to improve the imaging quality for the targets of interest. Numerical results show the feasibility of applying background cancelling in wider applications.
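Background cancelling in FFT-based imaging amounts to imaging the difference between total-field and background-only data; a toy sketch (with an assumed 2-D FFT data model) follows.

```python
# Background cancelling: image the difference of two acquisitions.
import numpy as np

rng = np.random.default_rng(10)
n = 64
target = np.zeros((n, n)); target[20, 40] = 0.5            # weak target
clutter = np.abs(rng.standard_normal((n, n))) * 0.3        # background scene

data_bg = np.fft.fft2(clutter)                 # background-only acquisition
data_total = np.fft.fft2(clutter + target)     # acquisition with target

img_naive = np.abs(np.fft.ifft2(data_total))
img_cancel = np.abs(np.fft.ifft2(data_total - data_bg))
print(np.unravel_index(img_naive.argmax(), img_naive.shape))   # likely clutter
print(np.unravel_index(img_cancel.argmax(), img_cancel.shape)) # (20, 40)
```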
APA, Harvard, Vancouver, ISO, and other styles
23

Kim, Yong Yook. "Inverse Problems In Structural Damage Identification, Structural Optimization, And Optical Medical Imaging Using Artificial Neural Networks." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/11111.

Full text
Abstract:
The objective of this work was to employ artificial neural networks (NN) to solve inverse problems in different engineering fields, overcoming various obstacles in applying NN to different problems and benefiting from the experience of solving different types of inverse problems. The inverse problems investigated are: 1) damage detection in structures, 2) detection of an anomaly in a light-diffusive medium, such as human tissue, using optical imaging, and 3) structural optimization of fiber optic sensor design. All of these problems require solving highly complex inverse problems, and the treatments benefit from employing neural networks, which have strengths in generalization, pattern recognition, and fault tolerance. Moreover, the neural networks for the three problems are similar, and a method found suitable for solving one type of problem can be applied to solving other types of problems. Solution of inverse problems using neural networks consists of two parts. The first is repeatedly solving the direct problem, obtaining the response of a system for known parameters and constructing the set of solutions to be used as training sets for the NN. The next step is training the neural networks so that the trained networks can produce the set of parameters of interest for a given response of the system. Mainly feed-forward backpropagation NNs were used in this work. One of the obstacles in applying artificial neural networks is the need to solve the direct problem repeatedly and generate a large enough number of training sets. To reduce the time required to solve the direct problems of structural dynamics and photon transport in opaque tissue, the finite element method was used. To solve transient problems, which include some of the problems addressed here and are computationally intensive, the modal superposition and modal acceleration methods were employed. The need for a large enough number of training sets required by the NN was fulfilled by automatically generating the training sets using a script program in the MATLAB environment. This program automatically generated finite element models with different parameters, and it also included scripts that combined the whole solution process across different engineering packages for the direct problem and the inverse problem using neural networks. Another obstacle in applying artificial neural networks to inverse problems is that the dimension and size of the training sets required for the NN can be too large to use the NN effectively with the available computational resources. To overcome this obstacle, Principal Component Analysis is used to reduce the dimension of the inputs for the NN without excessively impairing the integrity of the data. Orthogonal Arrays were also used to select a smaller number of training sets that can efficiently represent the given system.
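The PCA reduction step described above can be sketched in a few lines with numpy's SVD; the component count and data sizes are illustrative assumptions.

```python
# PCA of simulated system responses before NN training.
import numpy as np

rng = np.random.default_rng(9)
responses = rng.standard_normal((200, 500))   # 200 simulated responses
params = rng.standard_normal((200, 3))        # 3 damage parameters each

mean = responses.mean(axis=0)
U, s, Vt = np.linalg.svd(responses - mean, full_matrices=False)
k = 10                                        # keep 10 principal components
nn_inputs = (responses - mean) @ Vt[:k].T     # 500-dim -> 10-dim NN features

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(nn_inputs.shape, f"variance retained: {explained:.2f}")
```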
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Nicu, Ana-Maria. "Approximation and representation of functions on the sphere : applications to inverse problems in geodesy and medical imaging." Nice, 2012. http://www.theses.fr/2012NICE4007.

Full text
Abstract:
This work concerns the representation and approximation of functions on a sphere, with applications to source localization inverse problems in geodesy and medical imaging. The thesis is structured in 6 chapters as follows. Chapter 1 presents an introduction to the geodesy and M/EEG inverse problems. The inverse problem (IP) consists in recovering a density inside the ball (Earth, human brain) from partially known data on the surface. Chapter 2 gives the mathematical background used throughout the thesis. The resolution of the inverse problem (IP) involves two steps: the transmission data problem (TP) and the density recovery (DR) problem. In practice, the data are only available on some region of the sphere, such as a spherical cap: the northern hemisphere of the head (M/EEG) or a continent (geodesy). For this purpose, in chapter 3, we give an efficient method to build the appropriate Slepian basis in which to express the data, set up by using Gauss-Legendre quadrature. The transmission data problem (chapter 4) consists in estimating the data (spherical harmonic expansion) over the whole sphere from noisy measurements expressed in the Slepian basis. The second step, the density recovery (DR) problem, is detailed in chapter 5, where we study three density models (monopolar, dipolar and inclusions). For the resolution of (DR), we use a best quadratic rational approximation method on planar sections. We also give some properties of the density and of the operator which links it to the generated potential. In chapter 6, we revisit chapters 3, 4 and 5 from a numerical point of view, and present numerical tests illustrating source localization results for geodesy and M/EEG problems when only partial data are available on the sphere.
APA, Harvard, Vancouver, ISO, and other styles
25

Hart, Vern Philip II. "The Application of Tomographic Reconstruction Techniques to Ill-Conditioned Inverse Problems in Atmospheric Science and Biomedical Imaging." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1354.

Full text
Abstract:
A methodology is presented for creating tomographic reconstructions from various projection data, and the relevance of the results to applications in atmospheric science and biomedical imaging is analyzed. The fundamental differences between transform and iterative methods are described and the properties of the imaging configurations are addressed. The presented results are particularly suited for highly ill-conditioned inverse problems in which the imaging data are restricted as a result of poor angular coverage, limited detector arrays, or insufficient access to an imaging region. The class of reconstruction algorithms commonly used in sparse tomography, the algebraic reconstruction techniques, is presented, analyzed, and compared. These algorithms are iterative in nature and their accuracy depends significantly on the initialization of the algorithm, the so-called initial guess. A considerable amount of research was conducted into novel initialization techniques as a means of improving the accuracy. The main body of this dissertation comprises three smaller papers, which describe the application of the presented methods to atmospheric and medical imaging modalities. The first paper details the measurement of mesospheric airglow emissions at two camera sites operated by Utah State University. Reconstructions of vertical airglow emission profiles are presented, including three-dimensional models of the layer formed using a novel fanning technique. The second paper describes the application of the method to the imaging of polar mesospheric clouds (PMCs) by NASA's Aeronomy of Ice in the Mesosphere (AIM) satellite. The contrasting elements of straight-line and diffusive tomography are also discussed in the context of ill-conditioned imaging problems. A number of developing modalities in medical tomography use near-infrared light, which interacts strongly with biological tissue and results in significant optical scattering. In order to perform tomography on the diffused signal, simulations that describe the sporadic photon migration must be incorporated into the algorithm. The third paper presents a novel Monte Carlo technique derived from the optical scattering solution for spheroidal particles designed to mimic mitochondria and deformed cell nuclei. Simulated results of optical diffusion are presented. The potential for improving existing imaging modalities through continual development of sparse tomography and optical scattering methods is discussed.
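The algebraic reconstruction techniques analyzed here can be illustrated with a generic Kaczmarz sweep. The sketch below exposes the initial guess x0 whose influence the dissertation investigates, but it is a textbook version with a toy system, not the author's code.

import numpy as np

def art(A, b, x0, sweeps=20, relax=1.0):
    # Algebraic Reconstruction Technique (Kaczmarz sweeps).
    # A: (m, n) projection matrix, one row per ray; b: (m,) measured projections.
    # x0: initial guess -- for sparse-angle data its choice matters greatly.
    x = x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            # Project x onto the hyperplane defined by ray i.
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy 2-ray, 2-pixel check: consistent data should be reproduced.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
x_true = np.array([2.0, 1.0])
x_rec = art(A, A @ x_true, x0=np.zeros(2))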
APA, Harvard, Vancouver, ISO, and other styles
26

Dupuy, Clément. "Reconstruction d'image pour l'acousto-optique vers une imagerie quantitative." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLET034.

Full text
Abstract:
The optical properties of biological tissues are of significant clinical interest. Such media are highly scattering to the near-infrared light which offers the required contrast, and consequently purely optical approaches to imaging tissues at depth suffer from limited spatial resolution. Acousto-optic imaging is a multi-modal technique which overcomes this problem by combining the optical contrast of near-infrared light with the spatial resolution of ultrasound, permitting millimetre resolution at depths of several centimetres. Raw measurements made using the acousto-optic technique are corrupted by the varying optical fluence in the medium. By using inverse-problem-based reconstruction algorithms, it is possible to reconstruct a map of the absorption coefficient inside the medium. My PhD was conducted between the Institut Langevin in Paris, where my acousto-optic imaging setup is located, and the Medical Physics and Bioengineering lab at UCL in London, where I worked on the reconstruction algorithms in order to achieve quantitative measurements.
APA, Harvard, Vancouver, ISO, and other styles
27

Travis, Clive Hathaway. "The inverse problem and applications to optical and eddy current imaging." Thesis, University of Surrey, 1989. http://epubs.surrey.ac.uk/804869/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Veras, Johann. "Electrical Conductivity Imaging via Boundary Value Problems for the 1-Laplacian." Doctoral diss., University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6377.

Full text
Abstract:
We study an inverse problem which seeks to image the internal conductivity map of a body by one measurement of boundary and interior data. In our study the interior data is the magnitude of the current density induced by electrodes. Access to interior measurements has been made possible since the work of M. Joy et al. in the early 1990s, and couples two physical principles: electromagnetics and magnetic resonance. In 2007 Nachman et al. showed that it is possible to recover the conductivity from the magnitude of one current density field inside. The method, now known as Current Density Impedance Imaging, is based on solving boundary value problems for the 1-Laplacian in an appropriate Riemannian metric space. We consider two types of methods: ones based on level sets and a variational approach, which aim to solve specific boundary value problems associated with the 1-Laplacian. We address the Cauchy and Dirichlet problems with full and partial data, and also the Complete Electrode Model (CEM). The latter model is known to describe most accurately the voltage potential distribution in a conductive body, while taking into account the transition of current from the electrode to the body. For the CEM the problem is non-unique. We characterize the non-uniqueness, and explain which additional measurements fix the solution. Multiple numerical schemes for each of the methods are implemented to demonstrate the computational feasibility.
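As a schematic illustration of the variational approach (and not the dissertation's actual schemes), the weighted least-gradient problem min ∫ a|∇u| with Dirichlet data can be relaxed by iteratively reweighted diffusion, with the weight a standing in for the measured current-density magnitude |J|; the grid, iteration counts and smoothing parameter below are illustrative assumptions.

import numpy as np

def weighted_1laplacian(a, u_boundary, iters=500, eps=1e-6):
    # Approximately decrease sum a*|grad u| with fixed boundary values by
    # lagged diffusivity: repeat a linear diffusion step whose conductivity
    # is a / sqrt(|grad u|^2 + eps), recomputed from the current iterate.
    u = u_boundary.copy()
    for _ in range(iters):
        gx, gy = np.gradient(u)
        w = a / np.sqrt(gx**2 + gy**2 + eps)
        # Jacobi update of interior points: conductivity-weighted average.
        wN, wS = w[:-2, 1:-1], w[2:, 1:-1]
        wW, wE = w[1:-1, :-2], w[1:-1, 2:]
        num = (wN * u[:-2, 1:-1] + wS * u[2:, 1:-1] +
               wW * u[1:-1, :-2] + wE * u[1:-1, 2:])
        u[1:-1, 1:-1] = num / (wN + wS + wW + wE)
    return u

# Toy example: uniform weight and Dirichlet data on one edge.
n = 32
u0 = np.zeros((n, n))
u0[0, :] = 1.0
u = weighted_1laplacian(np.ones((n, n)), u0)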
Ph.D.
Doctorate
Mathematics
Sciences
Mathematics
APA, Harvard, Vancouver, ISO, and other styles
29

Paleo, Pierre. "Méthodes itératives pour la reconstruction tomographique régularisée." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT070/document.

Full text
Abstract:
In recent years, tomographic imaging techniques have diversified across many applications. However, experimental constraints often lead to limited data - for example in fast scans, or in medical imaging where the radiation dose is a primary concern. The data limitation may come as a low signal-to-noise ratio, scarce views or a missing angular wedge. On the other hand, artefacts are detrimental to reconstruction quality. In these contexts, the standard techniques show their limitations. In this work, we explore how regularized tomographic reconstruction methods can handle these challenges. These methods treat the reconstruction as an inverse problem, and the solution is generally found by means of an optimization procedure. Implementing regularized reconstruction methods entails both designing an appropriate regularization and choosing the best optimization algorithm for the resulting problem. On the modelling side, we focus on three types of regularizers in a unified mathematical framework, along with their efficient implementation: Total Variation, wavelets and dictionary-based reconstruction. On the algorithmic side, we study which state-of-the-art convex optimization algorithms are best suited for the problem and for parallel architectures (GPU), and propose a new algorithm for increased convergence speed. We then show how the standard regularization models can be extended to take the usual artefacts into account, namely rings and local tomography artefacts. Notably, a novel quasi-exact local tomography reconstruction method is proposed.
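As a sketch of the kind of regularized solver discussed above, here is a generic FISTA loop for an l1-regularized least-squares problem; the dense matrix A stands in for a tomographic projector and the sparse toy signal is invented, so this illustrates the algorithm family rather than the thesis implementation.

import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    # FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1, an accelerated
    # proximal-gradient scheme of the type used in regularized tomography.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 prox
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy usage: recover a sparse vector from a random projection.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 150]] = [1.0, -2.0, 3.0]
x_hat = fista_l1(A, A @ x_true, lam=0.1)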
APA, Harvard, Vancouver, ISO, and other styles
30

Pereira, Antonio. "Acoustic imaging in enclosed spaces." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0066/document.

Full text
Abstract:
This thesis is concerned with the problem of noise source identification in closed spaces. The main motivation was to propose a technique which allows one to locate and quantify noise sources within industrial vehicles in a time-effective manner. In turn, the technique might be used by manufacturers for noise abatement purposes, such as providing quieter vehicles. A simplified model based on the equivalent source formulation was used to tackle the problem. It was shown that the problem is ill-conditioned, in the sense that it is very sensitive to errors in measurement data, thus regularization techniques were required. A detailed study of this issue, in particular the tuning of the so-called regularization parameter, was of importance to ensure the stability of the solution. In particular, a Bayesian regularization criterion was shown to be a very robust approach to optimally adjust the regularization parameter in an automated way. The target application concerns very large interior environments, which imposes additional difficulties, namely: (a) the positioning of the measurement array inside the enclosure; (b) a number of unknowns ("candidate" sources) much larger than the number of measurement positions. An iterative weighted formulation was then proposed to overcome the above issues: first, by correcting for the positioning of the array within the enclosure, and second, by iteratively solving the problem in order to obtain a correct source quantification. In addition, the iterative approach has provided results with an enhanced spatial resolution and dynamic range. Several numerical studies have been carried out to validate the method as well as to evaluate its sensitivity to modeling errors. In particular, it was shown that the approach is affected by non-anechoic conditions, in the sense that reflections are identified as "real" sources. A post-processing technique which helps to distinguish between direct and reverberant paths has been discussed. The last part of the thesis was concerned with experimental validations and practical applications of the method. A custom spherical array consisting of a rigid sphere and 31 microphones has been built for the experimental tests. Several academic experimental validations have been carried out in semi-anechoic environments, which illustrated the advantages and limits of the method. Finally, the approach was tested in a practical application, which consisted in identifying noise sources inside a bus under driving conditions.
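The iterative weighted formulation can be sketched, under invented dimensions (31 microphones, 120 candidate sources), as an iteratively reweighted Tikhonov inversion of an equivalent-source transfer matrix; this is a schematic version of the idea, not the thesis code.

import numpy as np

def irls_equivalent_sources(G, p, lam, n_outer=10, eps=1e-3):
    # G: (n_mics, n_sources) transfer matrix from candidate sources to mics.
    # p: (n_mics,) measured pressures.
    # Each outer pass re-solves a regularized least-squares problem with
    # weights built from the previous estimate, which sharpens the source map
    # when candidate sources greatly outnumber microphones.
    n_src = G.shape[1]
    w = np.ones(n_src)
    q = np.zeros(n_src, dtype=complex)
    for _ in range(n_outer):
        W_inv = np.diag(1.0 / w)
        # Weighted minimum-norm solution of the underdetermined system.
        q = W_inv @ G.conj().T @ np.linalg.solve(
            G @ W_inv @ G.conj().T + lam * np.eye(G.shape[0]), p)
        w = 1.0 / np.maximum(np.abs(q), eps)   # emphasize strong sources
    return q

# Toy usage with a random transfer matrix and a single active source.
rng = np.random.default_rng(3)
G = rng.standard_normal((31, 120)) + 1j * rng.standard_normal((31, 120))
q_true = np.zeros(120, dtype=complex)
q_true[40] = 1.0
q_hat = irls_equivalent_sources(G, G @ q_true, lam=1e-2)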
APA, Harvard, Vancouver, ISO, and other styles
31

Lu, Wei. "Hough transforms for shape identification and applications in medical image processing /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3115568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Fromenteze, Thomas. "Développement d'une technique de compression passive appliquée à l'imagerie microonde." Thesis, Limoges, 2015. http://www.theses.fr/2015LIMO0061/document.

Full text
Abstract:
This work is focused on the development of a compressive technique applied to the simplification of microwave imaging systems. The principle is based on the study of passive devices able to compress transmitted and received waves, allowing for a reduction of the hardware complexity required by radar systems. This approach exploits the modal diversity of the developed components, making it compatible with ultra-wide bandwidths. Several proofs of concept are presented using different passive devices, allowing this technique to be adapted to a large variety of architectures and bandwidths.
APA, Harvard, Vancouver, ISO, and other styles
33

Zeitler, Armin. "Investigation of mm-wave imaging and radar systems." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00832647.

Full text
Abstract:
In the last decade, microwave and millimeter-wave systems have gained importance in civil and security applications. Due to the increasing maturity and availability of circuits and components, these systems are getting more compact while becoming less expensive. Furthermore, quantitative imaging has been conducted at lower frequencies using computationally intensive inverse-problem algorithms. Due to the ill-posed character of the inverse problem, these algorithms are, in general, very sensitive to noise: the key to their successful application to experimental data is the precision of the measurement system. Only a few research teams investigate systems for imaging in the W-band. In this manuscript such a system is presented, designed to provide scattered-field data to quantitative reconstruction algorithms. This manuscript is divided into six chapters. Chapter 2 describes the theory used to compute numerically the scattered fields of known objects. In Chapter 3, the W-band measurement setup in the anechoic chamber is shown and preliminary measurement results are analyzed. Relying on the measurement results, the error sources are studied and corrected by post-processing. The final results are used for the qualitative reconstruction of all three targets of interest and to image quantitatively the small cylinder. The reconstructed images are compared in detail in Chapter 4. Close-range imaging has been investigated using a vector analyzer and a radar system. This is described in Chapter 5, based on a future application, the detection of FOD on airport runways. The conclusion is addressed in Chapter 6, where some future investigations are discussed.
APA, Harvard, Vancouver, ISO, and other styles
34

Guerrero, prado Patricio. "Reconstruction tridimensionnelle des objets plats du patrimoine à partir du signal de diffusion inélastique." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV035/document.

Full text
Abstract:
Three-dimensional characterization of flat ancient material objects has remained a challenging task for conventional X-ray tomography methods due to their anisotropic morphology and flattened geometry. To overcome the limitations of such methodologies, an imaging modality based on Compton scattering is studied in this work. Classical X-ray tomography treats Compton scattering data as noise in the image formation process, while in Compton scattering tomography the conditions are set such that Compton data become the principal image contrasting agent. Under these conditions, we are able to avoid relative rotations between the sample and the imaging setup. Mathematically, this problem is addressed by means of the conical Radon transform. A model of the direct problem is presented in which the output of the system is the spectral image obtained from an input object. The inverse problem is then solved to estimate the 3D distribution of the electronic density of the input object from the spectral image. The feasibility of this methodology is supported by numerical simulations.
APA, Harvard, Vancouver, ISO, and other styles
35

Rückert, Nadja [Author], Bernd [Academic Supervisor] Hofmann, Bernd [Reviewer] Hofmann, and Christine [Reviewer] Böckmann. "Studies on two specific inverse problems from imaging and finance / Nadja Rückert ; Reviewers: Bernd Hofmann, Christine Böckmann ; Supervisor: Bernd Hofmann." Chemnitz : Universitätsbibliothek Chemnitz, 2012. http://d-nb.info/1214244068/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Sambolian, Serge. "Tomographie des pentes à cohérence cinématique fondée sur des solveurs eikonals et la méthode de l’état adjoint : Théorie et applications à la construction de modèles de vitesses et localisation d'événements." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4002.

Full text
Abstract:
Velocity model building is a key step of seismic imaging, since inferring a high-resolution subsurface model by migration or full waveform inversion (FWI) is highly dependent on the kinematic accuracy of the retrieved velocity model. Stereotomography, a slope tomographic method that exploits the density of modern seismic data well, was proposed as an alternative to conventional reflection traveltime tomography. The latter is based on interpretive tracking of laterally-continuous reflections in the data volume, whereas stereotomography relies on automated picking of locally coherent events. The densely picked attributes, namely the traveltimes and their spatial derivatives with respect to the source and receiver positions, are tied to scatterers in depth. More recently, a slope tomography variant was proposed in a framework based on eikonal solvers as an alternative to ray tracing, and on the adjoint-state method instead of Fréchet-derivative matrix inversion. This revamped stereotomography provides a scalable and flexible framework for large-scale applications. On the other hand, similarly to previous works, the scatterer positions and the subsurface parameters are updated jointly. In this thesis, I propose a new formulation of slope tomography that handles more effectively the ill-famed velocity-position coupling inherently present in reflection tomography. Through a kinematic migration, the scatterer position sub-problem is solved and projected into the main sub-problem for wavespeed estimation. Enforcing the kinematic consistency between the two kinds of variables, which is not guaranteed in the joint inversion, mitigates the ill-posedness generated by the velocity-position coupling. This variable projection leads to a reduced-parametrization inversion in which the residuals of a single data class, a slope, are minimized to update the subsurface parameters. I introduce this parsimonious strategy in the framework of eikonal solvers and the adjoint-state method for tilted transversely isotropic (TTI) media. I benchmark the method against the Marmousi model and present a field data case study previously tackled with the joint inversion strategy. Both case studies confirm that the parsimonious approach leads to a better-posed problem, with improved robustness to the initial guess and convergence speed. Slope tomography is mainly used for streamer data due to the requirement of finely-sampled sources and receivers. To exploit cutting-edge long-offset datasets, I include in the inversion first arrivals extracted from streamer or ocean-bottom seismometer data. Before showing the complementarity between reflections and first arrivals, I examine the added value of introducing slopes in first-arrival traveltime tomography (FATT). Using a FWI workflow for quality control, I show with the Overthrust benchmark and a real data case study from the Nankai trough (Japan) how the joint inversion of slopes and traveltimes mitigates the ill-posedness of FATT. I also examine with the BP Salt model the limits of FATT for building an initial model for FWI in complex media. The results show how tomography suffers, even with proper undershooting of the imaging targets, due to the poor illumination of the subsalt area.
On a crustal-scale benchmark, I first show the limits of reflection slope tomography induced by the limited streamer length, before highlighting the added value of the joint inversion of first-arrival and reflection picks. Finally, I introduce the same variable projection technique to tackle the velocity-hypocenter problem, which finds application in earthquake seismology and microseismic imaging. I propose a formulation in which the hypocenter is located through the inversion of subsurface parameters and an origin-time correction, both of them being used as proxies, and I validate the proof of concept on two synthetic examples.
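The eikonal solvers underpinning this framework can be illustrated by a basic 2-D fast-sweeping scheme solving |∇T| = s with a Godunov upwind update; the grid, source position and sweep count below are illustrative, and the thesis's anisotropic (TTI) solvers are considerably more involved.

import numpy as np

def eikonal_fast_sweep(slowness, src, h=1.0, n_sweeps=8):
    # First-arrival traveltimes T solving |grad T| = slowness on a regular
    # grid, via Gauss-Seidel sweeps in four alternating orderings.
    n, m = slowness.shape
    T = np.full((n, m), np.inf)
    T[src] = 0.0
    orders = [(range(n), range(m)), (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < m - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:      # causality: one-sided update
                        t_new = min(a, b) + f
                    else:                    # two-sided quadratic update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

# Homogeneous medium: traveltimes approximate the distance from the source.
T = eikonal_fast_sweep(np.ones((50, 50)), src=(25, 25))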
APA, Harvard, Vancouver, ISO, and other styles
37

Henriksson, Tommy. "CONTRIBUTION TO QUANTITATIVE MICROWAVE IMAGING TECHNIQUES FOR BIOMEDICAL APPLICATIONS." Doctoral thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-5882.

Full text
Abstract:
This dissertation presents a contribution to quantitative microwave imaging for breast tumor detection. The study, made in the frame of a jointly supervised Ph.D. thesis between University Paris-SUD 11 (France) and Mälardalen University (Sweden), has been conducted with two experimental microwave imaging setups: the existing 2.45 GHz planar camera (France) and the multi-frequency flexible robotic system under development (Sweden). In this context a flexible 2D scalar numerical tool based on a Newton-Kantorovich (NK) scheme has been developed. Quantitative microwave imaging is a three-dimensional vectorial nonlinear inverse scattering problem, where the complex permittivity of an object is reconstructed from the measured scattered field produced by the object. The NK scheme is used in order to deal with the nonlinearity and the ill-posed nature of this problem. A TM polarization and a two-dimensional medium configuration have been considered in order to avoid its vectorial aspect. The solution is found iteratively by minimizing the square norm of the error with respect to the scattered field data. Consequently, the convergence of such an iterative process requires at least two conditions. First, an efficient calibration of the experimental system has to be associated with the minimization of model errors. Second, the mean square difference of the scattered field introduced by the presence of the tumor has to be large enough with respect to the sensitivity of the imaging system. The existing planar camera, associated with a flexible 2D scalar NK code, is considered as an experimental platform for quantitative breast imaging. A preliminary numerical study shows that the multi-view planar system is quite efficient for realistic breast tumor phantoms, given its characteristics (frequency, planar geometry and water as a coupling medium), as long as realistic noisy data are considered. Furthermore, a multi-incidence planar system, more appropriate in terms of antenna-array arrangement, is proposed and its concept is numerically validated. On the other hand, experimental work is presented which includes a new fluid mixture for the realization of a narrow-band cylindrical breast phantom, and a deep investigation of the calibration process and model-error minimization. This leads to the first quantitative reconstruction of a realistic breast phantom using multi-view data from the planar camera. Next, both the qualitative and quantitative reconstruction of 3D inclusions in the cylindrical breast phantom, using data from the whole retina, are shown and discussed. Finally, the extension of this work towards the flexible robotic system is presented.
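The Newton-Kantorovich iteration at the core of this tool can be sketched generically: linearize the forward map around the current estimate and solve a Tikhonov-damped normal system. In the Python fragment below, a scalar toy model stands in for the scattering operator and its Fréchet derivative; it is a sketch of the scheme, not the 2D scalar code described above.

import numpy as np

def newton_kantorovich(forward, jacobian, y_meas, x0, lam=1e-2, n_iter=10):
    # forward(x)  -> simulated scattered-field data for contrast x
    # jacobian(x) -> (m, n) Frechet-derivative matrix at x
    x = x0.copy()
    for _ in range(n_iter):
        r = y_meas - forward(x)             # data residual
        J = jacobian(x)
        # Regularized update: (J^T J + lam I) dx = J^T r
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        x = x + dx
    return x

# Toy nonlinear model y = x^2 (componentwise) standing in for the scattering op.
f = lambda x: x ** 2
Jf = lambda x: np.diag(2 * x)
x_rec = newton_kantorovich(f, Jf, y_meas=np.array([4.0, 9.0]),
                           x0=np.array([1.0, 1.0]))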
A dissertation prepared through an international convention for a joint supervision thesis with Université Paris-SUD 11, France
Microwaves in biomedicine
APA, Harvard, Vancouver, ISO, and other styles
38

Moffitt, Michael Adam. "Functional Imaging of the Mammalian Spinal Cord." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081363883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gunnarsson, Tommy. "MICROWAVE IMAGING OF BIOLOGICAL TISSUES: applied toward breast tumor detection." Licentiate thesis, Västerås : Department of Computer Science and Electronics, Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Wörmann, Julian [Author], Martin [Academic Supervisor] Kleinsteuber, Martin [Reviewer] Kleinsteuber, and Walter [Reviewer] Stechele. "Structured Co-sparse Analysis Operator Learning for Inverse Problems in Imaging / Julian Wörmann ; Reviewers: Martin Kleinsteuber, Walter Stechele ; Supervisor: Martin Kleinsteuber." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1205069437/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bendjador, Hanna. "Correction d'aberrations et quantification de vitesse du son en imagerie ultrasonore ultrarapide." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLS011.

Full text
Abstract:
Echography relies on the transmission of ultrasound signals through biological tissues and the processing of backscattered echoes. The rise of ultrafast ultrasound imaging gave access to physiological events at frame rates beyond 10,000 images per second. It therefore allowed the development of high-end techniques such as organ elasticity imaging or sensitive quantification of blood flow. During its propagation through complex or heterogeneous media, however, the acoustic wavefront may suffer strong distortions, hindering both the image quality and the ensuing quantitative assessments. Correcting such aberrations is the ultimate goal of the research work conducted during this PhD. By studying the statistical properties of interferences between scatterers, a matrix formalism has been developed to optimise the angular coherence of backscattered echoes. Importantly, we succeeded, for the first time, in correcting images and locally quantifying the speed of sound at ultrafast frame rates. Sound speed was shown to be a novel biomarker in the example of hepatic steatosis, and possibly for the separation of the brain's white and grey matter. The phase correction method will also be an interesting contribution to motion correction in the case of 3D tomography and vascular imaging, offering new horizons to ultrasound imaging.
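The angular coherence being optimised here can be made concrete with the standard coherence factor of echoes realigned across steering angles; this short function is a generic figure of merit, not the matrix formalism developed in the thesis.

import numpy as np

def angular_coherence_factor(signals):
    # signals: (n_angles, n_samples) complex array, one row per steering
    # angle, realigned to the same pixel. The factor is 1 for perfectly
    # coherent echoes and drops when aberration destroys their alignment.
    coherent_sum = np.abs(signals.sum(axis=0)) ** 2
    incoherent_sum = (np.abs(signals) ** 2).sum(axis=0)
    n = signals.shape[0]
    return coherent_sum / (n * incoherent_sum + 1e-12)

# Perfectly aligned echoes give a factor close to 1.
s = np.tile(np.exp(1j * np.linspace(0, 1, 100)), (16, 1))
cf = angular_coherence_factor(s)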
APA, Harvard, Vancouver, ISO, and other styles
42

Salahieh, Basel, Jeffrey J. Rodriguez, Sean Stetson, and Rongguang Liang. "Single-image full-focus reconstruction using depth-based deconvolution." SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, 2016. http://hdl.handle.net/10150/624372.

Full text
Abstract:
In contrast with traditional extended depth-of-field approaches, we propose a depth-based deconvolution technique that realizes the depth-variant nature of the point spread function of an ordinary fixed-focus camera. The developed technique brings a single blurred image to focus at different depth planes which can be stitched together based on a depth map to output a full-focus image. Strategies to suppress the deconvolution's ringing artifacts are implemented on three levels: block tiling to eliminate boundary artifacts, reference maps to reduce ringing initiated by sharp edges, and depth-based masking to mitigate artifacts raised by neighboring depth-transition surfaces. The performance is validated numerically for planar and multidepth objects. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
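A minimal sketch of such a pipeline, assuming one precomputed PSF per depth plane and a known integer depth map, could apply per-plane Wiener deconvolution and then stitch per pixel; the ringing-suppression strategies described above (block tiling, reference maps, depth-based masking) are deliberately omitted here.

import numpy as np

def full_focus(blurred, psfs, depth_map, nsr=1e-2):
    # blurred:   (H, W) image from a fixed-focus camera.
    # psfs:      list of (H, W) PSFs, one per depth plane, centered/padded.
    # depth_map: (H, W) integer array of depth-plane indices.
    B = np.fft.fft2(blurred)
    planes = []
    for psf in psfs:
        H = np.fft.fft2(np.fft.ifftshift(psf))
        wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
        planes.append(np.real(np.fft.ifft2(wiener * B)))
    planes = np.stack(planes)                          # (n_depths, H, W)
    rows, cols = np.indices(depth_map.shape)
    return planes[depth_map, rows, cols]               # per-pixel stitching

# Toy usage: an impulse PSF and a 5x5 box blur, half/half depth map.
h = w = 32
psf0 = np.zeros((h, w)); psf0[h // 2, w // 2] = 1.0
psf1 = np.zeros((h, w)); psf1[h//2-2:h//2+3, w//2-2:w//2+3] = 1.0 / 25.0
blurred = np.random.default_rng(5).random((h, w))
dmap = np.zeros((h, w), dtype=int); dmap[:, w // 2:] = 1
sharp = full_focus(blurred, [psf0, psf1], dmap)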
APA, Harvard, Vancouver, ISO, and other styles
43

Wonus, Julie L. (Julie Lynn). "A circuit model for diffusive breast imaging and a numerical algorithm for its inverse problem." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38172.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 67-70).
by Julie L. Wonus.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
44

Dekdouk, Bachir. "Image reconstruction of low conductivity material distribution using magnetic induction tomography." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/image-reconstruction-of-low-conductivity-material-distribution-using-magnetic-induction-tomography(44d6769d-59b1-44c2-a01e-835f8916f69c).html.

Full text
Abstract:
Magnetic induction tomography (MIT) is a non-invasive, soft-field imaging modality that has the potential to map the electrical conductivity (σ) distribution inside an object under investigation. In MIT, a number of exciter and receiver coils are distributed around the periphery of the object. A primary magnetic field is emitted by each exciter and interacts with the object. This induces eddy currents in the object, which in turn create a secondary field. The latter is coupled to the receiver coils and voltages are induced. An image reconstruction algorithm is then used to infer the conductivity map of the object. In this thesis, the application of MIT to volumetric imaging of objects with low-conductivity materials (< 5 Sm-1) and dimensions < 1 m is investigated. In particular, two low-conductivity applications are approached: imaging cerebral stroke and imaging the saline water in multiphase flows. In low-conductivity applications, the measured signals are small and the spatial sensitivity is critically compromised, making the associated inverse problem severely non-linear and ill-posed. The main contribution of this study is to investigate three non-linear optimisation techniques for solving the MIT inverse problem. The first two methods, namely the regularised Levenberg-Marquardt method and the trust-region Powell's Dog Leg method, employ damping and trust-region strategies respectively. The third method is a modification of the Gauss-Newton method and utilises a damping regularisation technique. An improvement in the convergence and stability of the inverse solution was observed with these methods compared to the standard Gauss-Newton method. For such non-linear treatment, re-evaluation of the forward problem is also required. The forward problem is solved numerically using the impedance method, and a weakly coupled field approximation is employed to reduce the computation time and memory requirements. For treating the ill-posedness, different regularisation methods are investigated. Results show that the subspace regularisation technique is suitable for absolute imaging of the stroke in a real head model with synthetic data. Tikhonov-based smoothing and edge-preserving regularisation methods also produced successful results for simulations of oil/water flows. However, in a practical setup, large geometrical and positioning errors still cause a major problem, and only difference imaging was viable to achieve a reasonable reconstruction.
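Of the optimisation techniques mentioned, the damping strategy of a regularised Levenberg-Marquardt scheme is the simplest to sketch; in the generic Python loop below, forward and jacobian are placeholders for the MIT forward model and its sensitivity matrix, and the toy usage is invented.

import numpy as np

def levenberg_marquardt(forward, jacobian, y, x0, lam=1.0, n_iter=20):
    # Damped Gauss-Newton: the damping factor grows when a step fails to
    # reduce the misfit and shrinks when it succeeds, stabilizing the
    # ill-posed, non-linear inversion.
    x = x0.copy()
    cost = lambda x: 0.5 * np.sum((forward(x) - y) ** 2)
    c = cost(x)
    for _ in range(n_iter):
        J = jacobian(x)
        g = J.T @ (y - forward(x))
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), g)
        if cost(x + dx) < c:       # accept step, trust the model more
            x, c, lam = x + dx, cost(x + dx), lam * 0.5
        else:                      # reject step, damp harder
            lam *= 2.0
    return x

# Toy usage with a quadratic forward model standing in for the MIT solver.
f = lambda x: x ** 2
Jf = lambda x: np.diag(2 * x)
x_hat = levenberg_marquardt(f, Jf, y=np.array([4.0, 1.0]), x0=np.ones(2))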
APA, Harvard, Vancouver, ISO, and other styles
45

Berdeu, Anthony. "Imagerie sans lentille 3D pour la culture cellulaire 3D." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS036/document.

Full text
Abstract:
This PhD work is at the interface of two fields: 3D cell culture and lens-free imaging. Providing a more realistic cell culture protocol on the physiological level, switching from single-layer (2D) cultures to three-dimensional (3D) cultures - via the use of extracellular gels in which cells can grow in three dimensions - has led to breakthroughs in several fields such as developmental biology, oncology and regenerative medicine. The study of these new 3D structures creates a need in terms of 3D imaging. For its part, 2D lens-free imaging provides a robust, inexpensive, label-free and non-toxic tool to study cell cultures in two dimensions over large scales and over long periods of time. This type of microscopy records the interferences produced when coherent light is scattered by the biological sample. Knowing the physics of light propagation, these holograms are numerically retro-propagated to reconstruct the unknown object. The reconstruction algorithm replaces the absent lenses in the role of image formation. The aim of this PhD is to show the possibility of adapting this lens-free technology to the imaging of 3D cell cultures. New lens-free microscopes are designed and built along with the development of dedicated tomographic reconstruction algorithms. Concerning the prototypes, several solutions are tested to finally converge to a scheme combining two conditions. The first requirement is simplicity of use, with a cell culture in a standard Petri dish requiring no specific preparation or change of container. The second condition is to find the best possible angular coverage of the lighting angles given the geometric constraints imposed by the first requirement. Finally, an incubator-proof version is successfully built and tested. Regarding the algorithms, four major types of solutions are implemented, all based on the Fourier diffraction theorem conventionally used in optical diffractive tomography. All methods aim to correct two inherent problems of a lens-free microscope: the absence of phase information, the sensor being sensitive only to the intensity of the incident wave, and the limited angular coverage. The first algorithm simply replaces the unknown phase with that of an incident plane wave. This method is fast but is the source of many artifacts. The second solution tries to estimate the missing phase by approximating the unknown object by an average plane and uses the tools of 2D lens-free microscopy to recover the missing phase in an inverse problem approach. The third solution consists in implementing a regularized inverse problem approach on the 3D object to be reconstructed. This is the most effective method to deal with the two problems mentioned above, but it is very slow. The fourth and last solution is based on a modified Gerchberg-Saxton algorithm with a regularization step on the object. All these methods are compared and tested successfully on numerical simulations and experimental data. Comparisons with conventional microscope acquisitions show the validity of the reconstructions in terms of shape and positioning of the retrieved objects as well as the accuracy of their three-dimensional positioning.
Biological samples are reconstructed with volumes of several tens of cubic millimeters, inaccessible in standard microscopy. Moreover, 3D time-lapse data successfully obtained in incubators show the relevance of this type of imaging by highlighting large-scale interactions between cells, or between cells and their three-dimensional environment.
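The fourth solution can be sketched as follows, with a plain FFT standing in for the angular-spectrum propagation over multiple illumination angles and non-negativity as one simple example of the regularization step on the object; this is a schematic Gerchberg-Saxton loop, not the thesis implementation.

import numpy as np

def gerchberg_saxton(measured_amplitude, n_iter=100):
    # Alternate between the detector plane (impose the measured hologram
    # amplitude) and the object plane (impose a simple regularizing
    # constraint, here non-negativity of the real part).
    field = measured_amplitude * np.exp(1j * 0.0)      # start with flat phase
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)                      # back to object plane
        obj = np.maximum(obj.real, 0.0)                # regularization step
        field = np.fft.fft2(obj)                       # forward propagation
        # Keep the reconstructed phase, restore the measured amplitude.
        field = measured_amplitude * np.exp(1j * np.angle(field))
    return obj

# Toy usage: amplitude "measured" from a known non-negative object.
rng = np.random.default_rng(4)
truth = np.maximum(rng.standard_normal((32, 32)), 0.0)
amp = np.abs(np.fft.fft2(truth))
rec = gerchberg_saxton(amp)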
APA, Harvard, Vancouver, ISO, and other styles
46

Ygouf, Marie. "Nouvelle méthode de traitement d'images multispectrales fondée sur un modèle d'instrument pour la haut contraste : application à la détection d'exoplanètes." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00843202.

Full text
Abstract:
This thesis deals with high-contrast multispectral imaging for the direct detection and characterization of exoplanets. In this context, the development of innovative image processing methods is essential in order to remove the quasi-static speckles from the final image, which remain to this day the main limitation for high contrast. Although residual instrumental aberrations are at the origin of these speckles, no data reduction method uses a model of coronagraphic image formation that takes these aberrations as parameters. The approach adopted in this thesis comprises the development, in a Bayesian framework, of an inversion method based on an analytical model of coronagraphic imaging. This method jointly estimates the instrumental aberrations and the object of interest, namely the exoplanets, in order to separate these two contributions properly. The step of estimating the aberrations from focal-plane images (phase retrieval) is the most difficult one, because the model of the on-axis instrumental response on which it depends is highly non-linear. The development and study of a simpler approximate model of coronagraphic imaging therefore proved very useful for understanding the problem and suggested minimization strategies. I was finally able to test the method and to estimate its performance in terms of robustness and exoplanet detection. To this end, I applied it to simulated images and studied in particular the effect of the different parameters of the imaging model used. I thus demonstrated that this new method, combined with an optimization scheme based on a good knowledge of the problem, can work in a relatively robust way despite the difficulties of the phase-retrieval step. In particular, it allows exoplanets to be detected in simulated images at a detection level consistent with the goal of the SPHERE instrument. This work opens up many perspectives, including demonstrating the usefulness of the method on images simulated with more realistic coronagraphs and on real images from the SPHERE instrument. Moreover, extending the method to the characterization of exoplanets is relatively straightforward, as is its extension to the study of more extended objects such as circumstellar disks. Finally, the results of these studies will provide important lessons for the development of future instruments. In particular, the Extremely Large Telescopes already raise technical challenges for the next generation of planet imagers, which will very likely be met in part through image processing methods based on a direct imaging model.
APA, Harvard, Vancouver, ISO, and other styles
47

Zahran, Saeed. "Source localization and connectivity analysis of uterine activity." Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2469.

Full text
Abstract:
The technique of EHGI allows a noninvasive reconstruction of the electrical potential on the uterus surface from the electrical potential measured on the body surface and anatomical data of the torso. EHGI provides very valuable information about the condition of the uterus, since it can deliver a refined spatial description of the pathway and magnitude of the electrical waves on the uterine surface, which can be of great help in different clinical interventions. The scientific algorithms behind any EHGI tool preprocess the anatomical data of the patient to provide a computational mesh, filter noisy measurements of the electrical potential, and solve an inverse problem. The inverse problem in uterine electrohysterography (electrohysterography imaging, EHGI) underpins a new and powerful diagnostic technique. This non-invasive technology is attracting growing interest from the medical industry, and its success would be considered a breakthrough in uterine diagnosis. However, in many cases the quality of the reconstructed electrical potential is not accurate enough. The difficulty comes from the fact that the inverse problem in uterine electrohysterography is well known to be mathematically ill-posed. Different methods based on Tikhonov regularization have been used to regularize the problem. We have conducted our analysis using a realistic uterus model and have aimed at identifying the spatial extent of the sources.
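The Tikhonov regularization mentioned in this abstract admits a compact closed-form solution. Below is a minimal numpy sketch under simplifying assumptions: A is a hypothetical stand-in for the torso-to-uterus transfer matrix of the forward model, b holds body-surface potentials, and the scanned lambda values are arbitrary.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Pick lambda by scanning the residual / solution-norm trade-off (L-curve style).
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 64))          # stand-in forward operator
x_true = rng.standard_normal(64)
b = A @ x_true + 0.05 * rng.standard_normal(128)
for lam in [1e-4, 1e-2, 1.0]:
    x = tikhonov(A, b, lam)
    print(lam, np.linalg.norm(A @ x - b), np.linalg.norm(x))
```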
APA, Harvard, Vancouver, ISO, and other styles
48

Shilling, Richard Zethward. "A multi-stack framework in magnetic resonance imaging." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33807.

Full text
Abstract:
Magnetic resonance imaging (MRI) is the preferred imaging modality for visualization of intracranial soft tissues. Surgical planning, and increasingly surgical navigation, use high-resolution 3-D patient-specific structural maps of the brain. However, MRI is a multi-parameter tomographic technique in which high-resolution imagery competes against high contrast and reasonable acquisition times. Resolution enhancement techniques based on super-resolution are particularly well suited to solving this resolution problem when high contrast with reasonable MRI acquisition times is needed. Super-resolution is the concept of reconstructing a high-resolution image from a set of low-resolution images taken at different viewpoints or foci. The MRI encoding techniques that produce high-resolution imagery are often sub-optimal for the contrast needed to visualize some structures in the brain. A novel super-resolution reconstruction framework for MRI is proposed in this thesis. Its purpose is to produce images of both high resolution and high contrast, desirable for image-guided minimally invasive brain surgery. The input data are multiple 2-D multi-slice Inversion Recovery MRI scans acquired at orientations with regular angular spacing, rotated around a common axis. Inspired by the computed tomography domain, the reconstruction is a 3-D volume of isotropic high resolution, where the inversion process resembles a projection reconstruction problem. Iterative algorithms for reconstruction are based on the projection onto convex sets formalism. Results demonstrate resolution enhancement in simulated phantom studies and in ex-vivo and in-vivo human brain scans, carried out on clinical scanners. In addition, a novel motion correction method is applied to volume registration using an iterative technique in which the super-resolution reconstruction is estimated in a given iteration following motion correction in the preceding iteration. A comparison of our method with previously published super-resolution methods shows favorable characteristics of the proposed approach.
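The projection-onto-convex-sets formalism cited in this abstract can be outlined compactly. The following Python sketch is a schematic POCS-style iteration, not the dissertation's algorithm: forward_ops and adjoint_ops are hypothetical callables standing in for the multi-stack acquisition model (rotation, slice blur, downsampling), and the relaxation step size is an assumption.

```python
import numpy as np

def pocs_reconstruct(stacks, forward_ops, adjoint_ops, n_iter=25, step=1.0):
    """Cycle through the data-consistency set of each low-resolution stack,
    then project onto the positivity constraint set."""
    hr = np.zeros_like(adjoint_ops[0](stacks[0]))     # initial HR estimate
    for _ in range(n_iter):
        for y, fwd, adj in zip(stacks, forward_ops, adjoint_ops):
            residual = y - fwd(hr)                    # mismatch for this stack
            hr = hr + step * adj(residual)            # back-project correction
        hr = np.maximum(hr, 0.0)                      # amplitude constraint set
    return hr
```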
APA, Harvard, Vancouver, ISO, and other styles
49

Nilsson, Lovisa. "Data-Driven Methods for Sonar Imaging." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176249.

Full text
Abstract:
Reconstruction of sonar images is an inverse problem, which is normally solved with model-based methods. These methods may introduce undesired artifacts called angular and range leakage into the reconstruction. In this thesis, a method called Learned Primal-Dual Reconstruction, which combines a data-driven and a model-based approach, is used to investigate the use of data-driven methods for reconstruction within sonar imaging. The method uses primal and dual variables inspired by classical optimization methods where parts are replaced by convolutional neural networks to iteratively find a solution to the reconstruction problem. The network is trained and validated with synthetic data on eight models with different architectures and training parameters. The models are evaluated on measurement data and the results are compared with those from a purely model-based method. Reconstructions performed on synthetic data, where a ground truth image is available, show that it is possible to achieve reconstructions with the data-driven method that have less leakage than reconstructions from the model-based method. For reconstructions performed on measurement data where no ground truth is available, some variants of the learned model achieve a good result with less leakage.
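For context, the following is a condensed PyTorch sketch of a Learned Primal-Dual network in the spirit of Adler and Öktem (2018), the family this thesis builds on. The sonar forward operator A and its adjoint A_t are assumed callables, and the layer sizes, channel counts and iteration count are illustrative, not the trained architectures evaluated in the thesis.

```python
import torch
import torch.nn as nn

class PrimalDualBlock(nn.Module):
    """Small residual CNN that refines a state given an extra input channel."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.PReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, state, extra):
        return state + self.net(torch.cat([state, extra], dim=1))

class LearnedPrimalDual(nn.Module):
    def __init__(self, A, A_t, n_iter=10):
        super().__init__()
        self.A, self.A_t = A, A_t
        self.primal_blocks = nn.ModuleList(PrimalDualBlock(1) for _ in range(n_iter))
        self.dual_blocks = nn.ModuleList(PrimalDualBlock(1) for _ in range(n_iter))

    def forward(self, y):
        primal = torch.zeros_like(self.A_t(y))        # image-space variable
        dual = torch.zeros_like(y)                    # data-space variable
        for pb, db in zip(self.primal_blocks, self.dual_blocks):
            dual = db(dual, self.A(primal) - y)       # dual update in data space
            primal = pb(primal, self.A_t(dual))       # primal update in image space
        return primal
```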
APA, Harvard, Vancouver, ISO, and other styles
50

Uh, Jinsoo. "Nuclear magnetic resonance imaging and analysis for determination of porous media properties." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4899.

Full text
Abstract:
Advanced nuclear magnetic resonance (NMR) imaging methodologies have been developed to determine porous media properties associated with fluid flow processes. This dissertation presents the development of NMR experimental and analysis methodologies, called NMR probes, particularly for the determination of porosity, permeability, and pore-size distributions of porous media, although the developed methodologies can be applied to other properties. The NMR relaxation distribution can provide a variety of information about porous systems containing NMR-active nuclei. Determining the distribution from NMR relaxation data is an ill-posed inverse problem that requires special care; conventionally, however, the problem has been solved by ad-hoc methods. We have developed a new method, based on sound statistical theory, that suitably implements smoothness and equality/inequality constraints. This method is used for the determination of porosity distributions. A Carr-Purcell-Meiboom-Gill (CPMG) NMR experiment is designed to measure spatially resolved NMR relaxation data. The determined relaxation distribution provides an estimate of the intrinsic magnetization which, in turn, is scaled to porosity. A pulsed-field-gradient stimulated-echo (PFGSTE) NMR velocity imaging experiment is designed to measure the superficial average velocity at each volume element. This experiment measures velocity number distributions, as opposed to the conventionally measured average phase shift, to suitably quantify the velocities within heterogeneous porous media. The permeability distributions are determined by solving the inverse problem formulated in terms of flow models and the velocity data. We present new experimental designs associated with flow conditions to enhance the accuracy of the estimates, and further improve accuracy by introducing and evaluating global optimization methods. The NMR relaxation distribution can be scaled to a pore-size distribution once the surface relaxivity is known. We have developed a new method to determine surface relaxivity by the PFGSTE NMR diffusion experiment, one which avoids limitations on the range of time for which data may be used.
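The ill-posed inversion of relaxation data described in this abstract is commonly posed as a smoothness-penalized non-negative least-squares problem. The sketch below illustrates that generic formulation on synthetic CPMG data; the T2 grid, penalty weight and synthetic distribution are illustrative assumptions and do not reproduce the dissertation's statistically grounded method.

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(1e-3, 2.0, 200)                # echo times [s]
T2 = np.logspace(-3, 1, 60)                    # candidate relaxation times [s]
K = np.exp(-t[:, None] / T2[None, :])          # multi-exponential kernel

# Second-difference operator enforcing a smooth distribution.
L = np.diff(np.eye(len(T2)), n=2, axis=0)
lam = 1e-1

# Synthetic data from a bimodal "true" distribution.
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.2) ** 2) \
       + 0.5 * np.exp(-0.5 * ((np.log10(T2) - 0.2) / 0.3) ** 2)
d = K @ f_true + 0.01 * np.random.default_rng(1).standard_normal(len(t))

# Augmented least squares: [K; sqrt(lam) L] f ~= [d; 0], with f >= 0 (NNLS).
K_aug = np.vstack([K, np.sqrt(lam) * L])
d_aug = np.concatenate([d, np.zeros(L.shape[0])])
f_est, _ = nnls(K_aug, d_aug)
```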
APA, Harvard, Vancouver, ISO, and other styles
