Dissertations / Theses on the topic 'Problèmes inverses en imagerie'
Buhan, Maya de. "Problèmes inverses et simulations numériques en viscoélasticité 3D." Paris 6, 2010. http://www.theses.fr/2010PA066379.
Baussard, Alexandre. "Résolution de problèmes inverses non linéaires : applications en imagerie à ondes électromagnétiques." Cachan, Ecole normale supérieure, 2002. http://www.theses.fr/2002DENSA010.
Debarnot, Valentin. "Microscopie computationnelle." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30156.
The contributions of this thesis are numerical and theoretical tools for the resolution of blind inverse problems in imaging. We first focus on the case where the observation operator is unknown (e.g. microscopy, astronomy, photography). A very popular approach consists in estimating this operator from an image containing point sources (microbeads or fluorescent proteins in microscopy, stars in astronomy). Such an observation provides a measure of the impulse response of the degradation operator at several points in the field of view. Processing this observation requires robust tools that can use the data rapidly. We propose a toolbox that estimates a degradation operator from an image containing point sources. The estimated operator has the property that, at any location in the field of view, its impulse response is expressed as a linear combination of elementary estimated functions. This makes it possible to estimate both spatially invariant (convolution) and spatially varying (product-convolution expansion) operators. An important specificity of this toolbox is its high level of automation: a small number of easily accessible parameters covers the large majority of practical cases. The size of the point source (e.g. bead), the background and the noise are also taken into account in the estimation. This tool, coined PSF-Estimator, comes in the form of a module for the Fiji software and is based on a parallelized C++ implementation. The operators generated by an optical system usually change from one experiment to the next, which would ideally require a calibration of the system before each acquisition. To overcome this, we propose to represent an optical system not by a single operator (e.g. a convolution blur with a fixed kernel across experiments), but by a subspace of operators. This set represents all the possible states of a microscope.
We introduce a method for estimating such a subspace from a collection of low-rank operators (such as those estimated by the PSF-Estimator toolbox). We show that, under reasonable assumptions, this subspace is low-dimensional and consists of low-rank elements. In a second step, we apply this process in microscopy to large fields of view and spatially varying operators. This implementation is made possible by additional methods for processing real images (e.g. background, noise, discretization of the observation). The construction of an operator subspace is only one step in the resolution of blind inverse problems: it is then necessary to identify the degradation operator within this set from a single observed image. In this thesis, we provide a mathematical framework for this operator identification problem in the case where the original image consists of point sources. Theoretical conditions arise from this work, allowing a better understanding of when this problem can be solved. We illustrate how this formal study enables the resolution of a blind deblurring problem on a microscopy example. [...]
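The product-convolution expansion mentioned in this abstract has a compact numerical form: the spatially varying operator applies as H x = Σ_k h_k ∗ (w_k ⊙ x), a sum of ordinary convolutions of weighted copies of the image. Below is a minimal sketch under assumed Gaussian kernels and linear weight maps; it illustrates the expansion, not the PSF-Estimator implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def product_convolution(x, kernels, weights):
    """Apply a spatially varying blur H x = sum_k h_k * (w_k . x).

    x        : 2D image
    kernels  : list of 2D convolution kernels h_k
    weights  : list of 2D weight maps w_k (same shape as x)
    """
    out = np.zeros_like(x, dtype=float)
    for h, w in zip(kernels, weights):
        out += fftconvolve(w * x, h, mode="same")
    return out

# Toy example: two normalized Gaussian PSFs blended left-to-right,
# so the blur widens smoothly across the field of view.
n = 64
yy, xx = np.mgrid[-3:3:7j, -3:3:7j]

def gauss(s):
    g = np.exp(-(xx**2 + yy**2) / (2 * s**2))
    return g / g.sum()

ramp = np.tile(np.linspace(0, 1, n), (n, 1))      # weight map w_2; w_1 = 1 - w_2
img = np.zeros((n, n))
img[n // 2, n // 2] = 1.0                          # point source
blurred = product_convolution(img, [gauss(0.5), gauss(1.5)], [1 - ramp, ramp])
```

Since both kernels are normalized and the weight maps sum to one, the operator conserves the total intensity of the point source.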
Julliand, Thibault. "Automatic noise-based detection of splicing in digital images." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC2057.
In this dissertation, we present three new image-forensics methods to detect splicing in digital images by exploiting image noise statistics. To do so, we introduce a new tool, the noise density histogram, and its derivative, the noise density contribution histogram. Our methods allow splicing detection on both raw and JPEG images. Although noise discrepancies have already been used to detect splicing many times, most existing methods tend to perform poorly on the current generation of high-quality images, with high resolution and low noise. The effectiveness of our approaches is demonstrated over a large set of such images, with randomly generated splices. We also present a detailed analysis of the evolution of the noise in a digital camera, and of how it affects various existing forensics approaches. In a final part, we use the tool we developed in a counter-forensics approach, in order to hide the trace left by splicing on the image noise.
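The principle of exposing a splice through inconsistent local noise statistics can be illustrated with a much simpler stand-in for the noise density histogram; the median-filter residual and window sizes below are assumptions for illustration, not the thesis's exact estimator.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def local_noise_map(img, res_size=3, win=8):
    """Rough per-pixel noise level: std of the high-pass residual
    (image minus its median-filtered version) over a local window."""
    residual = img - median_filter(img, size=res_size)
    return np.sqrt(np.maximum(uniform_filter(residual**2, size=win), 0))

# Synthetic splicing: the right half is pasted from a much noisier source.
rng = np.random.default_rng(0)
img = np.full((128, 128), 100.0)
img += rng.normal(0, 1.0, img.shape)           # host camera noise level
img[:, 64:] += rng.normal(0, 8.0, (128, 64))   # spliced, noisier region

sigma = local_noise_map(img)
hist, edges = np.histogram(sigma, bins=50)     # two modes: host vs. splice
```

A pristine image yields a unimodal histogram of local noise levels; a splice with mismatched noise produces a second mode, which is the cue such detectors exploit.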
Mugnier, Laurent. "Problèmes inverses en Haute Résolution Angulaire." Habilitation à diriger des recherches, Université Paris-Diderot - Paris VII, 2011. http://tel.archives-ouvertes.fr/tel-00654835.
De, Buhan Maya. "Problèmes inverses et simulations numériques en viscoélasticité 3D." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00552111.
Guadarrama, Lili. "Imagerie en régime temporel." Phd thesis, Ecole Polytechnique X, 2010. http://pastel.archives-ouvertes.fr/pastel-00543301.
Cindea, Nicolae. "Problèmes inverses et contrôlabilité avec applications en élasticité et IRM." Phd thesis, Université Henri Poincaré - Nancy I, 2010. http://tel.archives-ouvertes.fr/tel-00750955.
El, Houari Karim. "Modélisation et imagerie électrocardiographiques." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S063/document.
The estimation of solutions of the inverse problem of electrocardiography (ECG) is of major interest in the diagnosis and catheter-based therapy of cardiac arrhythmia. It consists in non-invasively providing 3D images of the spatial distribution of cardiac electrical activity based on anatomical and electrocardiographic data. On the one hand, this problem is challenging due to its ill-posed nature. On the other hand, validation of proposed methods on clinical data remains very limited. An alternative is to evaluate the performance of these methods on data simulated by a cardiac electrical model. For this application, existing models are either too complex or do not produce realistic cardiac patterns. As a first step, we designed a low-resolution heart-torso model that generates realistic cardiac mappings and ECGs in healthy and pathological cases. This model is built upon a simplified heart-torso geometry and implements the monodomain formalism using the Finite Element Method (FEM). Parameters were identified using an evolutionary approach and their influence was analyzed by a screening method. In a second step, a new approach for solving the inverse problem was proposed and compared to classical methods in healthy and pathological cases. This method uses a spatio-temporal prior on the cardiac electrical activity and the discrepancy principle for finding an adequate regularization parameter.
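The discrepancy principle mentioned above selects the regularization parameter so that the data residual matches the assumed noise level. A generic sketch with plain Tikhonov regularization (the thesis uses a spatio-temporal prior, which is not reproduced here):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Closed-form minimizer of ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_lambda(A, b, delta, lams):
    """Discrepancy principle: pick the largest lambda whose residual
    ||A x_lam - b|| does not exceed the noise level delta."""
    for lam in sorted(lams, reverse=True):
        x = tikhonov(A, b, lam)
        if np.linalg.norm(A @ x - b) <= delta:
            return lam, x
    lam = min(lams)
    return lam, tikhonov(A, b, lam)

# Toy ill-posed-ish problem with known noise level.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
noise = rng.normal(scale=0.1, size=40)
b = A @ x_true + noise
delta = 1.1 * np.linalg.norm(noise)            # assumed known noise level
lam, x_hat = discrepancy_lambda(A, b, delta, np.logspace(-4, 2, 30))
```

Larger lambdas over-smooth (residual above delta); smaller ones fit the noise. The principle stops at the crossover, trading fidelity against stability without hand-tuning.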
Lazzaretti, Marta. "Algorithmes d'optimisation dans des espaces de Banach non standard pour problèmes inverses en imagerie." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4009.
This thesis focuses on the modelling, theoretical analysis and numerical implementation of advanced optimisation algorithms for imaging inverse problems (e.g., image reconstruction in computed tomography, image deconvolution in microscopy) in non-standard Banach spaces. It is divided into two parts: in the former, the setting of Lebesgue spaces with a variable exponent map L^{p(·)} is considered to improve the adaptivity of the solution with respect to standard Hilbert-space reconstructions; in the latter, a modelling in the space of Radon measures is used to avoid the biases observed in sparse regularisation methods due to discretisation. In more detail, the first part explores both smooth and non-smooth optimisation algorithms in reflexive L^{p(·)} spaces, which are Banach spaces endowed with the so-called Luxemburg norm. As a first result, we provide an expression of the duality maps in those spaces, which are an essential ingredient for the design of effective iterative algorithms. To overcome the non-separability of the underlying norm and the consequent heavy computation times, we then study the class of modular functionals, which directly extend the (non-homogeneous) p-th power of the L^p norm to the general L^{p(·)} setting. In terms of these modular functions, we formulate handy analogues of the duality maps, which are amenable to both smooth and non-smooth optimisation algorithms thanks to their separability. We thus study modular-based gradient descent (in both a deterministic and a stochastic setting) and modular-based proximal gradient algorithms in L^{p(·)}, and prove their convergence in function values. The spatial flexibility of such spaces proves particularly advantageous in addressing sparsity, edge preservation and heterogeneous signal/noise statistics, while remaining efficient and stable from an optimisation perspective.
We validate this extensively on 1D/2D exemplar inverse problems (deconvolution, mixed denoising, CT reconstruction). The second part of the thesis focuses on off-the-grid Poisson inverse problems formulated within the space of Radon measures. Our contribution is a variational model coupling a Kullback-Leibler data term with the Total Variation regularisation of the desired measure (that is, a weighted sum of Diracs), together with a non-negativity constraint. A detailed study of the optimality conditions and of the corresponding dual problem is carried out, and an improved version of the Sliding Frank-Wolfe algorithm is used to compute the numerical solution efficiently. To mitigate the dependence of the results on the choice of the regularisation parameter, a homotopy strategy is proposed for its automatic tuning: at each iteration, the algorithm checks whether an informed stopping criterion defined in terms of the noise level is satisfied, and updates the regularisation parameter accordingly. Several numerical experiments are reported on both simulated 2D and real 3D fluorescence microscopy data.
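A flavour of the modular-based iteration can be given on a toy problem: the separable modular ρ(x) = Σ_i |x_i|^{p_i}/p_i has the explicit gradient sign(x_i)|x_i|^{p_i-1}, which can be plugged into a plain gradient iteration. This sketch uses a constant exponent p = 1.3 and omits the variable-exponent machinery and convergence safeguards of the thesis.

```python
import numpy as np

def modular_grad(x, p):
    """Gradient of the separable modular rho(x) = sum_i |x_i|^{p_i} / p_i."""
    return np.sign(x) * np.abs(x) ** (p - 1)

def solve(A, b, p, lam=0.1, iters=2000):
    """Gradient descent on 0.5 ||A x - b||^2 + lam * rho(x), for p_i > 1."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - b) + lam * modular_grad(x, p))
    return x

# Sparse recovery toy problem: exponents close to 1 promote sparsity.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 15))
x_true = np.zeros(15)
x_true[[2, 7]] = [3.0, -2.0]
b = A @ x_true
p = np.full(15, 1.3)         # constant exponent map for illustration
x_hat = solve(A, b, p)
```

With a spatially varying exponent map, p(·) can be pushed toward 1 where sparsity is expected and toward 2 where smooth behaviour is expected, which is the adaptivity argued for in the abstract.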
Soulez, Ferréol. "Une approche problèmes inverses pour la reconstruction de données multi-dimensionnelles par méthodes d'optimisation." Phd thesis, Université Jean Monnet - Saint-Etienne, 2008. http://tel.archives-ouvertes.fr/tel-00379735.
The "inverse problems" approach consists in seeking causes from their effects, that is, estimating the parameters describing a system from its observation. To do so, one uses a physical model describing the cause-and-effect relationships between the parameters and the observations; the term "inverse" refers to the inversion of this forward model. However, while the same causes generally produce the same effects, a given effect can have several different causes, and it is often necessary to introduce priors to reduce the ambiguity of the inversion. In this work, this problem is solved by using optimisation methods to estimate the parameters minimising a cost function that combines a term derived from the data-formation model and a prior term.
We use this approach to address the blind deconvolution of heterogeneous multidimensional data, that is, data whose different dimensions have different meanings and units. For that purpose we established a general framework with a separable prior term, which we successfully adapted to various applications: deconvolution of multi-spectral data in astronomy, of colour images in Bayer imaging, and blind deconvolution of biomedical video sequences (coronarography, classical and confocal microscopy).
The same approach was used in digital holography for particle image velocimetry (DH-PIV). A hologram of spherical micro-particles is composed of diffraction patterns containing information on the 3D position and radius of these particles. Using a physical model of hologram formation, the inverse-problems approach allowed us to avoid the problems associated with hologram reconstruction (border effects, twin images, ...) and to estimate the 3D positions and radii of the particles with an accuracy improved by at least a factor of 5 compared with classical reconstruction-based methods. Moreover, with this method we were able to detect particles outside the sensor's field of view, enlarging the volume of interest by a factor of 16.
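The cost function central to this approach, a data-formation term plus a prior term, has a closed-form minimizer in the purely quadratic case, which makes for a compact illustration of regularized deconvolution. Periodic boundary conditions and a Laplacian smoothness prior are assumptions of this sketch, not the separable priors of the thesis.

```python
import numpy as np

def regularized_deconvolution(y, h, lam=1e-6):
    """Minimize 0.5 ||h * x - y||^2 + 0.5 * lam ||laplacian(x)||^2
    in the Fourier domain (periodic boundary conditions)."""
    H = np.fft.fft2(h, s=y.shape)
    # Transfer function of the discrete Laplacian (smoothness prior).
    lap = np.zeros(y.shape)
    lap[0, 0] = -4
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = 1
    L = np.fft.fft2(lap)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(X))

# Noiseless toy: blur a point source with a box kernel, then deconvolve.
x = np.zeros((32, 32))
x[16, 16] = 1.0
h = np.ones((3, 3)) / 9.0
y = np.real(np.fft.ifft2(np.fft.fft2(h, s=x.shape) * np.fft.fft2(x)))
x_hat = regularized_deconvolution(y, h, lam=1e-4)
```

The regularization weight lam balances fidelity against smoothness: with noisy data it must be raised, exactly the ambiguity-versus-prior trade-off described in the abstract. (A very small lam is used here because the toy data are noiseless.)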
Ravon, Gwladys. "Problèmes inverses pour la cartographie optique cardiaque." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0118/document.
Since the 1980s, optical mapping has become an important tool for the study and understanding of cardiac arrhythmias. This experiment allows the visualization of fluorescence fluxes through the tissue surface, the fluorescence being directly related to the transmembrane potential. Information about its three-dimensional distribution is hidden in the surface data. Our aim is to exploit these surface measurements to reconstruct the depolarization front through the thickness of the tissue. For that purpose we developed a method based on the resolution of an inverse problem. The forward problem consists of two diffusion equations and a parametrization of the wavefront; solving the inverse problem identifies the front characteristics. The method has been tested on in silico data with different front parametrizations (expanding sphere, eikonal equation). The results obtained are very satisfactory and are compared to a method derived by Khait et al. [1]. Moving to experimental data brought to light an inconsistency in the model. We detail the possible causes we explored to improve the model: constant illumination, optical parameters, accuracy of the diffusion approximation. Several inverse problems are considered in this manuscript, involving several cost functions and associated gradients; in each case the gradient is computed explicitly, and the minimization is typically performed with a gradient method. The presented method was also applied to data other than cardiac optical mapping.
Wahab, Abdul. "Modeling and imaging of attenuation in biological media." Palaiseau, Ecole polytechnique, 2011. https://theses.hal.science/docs/00/67/41/09/PDF/Manuscript.pdf.
This thesis is devoted to the study of inverse problems related to acoustic and elastic source localization in attenuating media from boundary measurements, and to their applications in biomedical imaging. We present efficient and stable algorithms to compensate for the effects of attenuation on image resolution. We develop Radon-transform-based algorithms to recover the initial pressure distribution in attenuating media, with and without imposed boundary conditions. We apply the stationary phase theorem to an ill-conditioned attenuation operator to rectify the attenuation effect, and use TV-Tikhonov regularization methods to handle partial-measurement problems. We revisit time-reversal methods for lossless media and extend them to attenuating media. As attenuated waves are not time-reversible, we back-propagate regular approximations of the adjoint attenuated waves to reconstruct sources stably with a first-order attenuation correction. For acoustic media, we present an alternative strategy based on data pre-processing for higher-order corrections. As the data in elastic media consist of coupled shear and pressure waves, we propose an original approach based on a weighted Helmholtz decomposition. Further, we introduce efficient weighted imaging algorithms for locating acoustic noise sources by cross-correlation techniques, using regularized back-propagators for attenuation correction. We also localize spatially correlated noise sources and estimate the correlation matrix between them. In order to extend elastic anomaly detection algorithms to visco-elastic media, we derive a closed-form expression for an isotropic visco-elastic Green function. We then propose an attenuation correction technique for a quasi-incompressible medium and prove that one can recover, approximately, the ideal (inviscid) Green function from the visco-elastic one by inverting an ordinary differential operator.
Finally, we provide some anisotropic visco-elastic Green functions, with the aim of extending our results to anisotropic media.
Gholami, Yaser. "Two-dimensional seismic imaging of anisotropic media by full waveform inversion." Nice, 2012. http://www.theses.fr/2012NICE4048.
Exploring the solid Earth for hydrocarbons, in response to society's needs, is one of the main tasks of seismic imaging. Within modern geophysics, seismic imaging by full waveform inversion (FWI) aims to improve and refine the imaging of shallow and deep structures. Theoretically, the FWI method takes into account all the data gathered from the subsurface in order to extract information about the physical parameters of the Earth. The kernel of FWI is the full wave equation, which lies at the heart of the forward-modeling engine. The FWI problem is formulated as a least-squares local optimisation problem that retrieves quantitative values of the subsurface physical parameters. Seismic images are affected by anisotropy, which manifests in seismic data as anomalies in traveltime, amplitude and waveform. In order to avoid mis-focused and mis-positioned events in seismic imaging and to obtain accurate model parameters, which are valuable lithology indicators, anisotropy needs to be integrated into propagation-inversion workflows. In this context, the aim of this work is to develop two-dimensional FWI for vertically transverse isotropic (VTI) media. The physical parameters describing the Earth are elastic moduli or wave speeds and Thomsen parameters. The forward modeling and the inversion are performed entirely in the frequency domain. The frequency-domain anisotropic P-SV wave propagation is discretized with the discontinuous Galerkin finite element method. Full waveform modelling (FWM) is performed for VTI and tilted transverse isotropic (TTI) media on various synthetic examples. The gradient of the misfit function is computed by the adjoint-state method. The linearized inverse problem is solved with the quasi-Newton l-BFGS algorithm, which builds an estimate of the Hessian matrix from a preconditioner and a few gradients of previous iterations.
Three categories of parametrization are proposed for the model space of the inverse problem. A sensitivity analysis of the acoustic VTI FWI method is performed by studying the partial derivatives of the pressure wavefield and a grid analysis of the least-squares misfit functional. The conclusions inferred from this sensitivity analysis are verified by FWI experiments on a simple synthetic model, and the anisotropic parameter classes that can be well retrieved by VTI FWI are identified. Furthermore, acoustic VTI FWI is applied to the realistic synthetic Valhall benchmark for a wide-aperture surface acquisition survey. Anisotropic acoustic and elastic FWI are performed on the three components of ocean-bottom cable (OBC) data sets from the Valhall oil/gas field located in the North Sea.
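The least-squares local optimisation loop at the core of FWI can be sketched generically: a forward model, a misfit with its adjoint-style gradient, and l-BFGS. The toy forward model below stands in for the wave-equation solver and is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "forward modeling engine": a linear operator with a mild
# nonlinearity, standing in for a frequency-domain wave-equation solver.
rng = np.random.default_rng(3)
G = rng.normal(size=(50, 10))

def forward(m):
    u = G @ m
    return u + 0.1 * u**2

m_true = 0.1 * rng.normal(size=10)      # "true" model parameters
d_obs = forward(m_true)                 # observed data

def misfit_and_grad(m):
    """Least-squares misfit 0.5 ||d(m) - d_obs||^2 and its gradient;
    the gradient has the adjoint-state structure J(m)^T r."""
    u = G @ m
    r = forward(m) - d_obs
    J = G + 0.2 * u[:, None] * G        # Jacobian of the forward map
    return 0.5 * r @ r, J.T @ r

res = minimize(misfit_and_grad, np.zeros(10), jac=True, method="L-BFGS-B")
```

In real FWI the Jacobian is never formed; the adjoint-state method supplies J^T r through one extra wave simulation, which is what makes the quasi-Newton scheme tractable at field scale.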
Dai, Wei-Wen. "Études de la méthode des éléments frontière : développement d'un algorithme de reconstruction en imagerie d'impédance." Toulouse, INPT, 1994. http://www.theses.fr/1994INPT018H.
Berdeu, Anthony. "Imagerie sans lentille 3D pour la culture cellulaire 3D." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS036/document.
This PhD work is at the interface of two fields: 3D cell culture and lens-free imaging. Providing a more physiologically realistic cell culture protocol by switching from single-layer (2D) cultures to three-dimensional (3D) cultures, via the use of extracellular gels in which cells can grow in three dimensions, is at the origin of breakthroughs in several fields such as developmental biology, oncology and regenerative medicine. The study of these new 3D structures creates a need for 3D imaging. On the other side, 2D lens-free imaging provides a robust, inexpensive, label-free and non-toxic tool to study cell cultures in two dimensions over large scales and long periods of time. This type of microscopy records the interferences produced by coherent light scattered by the biological sample. Knowing the physics of light propagation, these holograms are back-propagated numerically to reconstruct the unknown object; the reconstruction algorithm takes over the role of the absent lenses in image formation. The aim of this PhD is to show that this lens-free technology can be adapted to the imaging of 3D cell cultures. New lens-free microscopes are designed and built, along with dedicated tomographic reconstruction algorithms. Concerning the prototypes, several solutions are tested, finally converging on a scheme combining two requirements. The first is simplicity of use, with cell culture in a standard Petri dish requiring no specific preparation or change of container. The second is the best possible coverage of illumination angles given the geometric constraint imposed by the first requirement. Finally, an incubator-proof version is successfully built and tested. Regarding the algorithms, four major types of solutions are implemented, all based on the Fourier diffraction theorem conventionally used in optical diffractive tomography.
All methods aim to correct two problems inherent to a lens-free microscope: the absence of phase information, since the sensor is sensitive only to the intensity of the incident wave, and the limited angular coverage. The first algorithm simply replaces the unknown phase with that of an incident plane wave; this method is fast but produces many artifacts. The second solution estimates the missing phase by approximating the unknown object by an average plane and uses the tools of 2D lens-free microscopy to recover the missing phase in an inverse-problem approach. The third solution implements a regularized inverse-problem approach on the 3D object to be reconstructed. This is the most effective method for dealing with the two problems mentioned above, but it is very slow. The fourth and last solution is based on a modified Gerchberg-Saxton algorithm with a regularization step on the object. All these methods are compared and tested successfully on numerical simulations and experimental data. Comparisons with conventional microscope acquisitions show the validity of the reconstructions in terms of shape and positioning of the retrieved objects, as well as the accuracy of their three-dimensional positioning. Biological samples are reconstructed over volumes of several tens of cubic millimeters, inaccessible to standard microscopy. Moreover, 3D time-lapse data successfully acquired in incubators show the relevance of this type of imaging by highlighting large-scale interactions between cells, or between cells and their three-dimensional environment.
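The Gerchberg-Saxton iteration named in the fourth solution alternates between imposing the known object-plane and measurement-plane amplitudes while keeping the current phase estimate. A textbook 2D version (without the thesis's regularization step or the tomographic 3D setting):

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_meas, iters=200):
    """Classic Gerchberg-Saxton phase retrieval: recover the phase of a
    field from its amplitude and the amplitude of its Fourier transform."""
    field = amp_obj.astype(complex)                       # flat-phase start
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = amp_meas * np.exp(1j * np.angle(F))           # impose measured amplitude
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))    # impose object amplitude
    return field

# Toy phase object: uniform amplitude, smooth quadratic phase.
n = 32
yy, xx = np.mgrid[:n, :n] / n - 0.5
true = np.exp(1j * 4 * (xx**2 + yy**2))
amp_obj = np.abs(true)
amp_meas = np.abs(np.fft.fft2(true))
rec = gerchberg_saxton(amp_obj, amp_meas)
```

The iteration is guaranteed not to increase the amplitude-consistency error, but it can stagnate and the solution carries global-phase and twin-image ambiguities, which is why regularized variants such as the one in this thesis are used in practice.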
Friedrich, Corentin. "Méthodes de reconstruction en tomographie de diffraction 3-D." Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0013.
This thesis is focused on microwave tomography, an imaging technique that estimates a three-dimensional map of the dielectric properties of an unknown volume from measurements of the electromagnetic field produced by a known incident wave and scattered by this volume. It is a promising technique used in various applications (medical imaging, geophysics, non-destructive testing, ...), but it suffers from high computational costs, which is one reason why microwave imaging is not widely used in industry. Microwave imaging is formulated as an inverse problem, where the error between the measurements and a forward model describing the scattered field is minimized as a function of the properties of the volume. This inverse problem is ill-posed because the number of unknowns is higher than the number of measurements. It is tackled through the minimization of a regularized least-squares cost function, addressed by local iterative optimization algorithms. Moreover, the forward model is non-linear, so reconstruction is a difficult and expensive procedure: the computation of the objective function and its gradient requires the resolution of a large number of linear systems at each iteration of the optimization algorithm, and this represents most of the computational cost. In this thesis, we propose to reduce the computational cost of the reconstruction algorithms by focusing on the resolution of these linear systems. Two contributions are presented. The first is a procedure that reduces the number of linear systems, depending on the configuration of the measurement setup. The second offers an efficient way to speed up the resolution of the systems: we adapt block resolution algorithms in order to jointly solve multiple linear systems involving a common operator matrix.
These methods are validated on simulated, realistic 3D problems and applied to the reconstruction of real objects from experimental measurements of scattered fields. Satisfactory results are obtained, with the computation time reduced by a factor of two, in particular for the most difficult reconstruction problems.
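The benefit of jointly solving linear systems that share an operator matrix can be seen even with a direct solver: the common matrix is factorized once and the factorization is reused for every right-hand side. (The thesis adapts iterative block algorithms; this sketch only illustrates the shared-operator principle.)

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(4)
n, k = 200, 16
A = rng.normal(size=(n, n)) + n * np.eye(n)   # common, well-conditioned operator
B = rng.normal(size=(n, k))                   # k right-hand sides (e.g. sources)

# Naive approach: one full solve per right-hand side.
X_naive = np.column_stack([np.linalg.solve(A, B[:, j]) for j in range(k)])

# Block approach: factorize the common operator once, reuse it for all RHS.
lu, piv = lu_factor(A)
X_block = lu_solve((lu, piv), B)
```

The factorization costs O(n^3) but each reuse only O(n^2), so with many right-hand sides (one per source/frequency) the shared work dominates the savings, the same economy that block Krylov methods exploit iteratively.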
Abi Rizk, Ralph. "High-resolution hyperspectral reconstruction by inversion of integral field spectroscopy measurements. Application to the MIRI-MRS infrared spectrometer of the James Webb Space Telescope." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG087.
This thesis deals with inverse-problem approaches to reconstruct a 3D spatio-spectral image from a set of 2D infrared measurements provided by the Integral Field Spectrometer (IFS) instrument (Mid-Resolution Spectrometer: MRS) of the Mid-Infrared Instrument onboard the James Webb Space Telescope. The reconstruction is challenging because the IFS involves complex components that degrade the measurements: (1) the responses of the components are not perfect and introduce wavelength-dependent spatial and spectral blurring, (2) the instrument makes several observations of the input with different spatial and spectral fields of view, (3) the output measurements are projected onto multiple 2D detectors and sampled with heterogeneous step sizes. The 3D image reconstruction is an ill-posed problem, mainly due to spatio-spectral blurring and insufficient spatial sampling. To compensate for the loss of spatial information, the MRS allows multiple observations of the same scene by shifting the telescope pointing, leading to a multi-frame Super-Resolution (SR) problem. We propose an SR reconstruction algorithm that jointly processes the spatial and spectral information of the degraded 2D measurements in two main steps. First, we design a forward model that describes the response of the IFS instrument as a series of mathematical operators and establishes a relationship between the measurements and the unknown 3D input image. Next, the forward model is used to reconstruct the unknown input. The reconstruction is based on a regularized least-squares approach with a convex edge-preserving regularization, and we rely on fast half-quadratic approaches based on the Geman and Reynolds formulation to solve the problem.
The proposed algorithm mainly includes a fusion step for measurements from different spatio-spectral observations with different blurs and samplings, a multi-frame super-resolution step exploiting the different pointings of the instrument, and a deconvolution step to reduce the blurring. Another forward model for the same instrument is also developed in our work, assuming that the 3D input image lives in a low-dimensional subspace and can be modeled as a linear combination of known spectral components weighted by unknown mixing coefficients: the Linear Mixing Model (LMM). We then rely on the Majorize-Minimize Memory Gradient (3MG) optimization algorithm to estimate the unknown mixing coefficients. The subspace approximation reduces the number of unknowns, and consequently the signal-to-noise ratio is increased. In addition, the LMM formulation with known spectral components preserves the complex spectral information of the reconstructed 3D image. The proposed reconstruction is tested on several synthetic HS images with different spatial and spectral distributions. Our algorithm shows a clear deconvolution and a significant improvement of the spatial and spectral resolutions of the reconstructed images compared to state-of-the-art algorithms, particularly around edges.
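The Linear Mixing Model reduces each pixel spectrum to a combination S a of known spectral components S with unknown coefficients a. Ignoring the instrument response and the 3MG solver used in the thesis, the basic estimation problem is a linear least-squares fit:

```python
import numpy as np

# Linear Mixing Model: each pixel spectrum is S @ a, with S the known
# spectral components (bands x components) and a the mixing coefficients.
rng = np.random.default_rng(5)
bands, comps, npix = 60, 3, 100
S = np.abs(rng.normal(size=(bands, comps)))        # known component spectra
A_true = np.abs(rng.normal(size=(comps, npix)))    # unknown coefficient maps
Y = S @ A_true + rng.normal(scale=1e-3, size=(bands, npix))  # observed cube

# Least-squares estimate of the mixing coefficients, all pixels at once.
A_hat, *_ = np.linalg.lstsq(S, Y, rcond=None)
```

Estimating 3 coefficients per pixel instead of 60 band values is the subspace approximation invoked in the abstract: fewer unknowns for the same data, hence a better-conditioned problem and a higher effective signal-to-noise ratio.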
Jolivet, Frederic. "Approches "problèmes inverses" régularisées pour l'imagerie sans lentille et la microscopie holographique en ligne." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES012/document.
In digital imaging, regularized inverse-problem methods reconstruct the information of interest from measurements and an image formation model. With an inverse problem that is ill-posed and ill-conditioned, and an image formation model having few constraints, it is necessary to introduce a priori conditions in order to restrict the ambiguity of the inversion and guide the reconstruction towards a satisfactory solution. This thesis develops reconstruction algorithms for digital holograms based on large-scale optimization methods (smooth and non-smooth). This general framework allows us to propose different approaches adapted to the challenges of this unconventional imaging technique: super-resolution, reconstruction outside the sensor's field of view, color holography and, finally, the quantitative reconstruction of phase objects (i.e. transparent objects). In this last case, the reconstruction problem consists of estimating the complex 2D transmittance of objects that absorbed and/or dephased the light wave during the recording of the hologram. The proposed methods are validated with numerical simulations and then applied to experimental data from lensless imaging or from in-line holographic microscopy (coherent imaging in transmission, with a microscope objective). The applications range from the reconstruction of opaque resolution targets to the reconstruction of biological objects (bacteria), including the reconstruction of evaporating ether droplets for the study of turbulence in fluid mechanics.
Szasz, Teodora. "Advanced beamforming techniques in ultrasound imaging and the associated inverse problems." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30221/document.
Ultrasound (US) allows non-invasive and ultra-high frame rate imaging at reduced cost. Cardiac, abdominal, fetal and breast imaging are some of the applications where it is extensively used as a diagnostic tool. In a classical US scanning process, short acoustic pulses are transmitted through the region of interest of the human body, and the backscattered echo signals are then beamformed to create radiofrequency (RF) lines. Beamforming (BF) plays a key role in US image formation, influencing the resolution and contrast of the final image. The objective of this thesis is to model BF as an inverse problem relating the raw channel data to the signals to be recovered. The proposed BF framework improves the contrast and spatial resolution of US images compared with existing BF methods. To begin with, we review the most common BF techniques in medical US imaging, from the standard delay-and-sum method to the best-known adaptive techniques such as minimum variance BF. Afterwards, we investigate the use of sparse priors to create original two-dimensional beamforming methods for ultrasound imaging. The proposed approaches detect the strong reflectors in the scanned medium based on the well-known Bayesian information criterion used in statistical modeling. Furthermore, we propose a new way of addressing BF in US imaging, formulating it as a linear inverse problem relating the reflected echoes to the signal to be recovered. Our approach offers flexibility in the choice of statistical assumptions on the signal to be beamformed and is robust to a reduced number of pulse emissions. Finally, we investigate the use of the non-Gaussianity properties of RF signals in the BF process, by assuming alpha-stable statistics of US images.
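The standard delay-and-sum beamformer that this review starts from aligns the per-element echoes with geometry-derived delays before summing them, so that echoes from the focal point add coherently. A minimal sketch with integer sample delays (real implementations interpolate fractional delays and apply apodization weights):

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Delay-and-sum: advance each channel by its delay, then sum.

    channels       : (n_elements, n_samples) raw RF channel data
    delays_samples : integer delay (in samples) per element for one focal point
    """
    n_el, n_s = channels.shape
    out = np.zeros(n_s)
    for sig, d in zip(channels, delays_samples):
        out[: n_s - d] += sig[d:]
    return out

# Toy aperture: one scatterer produces the same pulse on every element,
# arriving later on elements farther from the scatterer.
n_el, n_s = 8, 256
pulse = np.sin(2 * np.pi * np.arange(20) / 10) * np.hanning(20)
delays = np.array([2 * e for e in range(n_el)])   # assumed geometric delays
channels = np.zeros((n_el, n_s))
for e in range(n_el):
    start = 100 + delays[e]
    channels[e, start:start + 20] = pulse
beamformed = delay_and_sum(channels, delays)
```

After alignment the eight copies of the pulse sum coherently (an 8x amplitude gain at the focal point), while uncorrelated signals from other depths add incoherently; the inverse-problem formulations of the thesis replace this fixed summation with an estimation step.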
Balocco, Simone. "3D dynamic ultrasonic model of pathologic artery : application to the assessment of arterial wall mechanical parameters." Lyon 1, 2006. http://www.theses.fr/2006LYO10145.
A mathematical multiphysics 3D model reproducing the biomechanical behavior of human vessels and the related echographic imaging has been developed. The geometry of the model is a multilayer structure based on right generalized cylinders (RGC), enabling the representation of pathological and healthy vessel structures (stenoses and bifurcations). The blood flow has been simulated considering a dynamical displacement of the erythrocytes, and a mechanical model enables the computation of the arterial wall pulsation due to the hydraulic flow pressure. An acoustic characterization of each region has been performed, and radiofrequency signals have been obtained in order to reproduce ultrasound images. The research project has been focused on two main applications, which represent respectively a simulation tool and a diagnostic support tool.
Barrière, Paul-André. "DÉVELOPPEMENT D'ALGORITHMES D'INVERSION RAPIDES ET PROPOSITIONS RELATIVES À LA CONFIGURATION DU MONTAGE DE MESURES DANS UN CONTEXTE DE TOMOGRAPHIE MICRO-ONDES APPLIQUÉE À LA DÉTECTION DU CANCER DU SEIN." Phd thesis, Ecole centrale de nantes - ECN, 2008. http://tel.archives-ouvertes.fr/tel-00390344.
Since the time needed to reconstruct an image is critical for potential clinical applications of microwave tomography, we propose, in the first part, a series of algorithms offering a reduced computational cost compared with competing methods. The "current source inversion" (CSI) method is used as the starting point. We identify some weaknesses of this algorithm and propose two generalizations of it, which are faster and more robust. Two new families of methods, addressing different bottlenecks of the generalized CSI methods, are also proposed. They are based on two new formulations of the direct problem. The first is mathematically equivalent to the original one, whereas the second is based on approximations.
Regarding the configuration of the measurement setup, we show that the resolution of the reconstructed images can be significantly improved by resorting to breast compression. We also propose a setup that exploits the properties of dielectric waveguides. It makes it possible to measure the field in air rather than in a matching liquid, which opens the door to the development of more compact setups.
Paleo, Pierre. "Méthodes itératives pour la reconstruction tomographique régularisée." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT070/document.
In recent years, tomographic imaging techniques have diversified across many applications. However, experimental constraints often lead to limited data, for example in fast scans, or in medical imaging where the radiation dose is a primary concern. The data limitation may take the form of a low signal-to-noise ratio, sparse views, or a missing angular wedge. In addition, artefacts are detrimental to reconstruction quality. In these contexts, the standard techniques show their limitations. In this work, we explore how regularized tomographic reconstruction methods can handle these challenges. These methods treat the problem as an inverse problem, and the solution is generally found by means of an optimization procedure. Implementing regularized reconstruction methods entails both designing an appropriate regularization and choosing the best optimization algorithm for the resulting problem. On the modelling side, we focus on three types of regularizers in a unified mathematical framework, along with their efficient implementation: total variation, wavelets and dictionary-based reconstruction. On the algorithmic side, we study which state-of-the-art convex optimization algorithms are best suited to the problem and to parallel architectures (GPU), and propose a new algorithm for increased convergence speed. We then show how the standard regularization models can be extended to take the usual artefacts into account, namely rings and local tomography artefacts. Notably, a novel quasi-exact local tomography reconstruction method is proposed.
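As a toy illustration of the variational formulation described here, a smoothed total-variation denoising criterion can be minimized by plain gradient descent. This is a hypothetical 2-D denoising stand-in; the reconstructions in the thesis also involve the tomographic projector and much faster proximal algorithms:

```python
import numpy as np

def tv_objective(x, y, lam, eps):
    """0.5*||x - y||^2 + lam * smoothed isotropic TV."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.sqrt(gx**2 + gy**2 + eps))

def tv_denoise(y, lam=0.1, eps=1e-2, n_iter=500, step=0.02):
    """Gradient descent on the smoothed-TV denoising problem."""
    x = y.copy()
    for _ in range(n_iter):
        gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences,
        gy = np.diff(x, axis=0, append=x[-1:, :])   # replicate boundary
        norm = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / norm, gy / norm
        # Divergence = negative adjoint of the forward differences.
        div = np.empty_like(x)
        div[:, 0] = px[:, 0]
        div[:, 1:] = px[:, 1:] - px[:, :-1]
        div[0, :] += py[0, :]
        div[1:, :] += py[1:, :] - py[:-1, :]
        x -= step * (x - y - lam * div)
    return x
```

The small step size keeps the iteration inside the stability bound set by the smoothing parameter `eps`; real implementations use accelerated or primal-dual schemes on the nonsmooth TV instead.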
Zhang, Xiaoyun. "Développement d'un système d'imagerie qualitatif et quantitatif microonde pour le contrôle de l'écoulement de l'eau dans une colonne de sol." Aix-Marseille 1, 2010. http://theses.univ-amu.fr.lama.univ-amu.fr/2010AIX11015.pdf.
We present a new microwave probing technique which aims to monitor the variation of the soil water content at the depth of the crop roots. The dielectric permittivity plays a key role in this technique. Indeed, it is directly related to the soil water content, and numerous models have been proposed to describe this relationship. Meanwhile, the permittivity determines the interactions between the medium and the electromagnetic field, which are governed by the Helmholtz equation. Recovering the permittivity profile of the medium from the electromagnetic field is an inverse scattering problem; one can therefore access the map of water content in soil by solving such a problem. A circular scanner designed for studying the 2D problem is employed to transmit and receive back the electromagnetic signal which interacts with a soil column placed inside its cavity. The direct propagation problem corresponding to this configuration is formulated and solved with the finite element method (FEM). It serves to calibrate the imaging system and to generate the simulated data used to test the inversion methods discussed during this PhD work. In order to reduce the ill-posedness of the inverse scattering problem, we studied both qualitative and quantitative imaging methods. The support information on the scatterers provided by the qualitative methods may be of interest when used as an initial guess in the quantitative methods, which can retrieve the refractive index of the medium. Three qualitative methods are studied: the DORT method (decomposition of the time-reversal operator), the MUSIC (MUltiple SIgnal Classification) method and the linear sampling method (LSM). We also propose the EDORT method, which can cope with extended-size targets. Two quantitative methods based on a conjugate gradient (CG) minimization scheme are provided. The first one does not use any a priori information.
The second one uses some a priori shape information, introduced into the inversion scheme by means of a modified Heaviside function coupled with a level-set formalism. In order to test the ability of these inversion methods to monitor the variation of the soil water content, we simulate the water diffusion process in a soil column by solving the Richards equation, couple these simulations with the FEM software, and thus obtain the 'measured' fields as a function of time. The LSM and the two quantitative methods are tested with this synthetic data. Besides, a controlled water diffusion process in a soil column was realized experimentally. The associated electromagnetic fields were measured using the circular scanner. The imaging results of the LSM on the experimental data are also presented.
Fadili, Jalal M. "Une exploration des problèmes inverses par les représentations parcimonieuses et l'optimisation non lisse." Habilitation à diriger des recherches, Université de Caen, 2010. http://tel.archives-ouvertes.fr/tel-01071774.
Rondeau, Xavier. "Imagerie à travers la turbulence : mesure inverse du front d'onde et centrage optimal." Phd thesis, Université Claude Bernard - Lyon I, 2007. http://tel.archives-ouvertes.fr/tel-00220467.
Bousse, Alexandre. "Problèmes inverses, application à la reconstruction compensée en mouvement en angiographie rotationnelle X." Phd thesis, Université Rennes 1, 2008. http://tel.archives-ouvertes.fr/tel-00361396.
Once the motion has been estimated, the tomographic reconstruction at a reference time is carried out by a least-squares optimization that includes the motion as well as a penalty term favoring high intensity values for voxels in the neighborhood of the 3-D centerline, and low values elsewhere. This method was tested on simulated data based on 3-D centerlines previously extracted from MSCT data.
Ygouf, Marie. "Nouvelle méthode de traitement d'images multispectrales fondée sur un modèle d'instrument pour la haut contraste : application à la détection d'exoplanètes." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00843202.
Fromenteze, Thomas. "Développement d'une technique de compression passive appliquée à l'imagerie microonde." Thesis, Limoges, 2015. http://www.theses.fr/2015LIMO0061/document.
This work focuses on the development of a compressive technique applied to the simplification of microwave imaging systems. The principle is based on the study of passive devices able to compress transmitted and received waves, allowing for a reduction of the hardware complexity required by radar systems. The approach exploits the modal diversity of the developed components, making it compatible with ultra-wide bandwidths. Several proofs of concept are presented using different passive devices, allowing the technique to be adapted to a large variety of architectures and bandwidths.
Seppecher, Laurent. "Modélisation de l'imagerie biomédicale hybride par perturbations mécaniques." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-01021279.
Bendjador, Hanna. "Correction d'aberrations et quantification de vitesse du son en imagerie ultrasonore ultrarapide." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLS011.
Echography relies on the transmission of ultrasound signals through biological tissues and the processing of the backscattered echoes. The rise of ultrafast ultrasound imaging gave access to physiological events at more than 10,000 frames per second. It has therefore allowed the development of high-end techniques such as organ elasticity imaging or sensitive quantification of blood flow. During its propagation through complex or heterogeneous media, the acoustic wavefront may still suffer strong distortions, hindering both the image quality and the ensuing quantitative assessments. Correcting such aberrations is the ultimate goal of the research work conducted during this PhD. By studying the statistical properties of interferences between scatterers, a matrix formalism has been developed to optimise the angular coherence of backscattered echoes. Importantly, we succeeded for the first time in correcting images and quantifying the speed of sound locally at ultrafast frame rates. Sound speed was shown to be a valuable biomarker in the example of hepatic steatosis, and possibly for separating the brain's white and grey matter. The phase-correction method will be an interesting contribution to motion correction in 3D tomography and vascular imaging, thus offering new horizons to ultrasound imaging.
Morin, Renaud. "Amélioration de la résolution en imagerie ultrasonore." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2118/.
Ultrasound imaging is a medical imaging modality commonly involved in various therapeutic and monitoring diagnoses such as fetal growth, cancer detection or image-guided intervention. Despite being harmless, easy to use and cost-effective, ultrasound imaging has some intrinsic limitations regarding its spatial resolution, especially compared to other modalities such as magnetic resonance imaging. Improving the spatial resolution of ultrasound images is a current challenge, and many works have long studied instrumentation approaches dealing with the optimisation of the acquisition device. High-resolution ultrasound imaging achieves this goal through the use of specific probes but is now facing physical and technological limitations. The goal of this thesis is to use post-processing techniques to circumvent the inherent constraints of instrumental approaches. In this framework, we present two approaches for the resolution enhancement of ultrasound images, depending on whether the available data consist of an image sequence or a single image. In the former case, we show that adapting a motion estimation technique originally proposed for elastography makes it possible to design an effective high-resolution reconstruction framework dedicated to ultrasound imaging. This approach is first assessed using a realistic simulation of ultrasound images and then used to process in vivo data. In the latter case, dealing with the restoration of a single image, we develop two fast deconvolution methods for the resolution enhancement task. These approaches take into account, according to their availability, specific a priori information about the image acquisition process, such as the system's spatial impulse response. Results are presented on synthetic data and extended to in vivo ultrasound images.
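The single-image route mentioned here, restoration using a known spatial impulse response (PSF), is classically illustrated by a Tikhonov-regularized (Wiener-like) Fourier filter. This is a generic textbook sketch, not the fast deconvolution algorithms developed in the thesis; it assumes the PSF is given at the same size as the image and centered:

```python
import numpy as np

def wiener_deconvolve(img, psf, reg=1e-2):
    """Tikhonov-regularized deconvolution in the Fourier domain:
    X = conj(H) * Y / (|H|^2 + reg).

    img: observed blurred image, psf: same-shape, centered PSF,
    reg: regularization weight trading resolution against noise.
    """
    # ifftshift moves the PSF center to index (0, 0) so that H carries
    # no unwanted linear phase.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(img)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))
```

Increasing `reg` suppresses noise amplification at frequencies where the PSF transfers little energy, which is the basic resolution/stability trade-off the thesis's more refined priors address.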
Irakarama, Modeste. "Towards Reducing Structural Interpretation Uncertainties Using Seismic Data." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0060/document.
Subsurface structural models are routinely used for resource estimation, numerical simulations, and risk management; it is therefore important that subsurface models represent the geometry of geological objects accurately. The first step in building a subsurface model is usually to interpret structural features, such as faults and horizons, from a seismic image; the identified structural features are then used to build a subsurface model using interpolation methods. Subsurface models built this way therefore inherit interpretation uncertainties since a single seismic image often supports multiple structural interpretations. In this manuscript, I study the problem of reducing interpretation uncertainties using seismic data. In particular, I study the problem of using seismic data to determine which structural models are more likely than others in an ensemble of geologically plausible structural models. I refer to this problem as "appraising structural models using seismic data". I introduce and formalize the problem of appraising structural interpretations using seismic data. I propose to solve the problem by generating synthetic data for each structural interpretation and then to compute misfit values for each interpretation; this allows us to rank the different structural interpretations. The main challenge of appraising structural models using seismic data is to propose appropriate data misfit functions. I derive a set of conditions that have to be satisfied by the data misfit function for a successful appraisal of structural models. I argue that since it is not possible to satisfy these conditions using vertical seismic profile (VSP) data, it is not possible to appraise structural interpretations using VSP data in the most general case. The conditions imposed on the data misfit function can in principle be satisfied for surface seismic data. In practice, however, it remains a challenge to propose and compute data misfit functions that satisfy those conditions. 
I conclude the manuscript by highlighting practical issues of appraising structural interpretations using surface seismic data. I propose a general data misfit function that is made of two main components: (1) a residual operator that computes data residuals, and (2) a projection operator that projects the data residuals from the data-space into the image-domain. This misfit function is therefore localized in space, as it outputs data misfit values in the image-domain. However, I am still unable to propose a practical implementation of this misfit function that satisfies the conditions imposed for a successful appraisal of structural interpretations; this is a subject for further research
Denneulin, Laurence. "Approche inverse pour la reconstruction des environnements circumstellaires en polarimétrie avec l'instrument d'imagerie directe ESO / VLT SPHERE IRDIS." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE1183.
Observing circumstellar environments is key to understanding planet formation. While very large telescopes can resolve these environments, observing them is difficult because of the high contrast between the environments and their host stars. Indeed, host stars are 1,000 to 10,000 times brighter than the environments, and even 10,000,000 times brighter than exoplanets. When images of these circumstellar environments are acquired in direct imaging, the signal of the environment is mixed with stellar light residuals. Yet the light of the environment is partially linearly polarized, while the light of the star is unpolarized. The Infrared Dual-band Imaging and Spectroscopy (IRDIS) subsystem of the European Southern Observatory's (ESO) Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument, installed at one of the four Very Large Telescopes (VLT) in Atacama, Chile, acquires datasets in which the polarization is modulated according to a known cycle of angles. It is then possible, by combining the data, to extract the polarized signal of the environment from the unpolarized residual light of the star and the unpolarized light of the disk. The state-of-the-art methods for extracting such a signal do not optimally take into account the photon noise statistics of the data, which dominate the signal of interest, nor the readout noise of the detector. Moreover, if any image from a rotation cycle is missing, the rest of the cycle is not used. Finally, any centering and rotation of the data, or deconvolution by the PSF, is generally performed in steps separate from the data reduction, and bad or dead pixels are interpolated before processing. The consequence of such an approach is that the propagation of errors in the data is not controlled. "Inverse problem" methods allow such processing while controlling error propagation in the reconstructions.
These approaches had never been developed, so far, for high-contrast direct imaging in polarimetry. My goal in this thesis is to optimally reconstruct, from the polarimetric data of the ESO/VLT SPHERE-IRDIS instrument, maps of the polarized light of circumstellar environments, the associated polarization angles, and the unpolarized light of both the stellar residuals and the circumstellar environments. First, I develop a nonlinear, pixelwise-independent physical model of the data, parametric in these quantities of interest, or linear with respect to the Stokes parameters, from which they can be estimated. Over the course of this thesis, I complete the model by adding centering, rotations and convolutions, making it pixelwise dependent. The parameters are then estimated by minimizing an objective function derived from the co-log-likelihood of the data, under constraints such as positivity or epigraphical constraints, and with regularizations such as smooth and non-smooth total variation and the Schatten norm of the Hessian. These methods are all applied to simulated datasets created to reproduce typical astrophysical datasets obtained in polarimetric direct imaging of circumstellar environments. Depending on the properties of the functions entering the objective, the search for its minimum is carried out with different algorithms, such as the Variable Metric Limited Memory and Bounds (VMLM-B) algorithm, Forward-Backward with backtracking, and the preconditioned primal-dual Condat-Vu algorithm with backtracking. I also use the Stein Unbiased Risk Estimator (SURE) to auto-tune the weights of the regularizations. In the results, I show that using a complete direct model of the data, accounting for recentering, rotations and convolution, and estimating its parameters from a constrained problem that takes into account the measurement precision and the missing data, reduces the error in the estimated maps in this astrophysical context.
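The "linear with respect to the Stokes parameters" part of the model can be illustrated with the standard textbook relation for an analyzer at known angles, I_k = 0.5*(I + Q*cos(2θ_k) + U*sin(2θ_k)). The sketch below is that generic model only; the thesis's actual estimator also handles noise statistics, centering, rotation and convolution:

```python
import numpy as np

def estimate_stokes(intensities, angles):
    """Least-squares estimate of the Stokes parameters (I, Q, U)
    from intensities measured through an analyzer at known angles,
    using the standard linear model
        I_k = 0.5 * (I + Q*cos(2*theta_k) + U*sin(2*theta_k)).
    Detector effects and the instrument operator are omitted.
    """
    th = np.asarray(angles, dtype=float)
    # Design matrix: one row per modulation angle, columns (I, Q, U).
    A = 0.5 * np.stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)], axis=1)
    s, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float), rcond=None)
    return s  # (I, Q, U)
```

With a full modulation cycle the system is overdetermined, which is what makes it possible, in principle, to still exploit a cycle with missing frames, one of the shortcomings of the classical double-difference reductions noted above.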
Liu, Xiang. "Contrôle non destructif du sol et imagerie d'objets enfouis par des systèmes bi- et multi-statiques : de l’expérience à la modélisation." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC067/document.
The work presented in this thesis deals with the resolution of the direct and inverse problems of ground-penetrating radar (GPR). The objective is to optimize GPR performance and imaging quality. A state of the art of ground radar is presented, focusing on simulation methods and imaging techniques applied to GPR. The use of the discontinuous Galerkin (DG) method for GPR simulation is studied first. Several complete GPR scenarios are considered, and the DG simulations are validated by comparing them with the modeling of the same scenarios in CST-MWS and with measurements. Then a study of inverse problem resolution using the linear sampling method (LSM) for the GPR application is carried out. A study with synthetic data is first performed to test the reliability of the LSM. The LSM is then adapted to the GPR application by taking the radiation of the antenna into account. Finally, a study is designed to validate the detectability of underground electrical cable junctions with GPR in a real environment.
Lencrerot, Raphaël. "Outils de modélisation et d'imagerie pour un scanner micro-onde : application au contrôle de la teneur en eau d'une colonne de sel." Aix-Marseille 3, 2008. http://www.theses.fr/2008AIX30034.
Soil moisture information is a key variable for describing water and energy exchanges at the plant root/surface/air interface. Intensive effort has been undertaken to provide non-invasive geophysical methods, in particular using microwave sensors. Indeed, the real and imaginary parts of the soil dielectric constant are linked to its volumetric water content as well as its salinity. The goal of the current project is to demonstrate the potential of a non-invasive microwave imaging system for water content monitoring. The interaction between an electromagnetic wave and a target of the order of a wavelength results in a scattered wave which depends on the target properties. The measurement of this field makes it possible to retrieve information about the scatterer, in particular its dielectric permittivity. This study exploits the experimental circular scanner currently developed at Institut Fresnel, which presents a simplified configuration adequate for monitoring monoliths. Modelling and imaging numerical tools have been specifically created for this application. In particular, quantitative microwave imaging is a nonlinear ill-posed problem, and the measured fields are perturbed by noise. Therefore, all available a priori information is of great importance to stabilize the solution. To this end, we have implemented a Zernike polynomial representation and worked on the calibration procedure. The reconstructed permittivity maps obtained from experimentally measured fields are presented and discussed. We have also performed some theoretical studies based on the properties of the scattering operator, in order to gain quantitative insight into the a priori information which is relevant to consider, as well as the amount of information available.
Bujoreanu, Denis. "Échographie compressée : Une nouvelle stratégie d’acquisition et de formation pour une imagerie ultrarapide." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI098/document.
It is beyond doubt that the relatively low cost of ultrasound scanners, the quick procedure and the ability to image soft biological tissues have helped ultrasound imaging become one of the most common medical diagnostic tools. Unfortunately, ultrasound still has some drawbacks compared to other medical imaging techniques, mainly in terms of the provided image quality and details. In the quest for improved image quality, the price to pay is usually a drop in the frame acquisition rate. This deep-rooted trade-off between image quality and acquisition time is perhaps one of the most challenging in today's ultrasound research, and overcoming it could lead to diagnostic improvements in existing ultrasound applications and even pave the way towards novel uses of echography. This study addresses that trade-off. Through a mix of concepts such as plane wave imaging, multiple-input/multiple-output systems and inverse problems, this work aims at acquiring ultrasound images of the insonified tissue simultaneously, thus providing an increased frame rate without degrading image quality. Through this study we derived a mathematical model of ultrasound wave propagation inside soft tissues. This model was used to review a great number of existing ultrasound acquisition schemes and to expose their advantages and drawbacks. We proposed to overcome the image quality/frame rate trade-off by using temporally encoded ultrasound waves emitted simultaneously, and the resulting direct model enabled the use of different inverse problem approaches to reconstruct the pulse-echo impulse response of the insonified medium and thus its image. Moreover, we further improved the direct model, which allowed us to directly link the backscattered echoes to the position and magnitude of the scatterers inside the imaged medium.
The results yielded by the inverse problem approaches based on this model compare favorably with state-of-the-art methods, improving image quality several times over in terms of resolution and speckle coherence while also boosting the frame acquisition rate.
Soubies, Emmanuel. "Sur quelques problèmes de reconstruction en imagerie MA-TIRF et en optimisation parcimonieuse par relaxation continue exacte de critères pénalisés en norme-l0." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4082/document.
This thesis is devoted to two problems encountered in signal and image processing. The first one concerns the 3D reconstruction of biological structures from multi-angle total internal reflection fluorescence microscopy (MA-TIRF). Within this context, we propose to tackle the inverse problem using a variational approach and we analyze the effect of the regularization. A set of simple experiments is then proposed to both calibrate the system and validate the model used. The proposed method has been shown to be able to reconstruct precisely a phantom sample of known geometry over a 400 nm depth layer, to co-localize two fluorescent molecules used to mark the same biological structures, and to observe known biological phenomena, all with an axial resolution of 20 nm. The second part of this thesis considers more precisely the l0 regularization and the minimization of the penalized least squares criterion (l2-l0) within the context of exact continuous relaxations of this functional. First, we propose the Continuous Exact l0 (CEL0) penalty, leading to a relaxation of the l2-l0 functional which preserves its global minimizers and for which, from each local minimizer, a local minimizer of l2-l0 can be defined by simple thresholding. Moreover, we show that this relaxed functional eliminates some local minimizers of the initial functional. The minimization of this functional with nonsmooth nonconvex algorithms is then used in various applications, showing the interest of minimizing the relaxation in contrast to a direct minimization of the l2-l0 criterion. Finally, we propose a unified view of continuous penalties from the literature within this exact reformulation framework.
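For context on the l2-l0 criterion discussed here, the classical baseline is iterative hard thresholding (IHT), whose proximal step keeps a coefficient only when its square exceeds twice the scaled penalty. This sketch is the plain l2-l0 scheme, not the CEL0 relaxation the thesis proposes:

```python
import numpy as np

def iht(A, y, lam, n_iter=200, step=None):
    """Iterative hard thresholding for the penalized criterion
    0.5*||A x - y||^2 + lam*||x||_0.

    Each iteration takes a gradient step on the data term, then applies
    the l0 proximal operator: keep z when |z| > sqrt(2*lam*step), else 0.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    thresh = np.sqrt(2 * lam * step)
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)
        x = np.where(np.abs(z) > thresh, z, 0.0)  # hard threshold
    return x
```

Because the l0 penalty is nonconvex, such schemes are prone to the spurious local minimizers that the CEL0 relaxation is designed to remove.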
Le, Rousseau Jérôme. "Représentation Microlocale de Solutions de Systèmes Hyperboliques, Application à l'Imagerie, et Contributions au Contrôle et aux Problèmes Inverses pour des Equations Paraboliques." Habilitation à diriger des recherches, Université de Provence - Aix-Marseille I, 2007. http://tel.archives-ouvertes.fr/tel-00201887.
In a second part, we study controllability to trajectories for linear and semilinear parabolic equations. We focus in particular on the case of operators in divergence form whose principal-part coefficient is non-continuous. We first prove a Carleman inequality, in one space dimension, for a piecewise-$C^1$ coefficient. By passing to the limit in the Carleman inequality, this result is extended to the case of a $BV$ coefficient. With these results, we prove the controllability of these parabolic equations in one space dimension without any compatibility assumption between the control region and the signs of the jumps of the discontinuous coefficient. Moreover, we exhibit a higher-dimensional case in which the same conclusion is obtained. Finally, we use a Carleman inequality to identify the discontinuous coefficient from measurements made on the solution.
Labat, Christian. "Algorithmes d'optimisation de critères pénalisés pour la restauration d'images : application à la déconvolution de trains d'impulsions en imagerie ultrasonore." Phd thesis, Ecole centrale de nantes - ECN, 2006. http://tel.archives-ouvertes.fr/tel-00132861.
- Prove the convergence of the approximate SQ and GCNL+SQ1D algorithms.
- Establish strong links between the approximate SQ and GCNL+SQ1D algorithms.
- Illustrate experimentally, on image deconvolution, that the approximate SQ and GCNL+SQ1D algorithms are preferable to the exact SQ algorithms.
- Apply the penalized approach to an image deconvolution problem in nondestructive testing by ultrasound.
Pereira, Antonio. "Imagerie acoustique en espace clos." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00984347.
Mirabel, Xavier. "La tomographie d'impédance électrique : résolution d'un problème inverse mal-posé avec la méthode des éléments frontières par linéarisations successives." Toulouse, INPT, 1997. http://www.theses.fr/1997INPT017H.
Conil, Frédéric. "Développements instrumentaux et expérimentation en endoscopie sismique." Rennes 1, 2003. http://www.theses.fr/2003REN10164.
Sauvage, Jean-Francois. "Calibrations et méthodes d'inversion en imagerie à haute dynamique pour la détection directe d'exoplanètes." Phd thesis, Université Paris-Diderot - Paris VII, 2007. http://tel.archives-ouvertes.fr/tel-00453100.
Wintz, Timothée. "Super-resolution in wave imaging." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE052/document.
Different modalities in wave imaging each present limitations in terms of resolution or contrast. In this work, we present a mathematical model of the ultrafast ultrasound imaging modality and reconstruction methods which can improve contrast and resolution in ultrasonic imaging. We introduce two methods which improve contrast and locate blood vessels below the diffraction limit while simultaneously estimating the blood velocity. We also present a reconstruction method in electrical impedance tomography which allows reconstruction of microscopic parameters from multi-frequency measurements using the theory of homogenization.
Zhu, Sha. "A Bayesian Approach for Inverse Problems in Synthetic Aperture Radar Imaging." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00844748.
Ratsakou, Almpion. "Multi-physical modeling of thermographic inspection methods and fast imaging Fast models dedicated to simulation of eddy current thermography Fast simulation approach dedicated to infrared thermographic inspection of delaminated planar pieces Model based characterisation of delamination by means of thermographic inspection." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS002.
Thermographic inspection is a popular nondestructive testing (NdT) technique that provides images of the temperature distribution over large areas at the surfaces of tested workpieces. The concern here is detecting delaminations between metallic layers. Simulation of these inspections helps to complement experimental studies, evaluate performance in terms of detection, and support model-based algorithms. The focus is on a semi-analytical model based on a truncated-region eigenfunction expansion for the simulation of thermographic inspection. The problem is solved in the Laplace domain with respect to time, and the temperature distribution is approximated by expanding it on a tensor-product basis. The considered sources are lamps providing thermal excitation, but may also be eddy current sources (leading to a coupled electromagnetic and heat problem). Describing the delaminations as thin air gaps between the workpiece layers proves to be equivalent to introducing a surface resistance to the heat flow, enabling treatment via the applied modal approach without additional discretisation. Complementary computations by industrial (finite element method) and in-house (finite integration technique) codes confirm the accuracy of the developments. Much attention is then put on imaging and detection. A two-step procedure is devised: first, denoising of the raw signals and detection of any possible defect using thermographic signal reconstruction, leading to high spatial and temporal resolution in the transverse plane, completed by proper edge detection; second, an iterative optimization is employed, with the results of the first step used to regularize a least-squares scheme that characterizes thicknesses and depths. All of the above is illustrated by comprehensive numerical simulations in conditions close to practice.
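The denoising step named here, thermographic signal reconstruction (TSR), classically fits a low-order polynomial to each pixel's log-temperature decay against log-time. The following is a generic TSR sketch under that textbook formulation, not the thesis implementation; the synthetic cooling law used in testing is an illustrative assumption:

```python
import numpy as np

def tsr_coefficients(temps, times, degree=4):
    """Per-pixel polynomial fit of log(temperature) vs log(time).

    temps: (n_t, ny, nx) surface temperature frames after the flash
    times: (n_t,) acquisition times [s]
    Returns coefficient maps of shape (degree + 1, ny, nx),
    highest-degree coefficient first (numpy.polyfit convention).
    """
    n_t, ny, nx = temps.shape
    logt = np.log(times)
    logT = np.log(temps.reshape(n_t, -1))       # fit all pixels at once
    coeffs = np.polyfit(logt, logT, degree)
    return coeffs.reshape(degree + 1, ny, nx)

def tsr_reconstruct(coeffs, times):
    """Evaluate the fitted polynomials back into smooth temperature maps."""
    deg1 = coeffs.shape[0]
    V = np.vander(np.log(times), deg1)          # columns: logt^(d-1) ... 1
    logT = V @ coeffs.reshape(deg1, -1)
    return np.exp(logT).reshape(len(times), *coeffs.shape[1:])
```

The fitted coefficient maps (or their time derivatives) are then what feeds the edge detection and the least-squares characterization described in the second step.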
Boulier, Thomas. "Modélisation de l'électro-localisation active chez les poissons faiblement électriques." Palaiseau, Ecole polytechnique, 2013. http://www.theses.fr/2013EPXX0108.
El, Kanfoud Ibtissam. "Résolution de problèmes de rayonnement électromagnétique appliqués à l’imagerie médicale avec FreeFEM++." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4000/document.
The use of microwaves for diagnosis is booming in the medical field. One of the latest applications is the detection of strokes by microwave imaging. The company EMTensor GmbH, based in Vienna, Austria, is currently studying such a system in collaboration with LEAT, the LJAD of Université Côte d'Azur and the LJLL of Sorbonne University, for diagnosis and for monitoring treatment efficiency. The purpose of this work is to model the brain imaging measurement system developed by EMTensor GmbH. It is a transmission/reception system consisting of 160 antennas arranged in 5 rings of 32 antennas distributed on a semi-open cylindrical metal tank of circular section. One of the major issues of this work is the modeling and electromagnetic (EM) simulation of the complete system, including a realistic brain model. The difficulty lies both in the size of the EM problem to be simulated, because of the gap between the considerable size of the system and the very small size of certain inhomogeneities within the brain, and in the great heterogeneity of the dielectric permittivities present inside the brain. We decided to use the open-source software FreeFem++ for this modelling because it is well suited to high-performance computing through domain decomposition methods, which the complexity of the EM problem makes mandatory. First, we compared the simulation results for the measurement system matched to vacuum (without the brain) with measurements, and the results obtained with the FEM-based EM simulation software HFSS with those obtained with FreeFem++. We then simulated a virtual three-dimensional head model, built from brain imaging slices (CT scan and MRI) in partnership with EMTensor, searching for the position and type of stroke (ischemic or hemorrhagic). The influence of measurement noise, of the value of the matching gel used, of the coupling between the sensors, and of the coupling between the head and the sensors is also studied.
To validate these models, two simple cases were studied: a large tube and a small plastic tube, filled with a matching liquid having the dielectric characteristics of a brain, whose shapes were recovered by qualitative imaging. Finally, with the MEDIMAX project partners and the EMTensor company, we applied a quantitative method to the detection of ischemic stroke by microwave tomography. The direct problem was solved with FreeFem++, using high-order elements and parallel preconditioners for the domain decomposition method. We solved the inverse problem by a minimization algorithm, in order to reconstruct tomographic images of the brain in times compatible with the medical imperatives defined by clinicians.
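The quantitative inversion described above amounts to minimizing a data misfit between measured and simulated fields with respect to the unknown permittivity. The following sketch replaces the FreeFem++ forward solver with a generic linearized operator `A` and uses plain gradient descent; it only illustrates the structure of such a minimization, not the method of the thesis:

```python
import numpy as np

def reconstruct(A, d, lam=1e-2, n_iter=200, step=None):
    """Reconstruct a permittivity-contrast vector x from scattered-field
    data d, assuming a linearized forward model d ~ A @ x, by gradient
    descent on the Tikhonov-regularized least-squares functional
        J(x) = 0.5 * ||A x - d||^2 + 0.5 * lam * ||x||^2.
    """
    x = np.zeros(A.shape[1], dtype=complex)
    if step is None:
        # 1 / Lipschitz constant of grad J guarantees convergence
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - d) + lam * x
        x = x - step * grad
    return x
```

In the actual tomographic setting each gradient evaluation requires solving the full EM direct problem, which is why fast parallel solvers (domain decomposition in FreeFem++) are essential.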
Le, Magueresse Thibaut. "Approche unifiée multidimensionnelle du problème d'identification acoustique inverse." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI010.
Experimental characterization of acoustic sources is one of the essential steps for reducing the noise produced by industrial machinery. The aim of the thesis is to develop a complete procedure to localize and quantify both stationary and non-stationary sound sources radiating on a surface mesh, by back-propagating a pressure field measured by a microphone array. The inverse problem is difficult to solve because it is generally ill-conditioned and subject to many sources of error. In this context, it is crucial to rely on a realistic description of the direct sound propagation model. In the frequency domain, the equivalent source method has been adapted to the acoustic imaging problem in order to estimate the transfer functions between the source and the antenna, taking wave scattering into account. In the time domain, propagation is modeled as a convolution product between the source and an impulse response described in the time-wavenumber domain. A Bayesian approach proved appropriate for exploiting all the available knowledge about the sources when solving this problem. The available a priori information about the acoustic sources has been put into equations, and it has been shown that taking into account their spatial sparsity or their omnidirectional radiation can significantly improve the results. Under the assumptions made, the inverse problem solution takes the form of a Tikhonov-regularized solution. The regularization parameter has been estimated by an empirical Bayesian approach, whose superiority over methods commonly used in the literature has been demonstrated through numerical and experimental studies. In the presence of high variability of the signal-to-noise ratio over time, it has been shown that this parameter must be updated to obtain a satisfactory solution.
Finally, introducing a missing variable into the problem, reflecting partial ignorance of the propagation model, can improve, under certain conditions, the estimation of the complex amplitude of the sources in the presence of model errors. The proposed developments have been applied to the estimation of the sound power emitted by an automotive power train using the Bayesian focusing method, in the framework of the Ecobex project. The cyclo-stationary acoustic field generated by a fan motor was finally analyzed by the real-time near-field acoustic holography method.
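The Tikhonov-form solution with a Bayesian regularization parameter mentioned in this abstract can be written down compactly. In the sketch below, the ratio of noise variance to source variance plays the role of the regularization parameter; in the empirical Bayesian approach of the thesis these variances would themselves be inferred from the measured data, which this illustrative snippet does not do:

```python
import numpy as np

def bayesian_focusing(G, p, noise_var, source_var):
    """Estimate source amplitudes q from array pressures p = G q + n,
    assuming i.i.d. Gaussian noise (variance noise_var) and an i.i.d.
    Gaussian prior on the sources (variance source_var).  The MAP
    estimate has Tikhonov form:
        q_hat = (G^H G + eta I)^{-1} G^H p,  eta = noise_var / source_var
    """
    eta = noise_var / source_var
    GhG = G.conj().T @ G
    return np.linalg.solve(GhG + eta * np.eye(G.shape[1]), G.conj().T @ p)
```

Updating `noise_var` block by block over time corresponds to the re-estimation of the regularization parameter that the abstract reports as necessary when the signal-to-noise ratio varies strongly.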