Theses / dissertations on the topic "Reconstruction d’image"
See the 41 best theses / dissertations for research on the subject "Reconstruction d’image".
Browse theses / dissertations from a wide range of scientific fields and compile an accurate bibliography.
Takam Tchendjou, Ghislain. "Contrôle des performances et conciliation d’erreurs dans les décodeurs d’image". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT107/document.
This thesis deals with the development and implementation of error detection and correction algorithms for images, in order to control the quality of the images produced at the output of digital decoders. To achieve the objectives of this work, we first study the state of the art of existing approaches. Examination of the classically used approaches justified the study of a set of objective methods for evaluating the visual quality of images, based on machine learning. These algorithms take as input a set of characteristics or metrics extracted from the images. Depending on the characteristics extracted and on whether a reference image is available, two kinds of objective evaluation methods have been developed: the first based on full-reference metrics and the second based on no-reference metrics, both handling non-specific distortions. In addition to these objective evaluation methods, a method for evaluating and improving image quality based on the detection and correction of defective pixels has been implemented. The results obtained have contributed to refining visual image quality assessment methods as well as to the construction of objective algorithms for detecting and correcting defective pixels, compared with the various methods currently in use. An FPGA implementation has been carried out to integrate the models that showed the best performance during the simulation phase.
Piffet, Loïc. "Décomposition d’image par modèles variationnels : débruitage et extraction de texture". Thesis, Orléans, 2010. http://www.theses.fr/2010ORLE2053/document.
The first part of this thesis is devoted to the development of a second-order variational model for image denoising, using the BV2 space of bounded Hessian functions. We take a leaf out of the well-known Rudin, Osher and Fatemi (ROF) model, replacing the minimization of the total variation of the function with the minimization of its second-order total variation, that is to say the total variation of its partial derivatives. The goal is to obtain a competitive model without the staircasing effect that the ROF model generates. The model we study seems to be efficient but produces a blurring effect. To deal with it, we introduce a mixed model that yields solutions with neither staircasing nor blurring of details. In the second part, we turn to the texture extraction problem. One of the models known to be most efficient is the TV-L1 model, which simply replaces the L2 norm of the data-fitting term with the L1 norm. We propose an original way of solving this problem using augmented Lagrangian methods. For the same reason as in the denoising case, we also consider the TV2-L1 model, again replacing the total variation of the function by its second-order total variation. A mixed model for texture extraction is finally briefly introduced. This manuscript ends with an extensive chapter of numerical tests.
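For reference, the variational models mentioned in this abstract can be sketched as follows (a schematic summary, with f the observed image, u the restored image and λ > 0 a regularization weight; the exact discretizations used in the thesis may differ):

\min_u \, \mathrm{TV}(u) + \tfrac{\lambda}{2}\|u - f\|_{L^2}^2 \ \ (\text{ROF}), \qquad \min_u \, \mathrm{TV}^2(u) + \tfrac{\lambda}{2}\|u - f\|_{L^2}^2 \ \ (\text{second-order model}), \qquad \min_u \, \mathrm{TV}(u) + \lambda\|u - f\|_{L^1} \ \ (\text{TV-}L^1),

where TV²(u) denotes the total variation of the partial derivatives of u (the BV² seminorm).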
Wang, Zhihan. "Reconstruction des images médicales de tomodensitométrie spectrale par apprentissage profond". Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0124.
In computed tomography (CT), a cornerstone of diagnostic imaging, two contemporary and inherently interconnected topics are radiation dose reduction and multi-energy imaging. As an emerging advancement, spectral CT can capture data across a range of X-ray energies for better material differentiation, reducing the need for repeat scans and thereby lowering overall radiation exposure. However, the reduced photon count in each energy bin makes traditional reconstruction methods susceptible to noise. Therefore, deep learning (DL) techniques, which have shown great promise in medical imaging, are being considered. This thesis introduces a novel regularization term that incorporates convolutional neural networks (CNNs) to connect energy bins to a latent variable, leveraging all binned data for synergistic reconstruction. As a proof-of-concept, we propose Uconnect and its variant MHUconnect, employing U-Nets and a multi-head U-Net, respectively, as the CNNs, with the image at a specific energy bin serving as the latent variable for supervised learning. The two methods are validated to outperform several existing approaches in reconstruction and denoising tasks.
Edjlali, Ehsan. "Fluorescence diffuse optical tomographic iterative image reconstruction for small animal molecular imaging with continuous-wave near infrared light". Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10673.
The simplified spherical harmonics (SPN) approximation to the radiative transfer equation has been proposed as a reliable model of light propagation in biological tissues. However, few analytical solutions have been found for this model. Such analytical solutions are of great value for validating numerical solutions of the SPN equations, which must be resorted to when dealing with media with complex curved geometries. In the first part of this thesis, analytical solutions for two curved geometries, namely the sphere and the cylinder, are presented for the first time. For both solutions, the general refractive-index-mismatch boundary conditions applicable in biomedical optics are used. These solutions are validated using mesh-based Monte Carlo simulations. So validated, they in turn allow numerical code, based for example on finite differences or finite elements, to be validated rapidly without requiring lengthy Monte Carlo simulations, thus providing a reliable tool for validating numerical simulations. In the second part, iterative reconstruction for fluorescence diffuse optical tomography imaging is proposed, based on an Lq-Lp framework for formulating the objective function and its regularization term. To solve the imaging inverse problem, the light propagation model is discretized using the finite difference method. The framework is used along with a multigrid mesh on a digital mouse model. The inverse problem is solved iteratively using an optimization method; for this, the gradient of the cost function with respect to the fluorescent agent’s concentration map is needed and is calculated using an adjoint method. Quantitative metrics used in medical imaging are employed to evaluate the performance of the framework under different conditions. The results obtained support this new approach based on an Lq-Lp formulation of the cost function for solving the inverse fluorescence problem with high quantitative performance.
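As a rough sketch (in our notation, not necessarily that of the thesis), the Lq-Lp framework mentioned above amounts to minimizing a cost of the form

J(x) = \|A x - b\|_q^q + \lambda \|x\|_p^p,

where x is the fluorophore concentration map, A the discretized light-propagation (forward) operator, b the boundary measurements, λ the regularization weight and p, q tunable exponents; the gradient of J with respect to x is obtained through an adjoint computation, as stated in the abstract.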
Bussy, Victor. "Integration of a priori data to optimise industrial X-ray tomographic reconstruction". Electronic Thesis or Diss., Lyon, INSA, 2024. http://www.theses.fr/2024ISAL0116.
This thesis explores research topics in the field of industrial non-destructive testing (NDT) using X-rays. The application of computed tomography (CT) has significantly expanded, and its use has intensified across many industrial sectors. Due to increasing demands and constraints on inspection processes, CT must continually evolve and adapt. Whether in terms of reconstruction quality or inspection time, X-ray tomography is constantly progressing, particularly through the so-called sparse-view strategy. This strategy involves reconstructing an object using the smallest possible number of radiographic projections while maintaining satisfactory reconstruction quality, which reduces acquisition times and the associated costs. Sparse-view reconstruction poses a significant challenge, as the tomographic problem is ill-conditioned or, as it is often described, ill-posed. Numerous techniques have been developed to overcome this obstacle, many of which rely on leveraging prior information during the reconstruction process. By exploiting data and knowledge available before the experiment, it is possible to improve reconstruction results despite the reduced number of projections. In our industrial context, for example, the computer-aided design (CAD) model of the object is often available, which provides valuable information about the geometry of the object under study. However, the CAD model only offers an approximate representation of the object; in NDT or metrology, it is precisely the differences between an object and its CAD model that are of interest. Integrating prior information is therefore complex, as this information is often "approximate" and cannot be used as is. Instead, we propose to judiciously use the geometric information available from the CAD model at each step of the process. We do not propose a single method but rather a methodology for integrating prior geometric information during X-ray tomographic reconstruction.
Tong, Xiao. "Co-registration of fluorescence diffuse optical tomography (fDOT) with Positron emission tomography (PET) and development of multi-angle fDOT". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112251/document.
This thesis concerns the image processing of fluorescence diffuse optical tomography (fDOT), along two axes: co-registration of fDOT images with PET (positron emission tomography) images, and improvement of fDOT image reconstruction using mirrors to collect additional projections. It is presented in two parts. In the first part, an automatic method to co-register fDOT images with PET images has been developed in order to correlate the information from the two modalities. This co-registration method is based on the automatic detection of fiducial markers (FM) present in both modalities. The particularity of the method is the use of the optical surface image obtained with the fDOT imaging system, which serves to identify the Z position of the FM in the optical images. We tested this method on mice bearing tumor xenografts of MEN2A cancer cells that mimic a human medullary thyroid carcinoma, after a double injection of the radiotracer [18F] 2-fluoro-2-deoxy-D-glucose (FDG) for PET imaging and the infrared fluorescent optical tracer Sentidye. With the accuracy of our method, we can demonstrate that the Sentidye signal is present both in the tumor and in the surrounding vessels. The quality of fDOT reconstructed images is degraded along the Z axis due to the limited number of projections available for reconstruction. In the second part, the work is oriented towards a new fDOT reconstruction method with a multi-angle data acquisition system obtained by placing two mirrors on each side of the animal. This work was conducted in collaboration with the Computer Science Department of University College London (UCL), a partner of the European project FMT-XCT. The TOAST software developed by this team was used as the source code for the reconstruction algorithm and was modified to suit the problem at hand. After several tests on the adjustment of the program parameters, we applied this method to a phantom simulating biological tissue and to mice. The results showed an improvement in the reconstructed image of a semi-cylindrical phantom and of a mouse kidney, for which reconstruction with the mirror geometry is better than with the conventional geometry without mirrors. Nevertheless, we observed that the results were very sensitive to certain parameters, the reconstruction performance varying from one case to another. Future prospects concern the optimization of these parameters in order to generalize the multi-angle approach.
Boudjenouia, Fouad. "Restauration d’images avec critères orientés qualité". Thesis, Orléans, 2017. http://www.theses.fr/2017ORLE2031/document.
This thesis concerns the blind restoration of images, formulated as an ill-posed and ill-conditioned inverse problem, considering a SIMO system. A blind system identification technique in which the order of the channel is unknown (overestimated) is introduced. First, a simplified, reduced-cost version (SCR) of the cross-relation (CR) method is presented. Second, a robust version, R-SCR, based on the search for a sparse solution minimizing the CR cost function is proposed. Image restoration is then achieved by a new approach, inspired by 1D signal decoding techniques and extended here to 2D images, based on an efficient tree search (stack algorithm). Several improvements to the stack method have been introduced in order to reduce its complexity and to improve restoration quality when the images are noisy. This is done using a regularization technique and an all-at-once optimization approach based on gradient descent, which refines the estimated image and improves the algorithm’s convergence towards the optimal solution. Image quality measures are then used as cost functions, integrated into the global criterion, in order to study their potential for improving restoration performance. In the context where the image of interest is corrupted by other interfering images, its restoration requires blind source separation techniques. In this respect, a comparative study of several separation techniques based on second-order decorrelation and sparsity is performed.
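As a reminder of the principle behind the cross-relation (CR) criterion mentioned above (a generic sketch in our notation; the SCR and R-SCR variants of the thesis modify this cost): in a SIMO model each observed channel output is y_i = h_i * s + n_i, and since y_i * h_j = y_j * h_i in the noiseless case, the channels can be identified by minimizing

J(h_1, \dots, h_M) = \sum_{1 \le i < j \le M} \| y_i * h_j - y_j * h_i \|_2^2

subject to a normalization constraint (e.g. ‖h‖₂ = 1) that rules out the trivial all-zero solution.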
Madec, Morgan. "Conception, simulation et réalisation d’un processeur optoélectronique pour la reconstruction d’images médicales". Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/MADEC_Morgan_2006.pdf.
Optical processing can be used to speed up some image reconstruction algorithms applied to the tomodensitometric data provided by volume exploration systems, which may be of great interest for meeting the needs of future assisted-therapy systems. Two systems are described in this document, corresponding to the two main steps of the above-mentioned algorithms: a filtering processor and a backprojection processor. They are first considered from a hardware point of view. Whatever function it computes, an optical processor is made up of light sources, displays and cameras, and present state-of-the-art devices highlight a weakness in display performance. Special attention has therefore been focused on ferroelectric liquid crystal spatial light modulators (modelling, simulations and characterization of commercial devices). The potential of the optical architectures is compared with electronic solutions in terms of computational power and processed image quality. This study has been carried out for both systems, first in simulation with a reliable model of the architecture and then with an experimental prototype. The optical filtering processor does not give accurate results: the signal-to-noise ratio of the reconstructed image is about 20 dB in simulation (the model used does not take most geometrical distortions into account), and experimental measurements show strong limitations, especially regarding image formation under coherent lighting (speckle). On the other hand, the results obtained with the optical backprojection processor are most encouraging. The model, more complete and accurate than that of the filtering processor, as well as the simulations, shows that the processed image quality can be virtually equivalent to that obtained by digital means (signal-to-noise ratio above 50 dB) with a two-order-of-magnitude speed-up. The results obtained with the experimental prototype are in accordance with the simulations and confirm the potential of the architecture. As an extension, a hybrid processor involving the backprojection processor for the computation of more complex reconstruction algorithms, e.g. ASSR for helical CT scanning, is proposed in the last part of the document.
Courchay, Jérôme. "Calibration par programmation linéaire et reconstruction spatio-temporelle à partir de réseaux d’images". Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1014/document.
The issue of retrieving the 3D shape of a static scene captured by multiple calibrated cameras has been extensively studied over the last decades. The results presented in the stereovision benchmark of Strecha et al. show the high quality of state-of-the-art methods; in particular, work from the IMAGINE laboratory has led to impressive results. It therefore becomes desirable to calibrate wider and wider scenes in order to apply these stereovision algorithms at large scale. Three main objectives appear: (i) calibration accuracy should be improved, since, as stated by Yasutaka Furukawa, even stereovision benchmarks use noisy camera calibrations, so one obvious way to improve stereovision is to improve camera calibration; (ii) it is crucial to take cycles in the camera graph into account in a global way, because most current methods are sequential and therefore drift, offering no guarantee of retrieving a closed-loop configuration for a loop made of a large number of images (a spiral configuration may be retrieved instead), a point that becomes crucial as we aim to calibrate wider and wider camera networks; (iii) to calibrate wide camera networks, fast, linear algorithms can be necessary. The calibration methods we propose in the first part achieve an accuracy close to the state of the art. Moreover, we take cyclicity constraints into account in a global way, through linear optimizations under linear constraints, so these methods both account for cycles and benefit from the speed of linear programming. Finally, stereovision being a well-studied topic, it is natural to concentrate on the next step, namely spatio-temporal reconstruction. The IMAGINE stereovision method being state of the art, it is interesting to extend it to spatio-temporal reconstruction, that is, the reconstruction of a dynamic scene captured by a dome of cameras.
Guénard, Jérôme. "Synthèse de modèles de plantes et reconstructions de baies à partir d’images". Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0101/document.
Plants are essential elements of our world, and 3D plant models are necessary to create realistic virtual environments. Mature computer vision techniques allow the reconstruction of 3D objects from images; however, due to the complexity of plant topology, dedicated methods for generating 3D plant models must be devised. This thesis is divided into two parts. The first part focuses on the modelling of biologically realistic plants from a single image. We propose to generate a 3D model of a plant using an analysis-by-synthesis method that considers both a priori information about the plant species and a single image. First, a dedicated 2D skeletonisation algorithm generates possible branching structures from the foliage segmentation. Then, we build a 3D generative model based on a parametric model of branching systems that takes botanical knowledge into account. The resulting skeleton follows the hierarchical organisation of natural branching structures. By varying the parameter values of the generative model (main branching structure of the plant and foliage), we produce a series of candidate models. A Bayesian model optimizes a posterior criterion composed of a likelihood function, which measures the similarity between the image and the reprojected 3D model, and a prior probability measuring the realism of the model. After modelling the branching systems and foliage, we propose to model the fruits. As we mainly worked on vines, we propose a method for reconstructing a grape berry from at least two views, each berry being considered an ellipsoid of revolution. The resulting method can be adapted to any type of fruit whose shape is close to a quadric of revolution. The second part of this thesis focuses on the reconstruction of quadrics of revolution from one or several views. The reconstruction of quadrics and, more generally, 3D surface reconstruction is a classical problem in computer vision. We first recall the necessary background on projective geometry, quadrics and computer vision, and present existing methods for the reconstruction of quadrics or, more generally, quadratic surfaces. A first algorithm identifies the images of the principal foci of a quadric of revolution from a calibrated view (that is, a view for which the intrinsic parameters of the camera are given). We then show how to use this result to reconstruct, through a linear triangulation scheme, any type of quadric of revolution from at least two views. Finally, we show that the 3D pose of a given quadric of revolution can be derived from a single occluding contour. We evaluate the performance of our methods and show some possible applications.
Regnier, Rémi. "Approche de reconstruction d’images fondée sur l’inversion de certaines transformations de Radon généralisées". Thesis, Cergy-Pontoise, 2014. http://www.theses.fr/2014CERG0698/document.
Since the invention of radiography at the beginning of the 20th century and of radar during the Second World War, the need for information about our environment has been ever increasing, ranging from the exploration of internal structures using numerous non-invasive imaging techniques to satellite imaging, which is rapidly expanding with space exploration. A large number of imaging systems have been conceived to provide faithful images of the objects of interest. Computed tomography (the medical scanner) has experienced tremendous success since its invention; the reason for this success lies in the fact that its mathematical foundation is the Radon transform (RT), which admits an inversion formula allowing the faithful reconstruction of the interior of an object. The Radon transform is a geometric integral transform which integrates a physical density of interest along straight lines in the plane. It is natural to expect that, when the line is replaced by a curve or a surface as the integration support, new imaging processes may emerge. In this thesis, we study two generalized Radon transforms, defined respectively on broken lines in the shape of the letter V (the V-line RT, or VRT) and on spheres centered on a fixed plane (the spherical RT, or SRT), as well as the imaging processes they give rise to. The Radon transform on V-lines (VRT) forms the mathematical foundation of three tomographic modalities. The first exploits not only the attenuation of X-rays in traversed matter (as in computed tomography) but also the phenomenon of reflection on an impenetrable surface. The second makes use of Compton scattering for emission imaging. The third combines transmission and emission imaging into a bimodal imaging system based on scattered ionizing radiation. This study puts forward new imaging systems that compete with existing ones and develops new algorithms for attenuation correction (in emission imaging, attenuation has so far been one of the factors that seriously degrade tomographic image quality). The Radon transform on spheres centered on a fixed plane (SRT) is a generalization of the classical Radon transform in three dimensions and has been proposed as a mathematical model for synthetic aperture radar (SAR) imaging. We show, through the development of appropriate algorithms, that the inversion of the SRT yields an efficient solution to the landscape reconstruction problem directly in three dimensions. The theoretical feasibility of these new imaging systems based on generalized Radon transforms, and the good performance of the inversion algorithms derived from inversion formulas, open several perspectives: 3D extension of bimodal imaging by scattered radiation, or SAR target motion detection through the introduction of other generalized Radon transforms. Moreover, the algorithmic methods developed here may serve other imaging activities, such as seismic imaging with the parabolic Radon transform, Doppler radar with the hyperbolic Radon transform, and thermo-opto-acoustic imaging with the Radon transform on circles centered on a fixed circle.
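As a point of reference (a standard textbook definition, not specific to this thesis), the classical 2D Radon transform integrates a density f along the straight line at signed distance s from the origin with normal direction (cos θ, sin θ):

\mathcal{R}f(\theta, s) = \int_{\mathbb{R}^2} f(x, y)\, \delta(x\cos\theta + y\sin\theta - s)\, dx\, dy ;

the generalized transforms studied here replace this line by a V-shaped pair of half-lines sharing a vertex (VRT) or by a sphere centered on a fixed plane (SRT), the integral then being taken over those supports.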
Ait, Assou Manal. "Synthetic aperture imaging and spectroscopy in the terahertz range using time domain spectroscopy system". Electronic Thesis or Diss., Limoges, 2024. https://aurore.unilim.fr/theses/nxfile/default/437c1676-13e9-4b65-9ff5-95b93ac02ca3/blobholder:0/2024LIMO0008.pdf.
Terahertz imaging and spectroscopy techniques offer a wide range of applications in non-destructive testing and quality control for industrial manufacturing, pharmaceutics and biology, archaeology, and the art world. For these applications, terahertz time-domain spectroscopy (THz-TDS) allows analysis over a very wide instantaneous bandwidth (0.1-6 THz), but generally requires mechanically moving the sample to be imaged in the focal plane of the THz beam. This thesis addresses the adaptation of a THz-TDS bench for the imaging and spectroscopy of static samples, based on the principle of synthetic aperture radar (SAR) operating in transmission. Using this technique, 3D image reconstruction with sub-millimetre resolution is demonstrated on several different samples. To mitigate the long acquisition times, a sparse spatial sampling scheme is proposed, reducing the number of elements of the synthetic array and improving acquisition speed. Moreover, the reconstructed data are not only used for imaging but also allow the characterization of the optical parameters (refractive index and absorption coefficient) of the materials constituting the imaged object within the reconstruction frequency band. The proposed technique thus enables 2D spectral mapping of the refractive index at various terahertz frequencies. Finally, the proposed methodology is applied to imaging the output of terahertz waveguides, illustrating its flexibility and its wide range of potential uses.
Aknoun, Sherazade. "Analyse quantitative d’images de phase obtenues par interféromètrie à décalage quadri-latéral. Applications en biologie". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4358/document.
The aim of this thesis, dedicated to the study and quantitative analysis of phase images obtained by quadri-wave lateral shearing interferometry, is to characterize a metrological tool and three of its proposed applications. This work was done in collaboration between the Institut Fresnel (Marseille, France) and the Phasics company (Palaiseau, France), and continues that of Pierre Bon, who was in charge of applying this technique to microscopy. This interferometric technique, developed by Phasics for optical metrology and laser characterization, allows complex electromagnetic field maps to be recorded through a wavefront measurement. By using it in the image plane of a microscope, one can obtain intensity and optical path difference images of a semi-transparent biological sample; the technique is now considered a quantitative phase contrast technique. The first part of this manuscript is a state of the art of quantitative microscopy techniques, discussing the issue of quantification and its meaning in the framework of different fluorescence- and phase-based techniques. A description of the technique used and a comparison with similar phase techniques are then given. The measurement, under the projective approximation, is studied and leads to different variables. We show different applications concerning isotropic elements in a first part and anisotropic elements in a second one. We finally show how this measurement is transposed to three dimensions, allowing three-dimensional imaging and the complete reconstruction of refractive index maps of biological samples.
Brahim, Naouraz. "Reconstruction tridimensionnelle de scènes sous-marines à partir de séquences d’images acquises par des caméras acoustiques". Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30470/30470.pdf.
According to recent studies, climate change is having a significant impact on the marine environment, inducing temperature increases, chemistry changes and ocean circulation changes that influence both population dynamics and the stability of underwater structures. Environmental change is thus a growing scientific concern requiring regular monitoring of the evolution of underwater ecosystems, with appropriate studies combined with the extraction and preservation of accurate, relevant and detailed information. Tracking and modelling such changes in the marine environment is one of the current challenges of underwater exploration. The most common technique used to observe the underwater environment relies on vision-based systems, either acoustic or optical. Optical cameras are widely used for acquiring images of the seafloor and underwater structures, as they provide information about the physical properties of the image that enables the description of the observed scene (colour, reflection, geometry). However, their range limitation and non-ideal underwater conditions (dark and turbid waters) make acoustic imaging the most reliable means of seeing in the underwater environment. Traditional sonar systems cannot provide acoustic image sequences the way optical cameras do; acoustic cameras were built to overcome these drawbacks. They produce real-time, high-resolution underwater image sequences with a high refresh rate and, compared to optical devices, can acquire acoustic images in turbid, deep and dark water, making acoustic-camera imaging a reliable means of observing the underwater environment. However, although acoustic cameras provide a 2D resolution of the order of centimetres, they do not resolve the altitude of the observed scene: they offer a 2D representation that provides incomplete information about the underwater environment. It would therefore be very interesting to have a system that provides height information as well as high resolution. This is the purpose of this thesis, in which we developed a methodology enabling the 3D reconstruction of underwater scenes from sequences of acoustic images. The proposed methodology is inspired by stereovision techniques that compute 3D information from image sequences and consists of two main steps. In the first step, we propose an approach for extracting relevant salient points from several images. In the second step, two different methods (a curvilinear approach and a volumetric approach) are proposed in order to reconstruct the observed scene from images acquired from different viewpoints. The Covariance Matrix Adaptation Evolution Strategy algorithm (SE-AMC) was used to compute the camera movement between images, and this movement was then used to retrieve the 3D information. The performance of the methodology has been evaluated: the feature extraction approach was assessed using criteria of good detection, repeatability and good localization, and the 3D reconstruction approach was assessed by comparing the estimated camera movement and 3D information with real data.
Merasli, Alexandre. "Reconstruction d’images TEP par des méthodes d’optimisation hybrides utilisant un réseau de neurones non supervisé et de l'information anatomique". Electronic Thesis or Diss., Nantes Université, 2024. http://www.theses.fr/2024NANU1003.
PET is a functional imaging modality used in oncology to obtain a quantitative image of the distribution of a radiotracer injected into the patient. Raw PET data are characterized by a high level of noise and a modest spatial resolution compared with anatomical imaging modalities such as MRI or CT. In addition, standard methods for reconstructing images from the raw PET data introduce a positive bias in low-activity regions, especially when dealing with low-statistics acquisitions (highly noisy data). In this work, a new reconstruction algorithm, called DNA, has been developed. Using the ADMM algorithm, DNA combines the recently proposed Deep Image Prior (DIP) method, which limits noise propagation and improves spatial resolution by using anatomical information, with a bias reduction method developed for low-statistics PET imaging. However, the use of the DIP and ADMM algorithms requires the tuning of many hyperparameters, which are often selected manually. A study has been carried out to tune some of them automatically, using methods that could benefit other algorithms. Finally, the use of anatomical information, especially with DIP, improves PET image quality but can generate artifacts when information from one modality does not spatially match the other, particularly when tumors have different anatomical and functional contours. Two methods have been developed to remove these artifacts while preserving the useful information provided by the anatomical modality.
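For orientation, a DIP-constrained reconstruction of this kind can be sketched (schematically, in our notation; this is not the exact DNA formulation) as maximizing the log-likelihood of the data y under the constraint that the image lies in the range of a network f_θ fed with the anatomical image z:

\max_{x, \theta} \; L(y \mid x) \quad \text{s.t.} \quad x = f_\theta(z),

which ADMM handles by alternating a tomographic update of x, a network-fitting update of θ and a dual (multiplier) update on the constraint x = f_θ(z).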
Gilardet, Mathieu. "Étude d’algorithmes de restauration d’images sismiques par optimisation de forme non linéaire et application à la reconstruction sédimentaire". Thesis, Pau, 2013. http://www.theses.fr/2013PAUU3040/document.
We present a new method for seismic image restoration. When observed, a seismic image is the result of an initial deposition system that has been transformed by a set of successive geological deformations (folding, fault slip, etc.) occurring over a long period of time. The goal of seismic restoration is to invert these deformations so as to provide an image depicting the geological deposition system as it was in a previous state. By providing a tool that quickly generates restored images, our contribution helps geophysicists recognize geological features that may be too strongly altered in the observed image. The proposed approach is based on a minimization process that expresses geological deformations in terms of geometric constraints, and we use a quickly converging Gauss-Newton approach to solve the system. We provide results illustrating the seismic image restoration process on real data and show how the restored version can be used in a geological interpretation framework.
Daisy, Maxime. "Inpainting basé motif d’images et de vidéos appliqué aux données stéréoscopiques avec carte de profondeur". Caen, 2015. https://hal.archives-ouvertes.fr/tel-01257756.
We focus on the study and enhancement of greedy pattern-based image processing algorithms for the specific purpose of inpainting, i.e., the automatic completion of missing data in digital images and videos. We first review the state-of-the-art methods in this field and analyze the important steps of prominent greedy algorithms in the literature. We then propose a set of changes that significantly enhance the global geometric coherence of images reconstructed with this kind of algorithm. We also focus on reducing the visual block artifacts that classically appear in the reconstruction results. For this purpose, we define a tensor-inspired formalism for fast anisotropic patch blending, guided by the geometry of the local image structures and by the automatic detection of artifact locations. We illustrate the improvement in visual quality brought by our contributions with many examples and show that our approach is generic enough to allow similar adaptations of other existing pattern-based inpainting algorithms. Finally, we extend and apply our reconstruction algorithms to stereoscopic image and video data synthesized with respect to new virtual camera viewpoints. We incorporate the estimated depth information (available from the original stereo pairs) into our inpainting and patch blending formalisms to propose a visually satisfactory solution to the non-trivial problem of automatically disoccluding real resynthesized stereoscopic scenes.
Durix, Bastien. "Squelettes pour la reconstruction 3D : de l'estimation de la projection du squelette dans une image 2D à la triangulation du squelette en 3D". Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19589/1/durix.pdf.
Texto completo da fonteZhang, Mo. "Vers une méthode de restauration aveugle d’images hyperspectrales". Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S132.
In this thesis manuscript we propose to develop a blind restoration method for single-component blurred and noisy images, requiring no prior knowledge. The manuscript is composed of three chapters. The first chapter focuses on the state of the art: the optimization approaches for solving the restoration problem are discussed first; then the main restoration methods, called semi-blind because they require a minimum of a priori knowledge, are analysed, and five of them are selected for evaluation. The second chapter is devoted to comparing the performance of the methods selected in the previous chapter. The main objective criteria for evaluating the quality of restored images are presented; among these criteria, the l1 norm of the estimation error is selected. The comparative study, conducted on a database of monochromatic images artificially degraded by two blur functions with different support sizes and three noise levels, revealed the two most relevant methods. The first is based on a single-scale alternating approach in which the PSF and the image are estimated alternately. The second uses a multi-scale hybrid approach, which consists first of alternately estimating the PSF and a latent image and then, in a sequential step, restoring the image. In the comparative study performed, the latter performs best. The performance of these two methods is used as a reference against which the newly designed method is then compared. The third chapter deals with the developed method. We sought to make the hybrid approach retained in the previous chapter as blind as possible while improving the quality of estimation of both the PSF and the restored image. The contributions cover a number of points. A first series concerns the redefinition of the scales, the initialization of the latent image at each scale level, the evolution of the parameters for selecting the relevant contours supporting the PSF estimation and, finally, the definition of a blind stopping criterion. A second series of contributions concentrates on the blind estimation of the two regularization parameters involved, in order to avoid having to fix them empirically; each parameter is associated with a separate cost function, one for the PSF estimation and one for the estimation of the latent image. In the sequential step that follows, we refine the estimate of the support of the PSF obtained in the previous alternating step before exploiting it in the image restoration process. At this stage, the only a priori knowledge necessary is an upper bound on the support of the PSF. The various evaluations performed on monochromatic and hyperspectral images artificially degraded by several motion-type blurs with different support sizes show a clear improvement in the restoration quality obtained by the newly designed method in comparison with the two best state-of-the-art methods retained.
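As background (a generic formulation in our notation, not the thesis’s exact functionals), blind restoration of this kind is usually posed as the joint estimation of the image f and the point spread function h from the observation g = h * f + n, by alternately minimizing two regularized costs such as

\min_{h} \; \|h * f - g\|_2^2 + \lambda_h\, R_h(h), \qquad \min_{f} \; \|h * f - g\|_2^2 + \lambda_f\, R_f(f),

which makes explicit the two regularization parameters λ_h and λ_f whose blind estimation is one of the contributions described above.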
Ding, Yanzheng. "Une analyse d’images pour l'identification microstructurale en 3D d’un kaolin saturé sous chargement mécanique". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0051.
Microstructure investigation is essential for a better understanding of the mechanical behaviour and volumetric deformation mechanisms of remolded, saturated clays. The goal of this thesis is to identify in 3D the local mechanisms that can be activated at the microstructural level in relation to the mechanical loading of clayey media. The mechanical behaviour of kaolin K13 is first studied on two loading paths, oedometric and isotropic. An observation protocol was then established for the acquisition of three-dimensional images using Scanning Electron Microscopy (SEM) coupled with a Focused Ion Beam (FIB). The reconstruction of the images obtained by FIB-SEM allows the 3D geometry of a sub-volume of the sample to be studied. The second part consists in developing a quantitative 3D analysis approach to identify the microstructure properties along the different loading paths. Pore morphology is studied using parameters such as flatness, elongation and sphericity. The orientation of pores and particles was first identified on 2D images representing cross-sections of the sample and then extended to 3D over the entire volume for both loading paths. The results obtained in this thesis highlight the contribution of 3D images to a better understanding of the microstructure of saturated remolded clays.
Zein, Sara. "Simulations Monte Carlo des effets des photons de 250 keV sur un fantôme 3D réaliste de mitochondrie et évaluation des effets des nanoparticules d'or sur les caractéristiques des irradiations". Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC036/document.
In the field of radiobiology, damage to nuclear DNA is extensively studied since DNA is considered a sensitive target inside cells. Mitochondria are starting to attract attention as sensitive targets as well, since they control many functions important to the cell’s survival. They are double-membrane organelles mainly in charge of energy production, as well as reactive oxygen species regulation, cell signalling and apoptosis control. Some experiments have shown that after exposure to ionizing radiation the mitochondrial contents are altered and their functions are affected, which is why we are interested in studying the effects of ionizing radiation on mitochondria. At the microscopic scale, Monte Carlo simulations are helpful for reproducing the tracks of ionizing particles for close study. We therefore produced 3D phantoms of mitochondria starting from microscopic images of fibroblast cells. These phantoms are easily loaded into Geant4 as tessellated and tetrahedral meshes filled with water, representing the realistic geometry of these organelles. A microdosimetric analysis of the energy deposited by 250 keV photons inside these phantoms is performed. The Geant4-DNA electromagnetic processes are used to simulate the tracking of the secondary electrons produced. Since clustered damage is harder for cells to repair, a clustering algorithm is used to study the spatial clustering of potential radiation damage. In radiotherapy, it is a challenge to deliver an effective dose to the tumor sites without affecting healthy surrounding tissues, and the use of gold nanoparticles as radiosensitizers seems promising: their high photon absorption coefficient compared to tissue deposits a larger dose when they are preferentially absorbed in tumors. Since gold has a high atomic number, Auger electrons are produced abundantly; these electrons have a shorter range than photoelectrons, enabling them to deposit most of their energy near the nanoparticle and thus increase the local dose. We studied the radiosensitizing effect of gold nanoparticles on the mitochondria phantom; the effectiveness of this method depends on the number, size and spatial distribution of the gold nanoparticles. After exposure to ionizing radiation, reactive oxygen species are produced in biological material, which contains an abundant amount of water. In this study, we simulate the chemical species produced inside the mitochondria phantom and estimate their clustering, taking advantage of the Geant4-DNA chemistry processes recently included in the Geant4 10.1 release to simulate the spatial distribution of the chemical species and their evolution over time.
Gimenez, Lucile. "Outils numériques pour la reconstruction et l'analyse sémantique de représentations graphiques de bâtiments". Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080047.
Many buildings have to undergo major renovation to comply with regulations and environmental challenges. BIM (Building Information Modeling) helps designers make better-informed decisions and results in more energy-efficient designs. Such advanced design approaches require 3D digital models, which are however not available for existing buildings. The aim of our work is to develop a method to generate 3D building models of existing buildings at low cost and in a reasonable time. We have chosen to work with 2D scanned plans, assuming that a paper plan can be found for most buildings, even if it is not always up to date and the recognition quality also depends on the plan. The automatic reconstruction of a BIM from a paper plan is based on the extraction and identification of three main components: geometry (element shape), topology (links between elements) and semantics (object properties). During this process, some errors are generated which cannot be corrected automatically. This is why we propose a novel approach based on targeted, guided human interventions, which automatically identifies errors and proposes correction choices to the user in order to avoid error propagation. We describe the methodology developed to convert a 2D scanned plan semi-automatically into a BIM, and analyse the results on 90 images. Further work focuses on the genericity of the process in order to test its robustness, on the challenge of scaling up, and on multi-level management. The results highlight the relevance of the error classification and identification and of the choices offered to the user. The process is flexible, so that it can be complemented with other data sources.
Tlig, Ghassen. "Programmation mathématique en tomographie discrète". Electronic Thesis or Diss., Paris, CNAM, 2013. http://www.theses.fr/2013CNAM0886.
The tomographic imaging problem deals with reconstructing an object from data called projections, collected by illuminating the object from many different directions. A projection is the information derived from the transmitted energies when an object is illuminated from a particular angle. The solution to the problem of how to reconstruct an object from its projections dates back to Radon in 1917. Tomographic reconstruction is applicable in many interesting contexts, such as non-destructive testing, image processing, electron microscopy, data security, industrial tomography and materials science. Discrete tomography (DT) deals with the reconstruction of discrete objects from a limited number of projections, the projections being sums of the object to be reconstructed along a few angles. One of the main problems in DT is the reconstruction of binary matrices from two projections. In general, the reconstruction of binary matrices from a small number of projections is underdetermined and the number of solutions can be very large; moreover, the projection data and the prior knowledge about the object to be reconstructed are not sufficient to determine a unique solution. DT is therefore usually reduced to an optimization problem in which the best solution, in a certain sense, is selected. In this thesis, we deal with the tomographic reconstruction of binary and colored images; in particular, the research objective is to apply combinatorial optimization techniques to discrete tomography problems.
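To make the two-projection problem above concrete (a standard textbook formulation, not necessarily the exact program solved in the thesis), one looks for a binary matrix consistent with prescribed row sums r and column sums c, typically as the feasible set of an integer program:

\text{find } A \in \{0,1\}^{m \times n} \quad \text{s.t.} \quad \sum_{j=1}^{n} a_{ij} = r_i \ (i = 1, \dots, m), \qquad \sum_{i=1}^{m} a_{ij} = c_j \ (j = 1, \dots, n),

to which an objective (e.g. a smoothness or connectivity prior) is added in order to select one solution among the many feasible ones.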
Tlig, Ghassen. "Programmation mathématique en tomographie discrète". Phd thesis, Conservatoire national des arts et metiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00957445.
Gong, Xing. "Analyse de séries temporelles d’images à moyenne résolution spatiale : reconstruction de profils de LAI, démélangeage : application pour le suivi de la végétation sur des images MODIS". Thesis, Rennes 2, 2015. http://www.theses.fr/2015REN20021/document.
This PhD dissertation is concerned with time series analysis for medium spatial resolution (MSR) remote sensing images. The main advantage of MSR data is their high temporal rate, which allows land use to be monitored. However, two main problems arise with such data. First, because of cloud coverage and bad acquisition conditions, the resulting time series are often corrupted and not directly exploitable. Second, pixels in medium spatial resolution images are often "mixed", in the sense that the observed spectral response is a combination of the responses of "pure" elements. These two problems are addressed in this PhD. First, we propose a data assimilation technique able to recover consistent time series of Leaf Area Index from corrupted MODIS sequences; to this end, a plant growth model, namely GreenLab, is used as a dynamical constraint. Second, we propose a new and efficient unmixing technique for time series, based in particular on the use of "elastic" kernels able to properly compare time series shifted in time or of various lengths. Experimental results on both synthetic and real data demonstrate the efficiency of the proposed methodologies.
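To illustrate what an "elastic" comparison of time series means in practice, the short sketch below computes the classical dynamic time warping (DTW) dissimilarity between two toy LAI-like profiles shifted in time; DTW is only one example of an elastic measure, and the kernels actually used in the thesis may differ.

    import numpy as np

    def dtw_distance(a, b):
        # Dynamic time warping: an "elastic" dissimilarity able to compare
        # time series shifted in time or of different lengths.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Toy usage: two bell-shaped LAI-like profiles, the second shifted in time.
    t = np.linspace(0.0, 1.0, 50)
    profile_a = np.exp(-((t - 0.45) / 0.15) ** 2)
    profile_b = np.exp(-((t - 0.60) / 0.15) ** 2)
    print(dtw_distance(profile_a, profile_b))  # DTW dissimilarity between the shifted profiles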
Belhi, Abdelhak. "Digital Cultural Heritage Preservation : enrichment and Reconstruction based on Hierarchical Multimodal CNNs and Image Inpainting Approaches". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE2019.
Cultural heritage plays an important role in defining the identity of a society. The long-term physical preservation of cultural heritage remains risky and can lead to multiple problems related to destruction and accidental damage. Digital technologies such as photography and 3D scanning provide new alternatives for digital preservation, but adapting them to the context of cultural heritage is a challenging task. In fact, fully digitizing cultural assets (visually and historically) is only easy for assets that are in good physical shape and whose data are fully available (fully annotated). In the real world, however, many assets suffer from physical degradation and information loss. Usually, to annotate and curate these assets, heritage institutions need the help of art specialists and historians; this process is tedious, involves considerable time and financial resources, and can often be inaccurate. Our work focuses on the cost-effective preservation of cultural heritage through advanced machine learning methods, with the aim of providing a technical framework for the enrichment phase of the digital preservation and curation process. In this thesis, we propose new methods to improve the process of cultural heritage preservation. Our challenges are mainly related to the annotation and enrichment of cultural objects suffering from missing and incomplete data (annotations and visual data), tasks that are often ineffective when performed manually. We therefore propose approaches based on machine learning and deep learning to tackle these challenges, which consist in the automatic completion of missing cultural data. We mainly focus on two types of missing data: textual data (metadata) and visual data. The first stage concerns the annotation and labelling of cultural objects using deep learning. We propose approaches that take advantage of the visual features of cultural objects as well as partially available textual annotations to perform effective classification: (i) the first approach performs hierarchical classification of objects, to better meet the metadata requirements of each cultural object type and increase classification performance; (ii) the second approach is dedicated to the multimodal classification of cultural objects, where any object can be represented during classification by a subset of available metadata in addition to its visual capture. The second stage considers the lack of visual information when dealing with incomplete and damaged cultural objects. In this case, we propose an approach based on deep generative models and image data clustering to optimize the image completion process for damaged cultural heritage objects. For our experiments, we collected a large database of cultural objects, choosing to use fine-art paintings in our tests and validations as they were the best in terms of annotation quality.
Quijano, Sergio. "Contribution à la reconstruction 3D des membres inférieurs reconstruits à partir des radios biplanes pour l’application à la planification et au suivi des chirurgies". Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0032/document.
For a better understanding and diagnosis of the pathologies affecting the spatial organization of our skeleton, it is necessary to address them in 3D. CT scan and MRI are imaging modalities commonly used to study the musculoskeletal system in 3D. However, patients are imaged in a reclining position, so the effect of gravity cannot be taken into account; furthermore, CT exposes the patient to high radiation doses and MRI is used mostly to characterize soft tissues. With the EOS system, bones can be reconstructed in 3D from a pair of low-dose biplanar radiographs, and the radiographs are recorded in the standing position so that gravity effects are considered. This thesis contributes to the improvement of 3D reconstruction methods of the lower limbs from biplanar radiographs. In this thesis we proposed and evaluated: 1) a 3D reconstruction method of the lower limbs based on parametric models and statistical inferences; 2) a method for the automatic improvement of the 3D reconstruction of the lower limbs, combining image processing and the recalculation of the statistical inferences; 3) finally, methods based on similarity measures and shape criteria to automatically detect the medial and lateral sides of the femur and tibia. The aim of these methods is to avoid the inversion of the femoral and tibial condyles in biplanar radiographs; such inversions have an impact on the calculation of clinical measurements, particularly torsional ones. The reconstruction method proposed in this thesis is already integrated within the sterEOS® software, available in 60 hospitals around the world. The methods developed in this thesis lead to a semi-automatic, accurate and robust reconstruction of the lower limbs.
Llucia, Ludovic. "Suivi d'objets à partir d'images issues de caméras mobiles non calibrées". Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22009/document.
This work concerns a 3D simulator whose purpose is to help football coaches interpret tactical sequences based on real situations. The simulator has to be able to interpret video footage in order to reconstruct a situation, and the camera calibration stage has to be as simple as possible. The first part of this document describes the solution developed to meet this constraint, whereas the second is more oriented towards the industrialisation process. These processes require addressing computer vision and ergonomics problems and answering questions such as: how to characterize the homographic transformation matching the image and the model? How to retrieve the position of the camera? Which area is visible in the image? From an ergonomic point of view, the simulator has to reproduce the reality of the game and improve the coaches’ capacity for abstraction and communication.
Marre, Guilhem. "Développement de la photogrammétrie et d'analyses d'images pour l'étude et le suivi d'habitats marins". Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTG012.
In a context of climate change and the erosion of marine biodiversity, the ecological monitoring of the most sensitive marine habitats is of paramount importance. In particular, there is a need for operational methods that enable decision-makers and managers to establish relevant conservation measures and to evaluate their effectiveness. TEMPO and RECOR are two monitoring networks focusing on Posidonia meadows and coralligenous reefs, the two richest and most sensitive habitats in the Mediterranean. The objective of this thesis is to meet the need for effective monitoring of marine habitats by developing methods for assessing their health, based on two key image analysis methods: convolutional neural networks and photogrammetry. The results show that convolutional neural networks are capable of recognizing the main species of coralligenous assemblages in underwater photographs from RECOR with a precision similar to that of an expert taxonomist. Furthermore, we have shown that photogrammetry can reproduce a marine habitat in three dimensions with a degree of accuracy sufficient for monitoring habitat structure and species distribution at a fine scale. Based on these reconstructions, we have developed a method for the automatic mapping of Posidonia meadows, enabling temporal monitoring of the ecological quality of this sensitive habitat. Finally, we characterized the three-dimensional structure of coralligenous reefs based on their photogrammetric reconstructions and studied the links with the structure of the assemblages that compose them. This PhD work has led to the development of operational methods that are now integrated into the TEMPO and RECOR monitoring networks, and its results pave the way for future research, in particular concerning the characterization of the biological activity of coralligenous reefs through the coupling of photogrammetry, neural networks and underwater acoustics.
Gimenez, Lucile. "Outils numériques pour la reconstruction et l'analyse sémantique de représentations graphiques de bâtiments". Electronic Thesis or Diss., Paris 8, 2015. http://www.theses.fr/2015PA080047.
Texto completo da fonteMany buildings have to undergo major renovation to comply with regulations and environmental challenges. BIM (Building Information Modeling) helps designers make better-informed decisions and results in more energy-efficient designs. Such advanced design approaches require 3D digital models; however, such models are not available for existing buildings. The aim of our work is to develop a method to generate 3D building models of existing buildings at low cost and in a reasonable time. We have chosen to work with 2D scanned plans, assuming that a paper plan can be found for most buildings, even though it is not always up to date and the recognition quality depends on the plan. The automatic reconstruction of a BIM from a paper plan is based on the extraction and identification of three main components: geometry (element shapes), topology (links between elements) and semantics (object properties). During this process, some errors are generated that cannot be corrected automatically. This is why we propose a novel approach based on punctual, guided human interventions: errors are identified automatically and correction choices are proposed to the user to avoid error propagation. We describe the methodology developed to convert a 2D scanned plan semi-automatically into a BIM. The results are analysed on 90 images. Further work focuses on the genericity of the process to test its robustness, on the challenge of scaling up, and on multi-level management. The results highlight the relevance of the error classification and identification and of the choices offered to the user. The process is flexible so that it can be complemented by other data sources
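As a rough illustration of the geometry-extraction step (this is not the thesis pipeline; the file name and thresholds are placeholders), candidate wall segments can be detected in a scanned 2D plan with a probabilistic Hough transform before topology and semantics are inferred:

```python
# Hedged sketch of wall-segment detection in a scanned 2D plan. Parameters
# and file name are illustrative assumptions.
import cv2
import numpy as np

plan = cv2.imread("scanned_plan.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
binary = cv2.adaptiveThreshold(plan, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 15)
edges = cv2.Canny(binary, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=5)
# Each detected segment is a candidate wall; later stages would merge collinear
# segments, build the wall graph (topology) and attach semantics (doors, windows).
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print((x1, y1), "->", (x2, y2))
```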
Bugarin, Florian. "Vision 3D multi-images : contribution à l’obtention de solutions globales par optimisation polynomiale et théorie des moments". Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0068/document.
Texto completo da fonteThe overall objective of this thesis is to apply a polynomial optimization method, based on the theory of moments, to some computer vision problems. These problems are often non-convex and are classically solved using local optimization methods. Without additional hypotheses, these techniques do not converge to the global minimum and require an initial estimate close to the exact solution. Global optimization methods overcome this drawback. Moreover, polynomial optimization based on the theory of moments can take particular constraints into account. In this thesis, we extend this method to the problem of minimizing a sum of many rational functions. In addition, under particular "sparsity" assumptions, we show that it is possible to deal with a large number of variables while maintaining reasonable computation times. Finally, we apply these methods to specific computer vision problems: minimization of the projective distortions due to the image rectification process, fundamental matrix estimation, and multi-view 3D reconstruction with and without radial distortion
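For context, the sketch below shows the classical linear (eight-point) estimate of the fundamental matrix, i.e. the kind of locally optimal baseline that the global polynomial optimization discussed above aims to improve on; it is not the method of the thesis, and the input arrays are assumed matched pixel coordinates.

```python
# Minimal sketch of the classical eight-point estimate of the fundamental matrix.
import numpy as np

def eight_point(x1, x2):
    """x1, x2: (N, 2) matched points in image 1 and image 2, N >= 8."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)                 # least-squares solution of A f = 0
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                               # enforce the rank-2 constraint
    return U @ np.diag(S) @ Vt
```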
Merlin, Thibaut. "Reconstruction 4D intégrant la modélisation pharmacocinétique du radiotraceur en imagerie fonctionnelle combinée TEP/TDM". Thesis, Bordeaux 2, 2013. http://www.theses.fr/2013BOR22111/document.
Texto completo da fontePositron emission tomography (PET) is now considered the gold standard and the main tool for the diagnosis and therapeutic monitoring of oncology patients, especially due to its quantitative aspects. With the advent of multimodal imaging in combined PET and X-ray CT systems, many methodological developments have been proposed in data acquisition and pre-processing, image reconstruction, and post-processing in order to improve quantification in PET imaging. Another important aspect of PET imaging is its high temporal resolution and its ability to perform dynamic acquisitions, benefiting from the high sensitivity achieved by current systems. PET imaging allows measuring and visualizing changes in the biological distribution of radiopharmaceuticals within the organ of interest over time. This temporal tracking provides valuable information to physicians on underlying metabolic and physiological processes, which can be extracted using pharmacokinetic modeling. The objective of this project is, by taking advantage of dynamic data in PET/CT imaging, to develop a reconstruction method combining in a single process all the corrections required to accurately quantify PET data and, at the same time, to include a pharmacokinetic model within the reconstruction in order to produce parametric images for applications in oncology. In a first step, a partial volume effect correction methodology integrating, within the reconstruction process, the Lucy-Richardson deconvolution algorithm associated with a wavelet-based denoising method was introduced. A second study focused on the development of a 4D reconstruction methodology performing temporal regularization of the dataset through a set of temporal basis functions, associated with a respiratory motion correction method based on an elastic deformation model. Finally, in a third step, the Patlak kinetic model was integrated in a dynamic image reconstruction algorithm and associated with the respiratory motion correction methodology in order to allow the direct reconstruction of parametric images from dynamic thoracic datasets affected by respiratory motion. The elastic transformation parameters used for the motion correction were estimated from respiratory-gated PET images according to the amplitude of the patient's respiratory cycle. Monte Carlo simulations of two phantoms, a 4D geometrical phantom and the anthropomorphic NCAT phantom integrating realistic time-activity curves for the different tissues, were performed in order to compare the performance of the proposed 4D parametric reconstruction algorithm with a standard 3D kinetic analysis approach. The proposed algorithm was then assessed on clinical datasets of several patients with non-small-cell lung carcinoma. Finally, following the prior validation of the partial volume effect correction algorithm on the one hand, and of the 4D reconstruction incorporating the temporal regularization on the other hand, on simulated and clinical datasets, these two methodologies were associated within the 4D reconstruction algorithm in order to optimize the estimation of image-derived input functions. The results of this work show that the proposed direct parametric approach maintains a similar noise level in the tumor regions when the statistics decrease, contrary to the 3D estimation approach for which the observed noise level increases.
This result suggests interesting perspectives for reducing the frame duration in 4D reconstruction, allowing a reduction of the total 4D acquisition time. In addition, the use of input functions estimated with the developed temporal regularization methods improved the estimation of the Patlak parameters. Finally, the elastic respiratory motion correction reduced the estimation bias of both Patlak parameters, in particular for small lesions located in regions affected by respiratory motion
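For reference, the standard (indirect) Patlak graphical analysis that the thesis embeds directly into the 4D reconstruction can be sketched as a simple linear fit on time-activity curves; the frame times and the t* threshold below are illustrative, not those of the thesis.

```python
# Hedged sketch of indirect Patlak analysis: a linear fit on transformed
# time-activity curves yields the net influx rate Ki and intercept V0.
import numpy as np

def patlak_fit(t, c_plasma, c_tissue, t_star=10.0):
    """t: frame mid-times (min); c_plasma: input function; c_tissue: tissue TAC.
    Returns (Ki, V0) from a linear fit for t >= t_star (quasi-equilibrium)."""
    # Cumulative trapezoidal integral of the input function.
    integral = np.concatenate([[0.0],
                               np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))])
    mask = (t >= t_star) & (c_plasma > 0)
    x = integral[mask] / c_plasma[mask]          # "Patlak time"
    y = c_tissue[mask] / c_plasma[mask]
    Ki, V0 = np.polyfit(x, y, 1)                 # slope = net influx rate, intercept = V0
    return Ki, V0
```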
Ye, Jing. "Utilisation de mesures de champs thermique et cinématique pour la reconstruction de sources de chaleur thermomécaniques par inversion de l’équation d’advection-diffusion 1D". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0027/document.
Texto completo da fonteThis work concerns how intrinsic observables can be produced that are related to the thermomechanical behavior of materials and are necessary for a better formulation of state laws. These observables are thermomechanical heat sources (THS), which are activated through mechanical excitation. These sources can be reconstructed in both space and time by inverting the temperature fields measured by IR thermography. We develop two main methods which rely on reduced spectral approaches (one of them being the decomposition on branch modes), combined with either a sequential inversion (Beck's method) or an iterative one (conjugate gradient). Regarding the latter, we propose combining the standard approach with an efficient regularization method derived from TSVD-based filtering techniques. As we are concerned with materials that can undergo plastic instabilities (high-density polyethylene), for which the local velocities of matter displacement can be non-negligible, the inversion of the measurements must be performed with the advection-diffusion operator of heat transfer. It is then necessary to obtain additional knowledge: the velocity field. The latter is measured by 3D digital image correlation, and we detail the experimental work carried out, which is based on tensile tests monitored with video-extensometry. We show that for quasi-static tests at relatively high strain rates, the advective effects are generally negligible. We also show the richness of this dual thermomechanical (heat sources) and kinematical (strain rates, velocities) information, which allows for a better understanding of the dynamics of the plastic instability (necking). Lastly, we critically assess the obtained THS reconstructions by confronting the two algorithms and by a physical analysis of the observed phenomena
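The TSVD-based regularization mentioned above can be sketched, for a generic discrete linear model linking a heat-source vector to temperature data, as a truncated pseudo-inverse; the operator and the truncation level are placeholders, not those of the thesis.

```python
# Simple sketch of TSVD regularization for a linear model data = A @ q,
# where q is the discretized heat-source vector. A and k are placeholders.
import numpy as np

def tsvd_solve(A, data, k):
    """Truncated-SVD pseudo-inverse solution keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)   # filter out small singular values
    return Vt.T @ (s_inv * (U.T @ data))
```

Dropping the smallest singular values removes the components of the solution that are most amplified by measurement noise, at the price of some smoothing.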
Tran, Dai viet. "Patch-based Bayesian approaches for image restoration". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD049.
Texto completo da fonteIn this thesis, we investigate patch-based image denoising and super-resolution under the Bayesian maximum a posteriori framework, with the help of a set of high-quality images known as standard images. Our contributions address the construction of the dictionary used to represent image patches and the prior distribution in dictionary space. We demonstrate that the careful selection of a dictionary to represent the local information of an image can improve the reconstruction. By establishing an exhaustive dictionary from the standard images, our main contribution is to locally select a sub-dictionary of matched patches to recover each patch in the degraded image. Besides the conventional Euclidean measure, we propose an effective similarity metric based on the Earth Mover's Distance (EMD) for patch selection, by considering each patch as a distribution of image intensities. Our EMD-based super-resolution algorithm outperforms several state-of-the-art super-resolution methods. To enhance the quality of image denoising, we exploit the distribution of patches in the dictionary space as an image prior to regularize the optimization problem. We develop a computationally efficient procedure, based on piecewise-constant function estimation, for low-dimensional dictionaries, and then propose a Gaussian mixture model (GMM) for higher-complexity dictionary spaces. Finally, we justify the practical number of Gaussian components required for recovering patches. Our experiments on multiple datasets, combining different dictionaries and GMM models, complement the limited evidence on the use of GMMs in this setting in the literature
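A minimal sketch of the EMD-based patch selection idea, simplified here to one-dimensional intensity distributions (the formulation in the thesis may differ): each patch is treated as a distribution of intensities and the closest dictionary patches are retained.

```python
# Hedged sketch: rank dictionary patches by the 1D Wasserstein (EMD) distance
# between intensity distributions and keep the k best matches.
import numpy as np
from scipy.stats import wasserstein_distance

def select_patches(query, dictionary, k=16):
    """query: (p, p) degraded patch; dictionary: list of (p, p) standard-image patches."""
    q = query.ravel()
    dists = np.array([wasserstein_distance(q, d.ravel()) for d in dictionary])
    return np.argsort(dists)[:k]                 # indices of the k best-matched patches
```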
Bauchet, Jean-Philippe. "Structures de données cinétiques pour la modélisation géométrique d’environnements urbains". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4091.
Texto completo da fonteThe geometric modeling of urban objects from physical measurements, and their representation in an accurate, compact and efficient way, is an enduring problem in computer vision and computer graphics. In the literature, the geometric data structures at the interface between physical measurements and output models typically suffer from scalability issues, and fail to partition 2D and 3D bounding domains of complex scenes. In this thesis, we propose a new family of geometric data structures that rely on kinetic frameworks. More precisely, we compute partitions of bounding domains by detecting geometric shapes such as line-segments and planes, and extending these shapes until they collide with each other. This process results in light partitions, containing a low number of polygonal cells. We propose two geometric modeling pipelines, one for the vectorization of regions of interest in images, another for the reconstruction of concise polygonal meshes from point clouds. Both approaches exploit kinetic data structures to decompose efficiently either a 2D image domain or a 3D bounding domain into cells. Then, we extract objects from the partitions by optimizing a binary labelling of cells. Conducted on a wide range of data in terms of contents, complexity, sizes and acquisition characteristics, our experiments demonstrate the scalability and the versatility of our methods. We show the applicative potential of our method by applying our kinetic formulation to the problem of urban modeling from remote sensing data
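A greatly simplified, static sketch of the shape-extension idea follows; the actual kinetic data structures propagate all primitives simultaneously and handle collision events, whereas the code below only extends each 2D segment along its supporting line until it first meets another segment, with a distance cap standing in for the bounding domain.

```python
# Hedged, static simplification of segment extension for 2D partitioning.
import numpy as np

def first_hit(origin, direction, segments, t_max):
    """Smallest t > 0 such that origin + t*direction meets one of the segments, else t_max."""
    best = t_max
    for a, b in segments:
        M = np.column_stack([direction, a - b])
        if abs(np.linalg.det(M)) < 1e-12:
            continue                                   # parallel: no crossing
        t, s = np.linalg.solve(M, a - origin)
        if t > 1e-9 and 0.0 <= s <= 1.0:
            best = min(best, t)
    return best

def extend_segment(p, q, others, t_max=1e3):
    """Extend segment (p, q) in both directions until it hits one of `others`."""
    d = (q - p) / np.linalg.norm(q - p)
    return (p - first_hit(p, -d, others, t_max) * d,   # grow backwards from p
            q + first_hit(q,  d, others, t_max) * d)   # grow forwards from q
```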
Carlavan, Mikaël. "Optimization of the compression/restoration chain for satellite images". Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00847182.
Texto completo da fonte
Leong-Hoï, Audrey. "Etude des techniques de super-résolution latérale en nanoscopie et développement d'un système interférométrique nano-3D". Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD048/document.
Texto completo da fonteThis manuscript presents the study of lateral super-resolution techniques in optical nanoscopy, a new high-resolution imaging method now widely used in biophysics and medical imaging to observe and measure nanostructures, with the advantages of far-field optical imaging, such as a large field of view and real-time visualization and analysis… One of the future challenges of 3D super-resolution microscopy is to avoid the use of fluorescent markers. Interferometric microscopy is a label-free 3D imaging technique enabling the detection of nanostructures. To improve the detection capability of this optical system, a first version of a protocol composed of image processing methods was developed and implemented, revealing structures that were initially unmeasurable. Then, to improve the lateral resolution of the system, a new technique combining interferometry with the principle of the photonic nanojet was developed, allowing the observation of objects smaller than the diffraction limit of the optical instrument
Deregnaucourt, Thomas. "Prédiction spatio-temporelle de surfaces issues de l'imagerie en utilisant des processus stochastiques". Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC088.
Texto completo da fonteThe prediction of a surface is now an important problem due to its use in multiple domains, such as computer vision, the simulation of avatars for cinematography or video games, etc. Since a surface can be static or dynamic, i.e. evolving with time, this problem can be separated into two classes: a spatial prediction problem and a spatio-temporal one. In order to propose a new approach for each of these problems, this thesis work has been divided into two parts. First of all, we sought to predict a static surface, assumed to be cylindrical, known partially from curves. The proposed approach consisted in deforming a cylinder onto the known curves in order to reconstruct the surface of interest. First, a correspondence between the known curves and the cylinder is generated with the help of shape analysis tools. Once this step is done, an interpolation of the deformation field, which is assumed to be Gaussian, is estimated using maximum likelihood and Bayesian inference. This methodology has then been applied to real data from two imaging domains: medical imaging and computer graphics. The obtained results show that the proposed approach outperforms existing methods in the literature, with better results when using Bayesian inference. In the second part, we were interested in the spatio-temporal prediction of dynamic surfaces. The objective was to predict a dynamic surface based on its initial surface. Since the prediction needs to learn from known observations, we first developed a spatio-temporal surface analysis tool. This analysis is based on shape analysis tools and allows better learning. Once this preliminary step is done, we estimated the temporal deformation of the dynamic surface of interest. More precisely, an adaptation of usual statistical estimators, usable on the space of surfaces, has been employed. Applying this estimated deformation to the initial surface yields an estimate of the dynamic surface. This process has then been applied to the prediction of 4D facial expressions, allowing us to generate visually convincing expressions
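One ingredient of the first part, the interpolation of a deformation field assumed Gaussian from values known on the observed curves, can be sketched with a Gaussian-process regressor; the kernel, the cylinder parameterisation and the data below are illustrative assumptions, not those of the thesis.

```python
# Hedged sketch: Gaussian-process interpolation of a 3D deformation field known
# only on the observed curves of a cylindrically parameterised surface.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# (theta, z) parameters of points lying on the known curves, and the 3D
# deformation observed there (placeholder values).
params_known = np.random.rand(200, 2)
deform_known = np.random.randn(200, 3) * 0.01

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(params_known, deform_known)

# Predict the deformation (and its uncertainty) on a regular grid of the cylinder.
params_grid = np.stack(np.meshgrid(np.linspace(0, 1, 64),
                                   np.linspace(0, 1, 64)), axis=-1).reshape(-1, 2)
deform_grid, std = gp.predict(params_grid, return_std=True)
```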
Cafaro, Alexandre. "AI-Driven Adaptive Radiation Treatment Delivery for Head & Neck Cancers". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASL103.
Texto completo da fonteHead and neck cancer (HNC) is one of the most challenging cancers to treat due to its complex anatomy and significant patient-specific changes during treatment. As the 6th most common cancer worldwide, HNC often has a poor prognosis due to late diagnosis and the lack of reliable predictive markers. Radiation therapy, typically combined with surgery, faces challenges such as inter-observer variability, complex treatment planning, and anatomical changes throughout the treatment process.Adaptive radiotherapy is essential to maintain precision as the patient's anatomy evolves during treatment. However, current low-invasive imaging methods before each treatment fraction, such as Cone Beam CT (CBCT) and biplanar X-rays, are limited in quality or provide only 2D images, making daily treatment adaptation challenging. This thesis introduces novel deep learning approaches to reconstruct accurate 3D CT images from biplanar X-rays, enabling adaptive radiotherapy that reduces radiation dose, shortens acquisition times, lowers costs, and improves treatment precision.Reconstructing 3D volumes from biplanar X-rays is inherently challenging due to the limited information provided by only two projections, leading to significant ambiguity in capturing internal structures. To address this, the thesis incorporates anatomical and deformation priors through deep learning, significantly improving reconstruction accuracy despite the very sparse measurements.The first method, X2Vision, is an unsupervised approach that uses generative models trained on head and neck CT scans to learn the distribution of head and neck anatomies. It optimizes latent vectors to generate 3D volumes that align with both biplanar X-rays and anatomical priors. By leveraging these priors and navigating the anatomical manifold, X2Vision dramatically reduces the ill-posed nature of the reconstruction problem, achieving accurate results even with just two projections.In radiotherapy, pre-treatment scans such as CT or MRI are typically available and are essential for improving reconstructions by accounting for anatomical changes over time. To make use of this data, we developed XSynthMorph, a method that integrates patient-specific features from pre-acquired planning CT scans. By combining anatomical and deformation priors, XSynthMorph adjusts for changes like weight loss, non-rigid deformations, or tumor regression. This approach enables more robust and personalized reconstructions, providing an unprecedented level of precision and detail in capturing 3D structures.We explored the clinical potential of X2Vision and XSynthMorph, with preliminary clinical evaluations demonstrating their effectiveness in patient positioning, structure retrieval, and dosimetry analysis, highlighting their promise for daily adaptive radiotherapy. To bring these methods closer to clinical reality, we developed an initial approach to integrate them into real-world biplanar X-ray systems used in radiotherapy.In conclusion, this thesis demonstrates the feasibility of adaptive radiotherapy using only biplanar X-rays. By combining generative models, deformation priors, and pre-acquired scans, we have shown that high-quality 3D reconstructions can be achieved with minimal radiation exposure. This work paves the way for daily adaptive radiotherapy, offering a low-invasive, cost-effective solution that enhances precision, reduces radiation exposure, and improves overall treatment efficiency
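The latent-optimization idea behind X2Vision can be sketched very schematically as follows; the generator, the projection operator and the hyperparameters are placeholders, and this is not the released implementation.

```python
# Hedged sketch: optimise a latent code of a pretrained anatomy generator so
# that the simulated projections of the generated CT match the two X-rays.
import torch

def reconstruct(generator, project, xray_ap, xray_lat, latent_dim=512, steps=400):
    """generator: pretrained network mapping a latent vector to a 3D CT volume.
    project: differentiable operator simulating the two radiographic projections."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        volume = generator(z)                          # stay on the learned anatomy manifold
        sim_ap, sim_lat = project(volume)              # forward projections of the candidate CT
        loss = torch.nn.functional.l1_loss(sim_ap, xray_ap) + \
               torch.nn.functional.l1_loss(sim_lat, xray_lat)
        loss.backward()
        opt.step()
    return generator(z).detach()
```

Constraining the solution to the output of a generator trained on head and neck CTs is what makes the two-view problem tractable despite its strong ill-posedness.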
Guénard, Jérôme. "Synthèse de modèles de plantes et reconstructions de baies à partir d’images". Phd thesis, 2013. http://oatao.univ-toulouse.fr/14710/1/guenard.pdf.
Texto completo da fonteMerouche, Samir. "Suivi des vaisseaux sanguins en temps réel à partir d’images ultrasonores mode-B et reconstruction 3D : application à la caractérisation des sténoses artérielles". Thèse, 2013. http://hdl.handle.net/1866/12733.
Texto completo da fonteLocating and quantifying stenosis length and severity are essential for planning the adequate treatment of peripheral arterial disease (PAD). To do this, clinicians use imaging methods such as ultrasound (US), magnetic resonance angiography (MRA) and computed tomography angiography (CTA). However, US examination cannot provide maps of the entire lower limb arteries in 3D, MRA is expensive and invasive, and CTA is ionizing and also invasive. We propose a new 3D-US robotic system with B-mode images, which is non-ionizing and non-invasive, and is able to track and reconstruct in 3D the superficial femoral artery from the iliac down to the popliteal artery, in real time. In vitro, the 3D-US reconstruction was evaluated on phantoms with simple and complex geometries against their computer-aided design (CAD) models in terms of lengths, cross-sectional areas and stenosis severity. In addition, for the phantom with a complex geometry, an evaluation was performed using the Hausdorff distance, cross-sectional areas and stenosis severity in comparison with the 3D reconstruction from CTA. A mean Hausdorff distance of 0.97 ± 0.46 mm was found between the 3D-US and 3D-CTA vessel representations. The in vitro evaluation of stenosis severity against the original phantom CAD model showed that the 3D-US reconstruction, with 3-6% error, performs better than the 3D-CTA reconstruction, with 4-13% error. The in vivo feasibility of the system to reconstruct a normal femoral artery segment of a volunteer was also investigated. These promising results show that our ultrasound robotic system is able to automatically track the vessel and reconstruct it in 3D as well as CTA. Clinically, our system will first allow the radiologist to obtain readily interpretable 3D images and, second, spare patients radiation exposure and contrast agents.
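The Hausdorff-distance comparison reported above can be sketched as follows, with hypothetical point clouds standing in for the 3D-US and 3D-CTA vessel surfaces; the actual thesis evaluation may use different sampling and preprocessing.

```python
# Hedged sketch: symmetric Hausdorff distance between two 3D surface samplings.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

us_points  = np.random.rand(5000, 3)    # placeholder: vertices of the 3D-US vessel surface
cta_points = np.random.rand(5000, 3)    # placeholder: vertices of the 3D-CTA vessel surface

d_us_to_cta = directed_hausdorff(us_points, cta_points)[0]
d_cta_to_us = directed_hausdorff(cta_points, us_points)[0]
hausdorff = max(d_us_to_cta, d_cta_to_us)
print(f"Symmetric Hausdorff distance: {hausdorff:.3f}")
```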