Dissertations / Theses on the topic 'Optimisation combinatoire – Reconstruction d'image'
Consult the top 44 dissertations / theses for your research on the topic 'Optimisation combinatoire – Reconstruction d'image.'
Ribal, Christophe. "Anisotropic neighborhoods of superpixels for thin structure segmentation." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG117.
In the field of computer vision, image segmentation aims at decomposing an image into homogeneous regions. While an image is usually composed of a regular lattice of pixels, this manuscript proposes, through the notion of site, a generic approach able to consider either pixels or superpixels. Robustness to noise in this challenging inverse problem is achieved by formulating the labels as a Markov Random Field, and finding an optimal segmentation under the prior that labels should be homogeneous inside the neighborhood of a site. However, this regularization of the solution introduces unwanted artifacts, such as the early loss of thin structures, defined as structures whose size is small in at least one dimension. Anisotropic neighborhood construction fitted to thin structures allows us to tackle these artifacts. Firstly, the orientations of the structures in the image are estimated using any of the three presented options: the minimization of an energy, Tensor Voting, and RORPO. Secondly, four methods for constructing the actual neighborhood from the orientation maps are proposed: shape-based neighborhood, computed from the relative positioning of the sites; dictionary-based neighborhood, derived from the discretization to a finite number of configurations of neighbors for each site; and two path-based neighborhoods, namely target-based neighborhood with fixed extremities, and cardinal-based neighborhood with fixed path lengths. Finally, the results provided by the Maximum A Posteriori criterion (computed with graph-cut optimization) with these anisotropic neighborhoods are compared against isotropic ones on two applications: thin structure detection and depth reconstruction in Shape From Focus. The different combinations of guidance map estimations and neighborhood constructions are illustrated and evaluated quantitatively and qualitatively in order to exhibit the benefits of the proposed approaches.
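To make the kind of energy minimized by such approaches concrete, here is a minimal numpy sketch (illustrative only, not the code of the thesis) of a pairwise MRF energy combining unary data costs with a Potts prior summed over an arbitrary neighborhood system, so that isotropic and anisotropic neighborhoods can be compared simply by changing the offset list; all names and parameters are assumptions.

```python
import numpy as np

def mrf_energy(labels, data_cost, beta, neighborhood):
    """Pairwise MRF energy: unary data costs plus a Potts prior over an
    arbitrary (possibly anisotropic) neighborhood system.

    labels       : (H, W) integer label map
    data_cost    : (H, W, K) cost of assigning each of K labels to each site
    beta         : weight of the Potts regularization
    neighborhood : list of (dy, dx) offsets defining each site's neighbors
    """
    H, W = labels.shape
    unary = data_cost[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    pairwise = 0.0
    for dy, dx in neighborhood:
        # Pair each site with its neighbor at offset (dy, dx), staying in bounds.
        a = labels[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
        b = labels[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
        pairwise += np.count_nonzero(a != b)
    return unary + beta * pairwise

# Isotropic 4-connectivity versus a (here vertical) elongated neighborhood.
iso = [(0, 1), (1, 0)]
aniso = [(1, 0), (2, 0)]
```

A graph-cut solver would then minimize this kind of energy over the label map; the function above only evaluates it for a given labeling.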
Tlig, Ghassen. "Programmation mathématique en tomographie discrète." Phd thesis, Conservatoire national des arts et métiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00957445.
Tlig, Ghassen. "Programmation mathématique en tomographie discrète." Electronic Thesis or Diss., Paris, CNAM, 2013. http://www.theses.fr/2013CNAM0886.
The tomographic imaging problem deals with reconstructing an object from data called projections, collected by illuminating the object from many different directions. A projection means the information derived from the transmitted energies when an object is illuminated from a particular angle. The solution to the problem of how to reconstruct an object from its projections dates back to 1917 and Radon. Tomographic reconstruction is applicable in many interesting contexts such as nondestructive testing, image processing, electron microscopy, data security, industrial tomography and materials science. Discrete tomography (DT) deals with the reconstruction of discrete objects from a limited number of projections. The projections are the sums, along a few angles, of the object to be reconstructed. One of the main problems in DT is the reconstruction of binary matrices from two projections. In general, the reconstruction of binary matrices from a small number of projections is underdetermined and the number of solutions can be very large. Moreover, the projection data and the prior knowledge about the object to reconstruct are not sufficient to determine a unique solution. So DT is usually reduced to an optimization problem to select the best solution in a certain sense. In this thesis, we deal with the tomographic reconstruction of binary and colored images. In particular, the research objectives are to derive combinatorial optimization techniques for discrete tomography problems.
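For the two-projection case mentioned above, a classical starting point is the Gale-Ryser greedy construction of a binary matrix consistent with given row and column sums; the sketch below is an illustrative baseline under assumed inputs, not the mathematical-programming formulation of the thesis.

```python
import numpy as np

def reconstruct_binary(row_sums, col_sums):
    """Greedy (Gale-Ryser style) reconstruction of a binary matrix from its
    horizontal and vertical projections. Returns None if no matrix exists."""
    if sum(row_sums) != sum(col_sums):
        return None
    m, n = len(row_sums), len(col_sums)
    A = np.zeros((m, n), dtype=int)
    remaining = np.array(col_sums, dtype=int)
    # Process rows by decreasing row sum; fill the columns with the largest
    # remaining column sums first.
    for i in sorted(range(m), key=lambda k: -row_sums[k]):
        cols = np.argsort(-remaining)[:row_sums[i]]
        if row_sums[i] > n or np.any(remaining[cols] <= 0):
            return None
        A[i, cols] = 1
        remaining[cols] -= 1
    return A if np.all(remaining == 0) else None

A = reconstruct_binary([2, 1, 1], [2, 1, 1])
print(A, A.sum(axis=1), A.sum(axis=0))
```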
Soldo, Yan. "Optimisation de la reconstruction d'image pour SMOS et SMOS-NEXT." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2120/.
The Soil Moisture and Ocean Salinity (SMOS) satellite was launched by the European Space Agency (ESA) in November 2009 to allow a better understanding of Earth's climate, the water cycle and the availability of water resources at the global scale, by providing global maps of soil moisture and ocean salinity. SMOS' payload, an interferometric radiometer, measures Earth's natural radiation in the protected 1400-1427 MHz band (microwave, L-band). However, since launch the presence of numerous Radio-Frequency Interferences (RFI) has been clearly observed, despite the International Telecommunication Union (ITU) recommendations to preserve this band for scientific use. The pollution created by these artificial signals leads to a significant loss of data, and a common effort of ESA and the national authorities is necessary in order to identify and switch off the emitters. From a scientific point of view we focus on the development of algorithms for the detection of RFI, their localization on the ground and the mitigation of the signals they introduce into the SMOS data. These objectives have led to the different approaches proposed in this contribution. The ideal solution would consist in mitigating the interference signals by creating synthetic signals corresponding to the interferences and subtracting them from the actual measurements. For this purpose, an algorithm was developed which makes use of a priori information on the natural scene provided by meteorological models. Accounting for this information, it is possible to retrieve an accurate description of the RFI from the visibilities between antennas, and therefore create the corresponding signal. Even though assessing the performance of a mitigation algorithm for SMOS is not straightforward, as it has to be done indirectly, different methods are proposed and they all show a general improvement of the data for this particular algorithm. Nevertheless, due to the complexity of assessing the performance at the global scale and the uncertainty inevitably introduced along with the synthetic signal, and to avoid a naive use of the mitigated data by the end user, an operational implementation of mitigation algorithms is not foreseen for the time being. Instead, an intermediate solution is proposed which consists of estimating the RFI contamination for a given snapshot, and then creating a map of the regions that are contaminated to less than a certain threshold (or several thresholds). Another goal has been to allow the characterization of RFI (location on the ground, power emitted, position in the field of view) within a specified geographic zone in a short time. This approach uses the Fourier components of the observed scene to evaluate the brightness temperature spatial distribution, in which the RFIs appear as "hot spots". This algorithm has proven reliable, robust and precise, so that it can be used for the creation of RFI databases and for monitoring RFI contamination at the local and global scale. Such databases were in fact created and used to highlight systematic errors of the instrument and seasonal variations of the localization results. The second main research topic has been to investigate the principle of SMOS-NEXT, a prospective mission that aims at ensuring the continuity of space-borne soil moisture and ocean salinity measurements in the future with significantly improved spatial resolution of the retrievals.
In order to achieve the latter, this project intends to implement a groundbreaking interferometric approach called spatio-temporal aperture synthesis. This technique consists in correlating the signals received at antennas in different places at different times, within the coherence limits imposed by the bandwidth. To prove the feasibility of this technique, a measurement campaign was carried out at the radio telescope in Nançay, France. Even though the analysis of the experimental data has not allowed concluding on the validity of the measurement principle, a series of difficulties have been highlighted and the knowledge thus gained constitutes a valuable base for the foreseen second measurement campaign.
Israel-Jost, Vincent. "Optimisation de la reconstruction en tomographie d'émission monophotonique avec colimateur sténopé." Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/ISRAEL-JOST_Vincent_2006.pdf.
In SPECT small animal imaging, it is highly recommended to accurately model the response of the detector in order to improve the low spatial resolution. The volume to reconstruct is thus obtained both by backprojecting and deconvolving the projections. We chose iterative methods, which allow one to solve the inverse problem independently of the model's complexity. We describe in this work a Gaussian model of point spread function (PSF) whose position, width and maximum are computed according to physical and geometrical parameters. Then we use the rotation symmetry to replace the computation of P projection operators, each one corresponding to one position of the detector around the object, by the computation of only one of them. This is achieved by choosing an appropriate polar discretization, for which we control the angular density of voxels to avoid oversampling the center of the field of view. Finally, we propose a new family of algorithms, the so-called frequency-adapted algorithms, which make it possible to optimize the reconstruction of a given band in the frequency domain with respect to both the speed of convergence and the quality of the image.
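As an illustration of the kind of parametric PSF model mentioned above, here is a small numpy sketch of a Gaussian PSF whose position, width and amplitude are free parameters; in the actual system those values would be derived from the pinhole geometry and detector physics, so everything below is an assumption for illustration.

```python
import numpy as np

def gaussian_psf(grid, center, sigma, amplitude):
    """Gaussian point-spread-function model evaluated on a 2D grid.
    In practice center, sigma and amplitude would be computed from
    physical and geometrical parameters of the detector (illustrative only)."""
    yy, xx = grid
    r2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return amplitude * np.exp(-r2 / (2 * sigma ** 2))

grid = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')
psf = gaussian_psf(grid, center=(32.0, 32.0), sigma=2.5, amplitude=1.0)
```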
Israel-Jost, Vincent, Eric Sonnendrücker, and André Constantinesco. "Optimisation de la reconstruction en tomographie d'émission monophotonique avec colimateur sténopé." Strasbourg : Université Louis Pasteur, 2007. http://eprints-scd-ulp.u-strasbg.fr:8080/698/01/israel-jost2006.pdf.
Bus, Norbert. "The use of geometric structures in graphics and optimization." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1117/document.
Real-world data has a large geometric component, showing significant geometric patterns. How to use the geometric nature of data to design efficient methods has become a very important topic in several scientific fields, e.g., computational geometry, discrete geometry, computer graphics, computer vision. In this thesis we use geometric structures to design efficient algorithms for problems in two domains, computer graphics and combinatorial optimization. Part I focuses on a geometric data structure called the well-separated pair decomposition and its usage for one of the most challenging problems in computer graphics, namely efficient photo-realistic rendering. One solution is the family of many-lights methods that approximate global illumination by individually computing illumination from a large number of virtual point lights (VPLs) placed on surfaces. Considering each VPL individually results in a vast number of calculations. One successful strategy to reduce computations is to group the VPLs into a small number of clusters that are treated as individual lights with respect to each point to be shaded. We use the well-separated pair decomposition of points as a basis for a data structure for pre-computing and compactly storing a set of view-independent candidate VPL clusterings, showing that a suitable clustering of the VPLs can be efficiently extracted from this data structure. We show that instead of clustering points and/or VPLs independently, what is required is to cluster the product space of the set of points to be shaded and the set of VPLs based on the induced pairwise illumination. Additionally we propose an adaptive sampling technique to reduce the number of visibility queries for each product-space cluster. Our method handles any light source that can be approximated with virtual point lights (VPLs) as well as highly glossy materials, and outperforms previous state-of-the-art methods. Part II focuses on developing new approximation algorithms for a fundamental NP-complete problem in computational geometry, namely the minimum hitting set problem, with particular focus on the case where, given a set of points and a set of disks, we wish to compute the minimum-sized subset of the points that hits all disks. It turns out that efficient algorithms for geometric hitting set rely on a key geometric structure, called an epsilon-net. We give an algorithm that uses only Delaunay triangulations to construct epsilon-nets of size 13.4/epsilon and we provide a practical implementation of a technique to calculate hitting sets in near-linear time using small-sized epsilon-nets. Our results yield a 13.4-approximation for the hitting set problem with an algorithm that runs efficiently even on large data sets. For smaller datasets, we present an implementation of the local search technique along with tight bounds for its approximation factor, yielding an (8 + epsilon)-approximation algorithm with running time O(n^{2.34}).
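As a point of reference for the geometric hitting set problem described above, the following sketch implements the plain greedy heuristic (pick the point hitting the most still-unhit disks); it is only an illustrative baseline under assumed data layouts, not the epsilon-net or local-search algorithms of the thesis.

```python
import numpy as np

def greedy_hitting_set(points, disks):
    """Greedy approximation for geometric hitting set: repeatedly pick the
    point contained in the largest number of still-unhit disks.
    points: (n, 2) array; disks: list of (center, radius) pairs."""
    centers = np.array([c for c, _ in disks])
    radii = np.array([r for _, r in disks])
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    member = d2 <= radii[None, :] ** 2        # member[i, j]: point i lies in disk j
    unhit = list(range(len(disks)))
    chosen = []
    while unhit:
        counts = member[:, unhit].sum(axis=1)
        best = int(np.argmax(counts))
        if counts[best] == 0:                 # some disk contains no candidate point
            return None
        chosen.append(best)
        unhit = [j for j in unhit if not member[best, j]]
    return chosen

pts = np.array([[0.0, 0.0], [2.0, 0.0], [5.0, 5.0]])
dks = [(np.array([0.5, 0.0]), 1.0), (np.array([1.5, 0.0]), 1.0), (np.array([5.0, 5.0]), 0.5)]
print(greedy_hitting_set(pts, dks))
```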
Israel-Jost, Vincent. "Optimisation de la reconstruction en tomographie d'émission monophotonique avec collimateur sténopé." Phd thesis, Université Louis Pasteur - Strasbourg I, 2006. http://tel.archives-ouvertes.fr/tel-00112526.
To obtain usable results, both in terms of spatial resolution and of computation time in the rat or the mouse, we describe in this work our modelling choices based on a Gaussian impulse response, adjusted according to physical and geometrical parameters. We then use the rotation symmetry inherent to the device to reduce the computation of P projection operators to the computation of a single one, through a discretization of space compatible with this symmetry, while controlling the angular density of voxels to avoid oversampling at the center of the volume.
Finally, we propose a new class of frequency-adapted algorithms that make it possible to optimize the reconstruction of a given range of spatial frequencies, thus avoiding the need to compute many iterations when the spectrum to be reconstructed lies mostly in the high frequencies.
Favier, Aurélie. "Décompositions fonctionnelles et structurelles dans les modèles graphiques probabilistes appliquées à la reconstruction d'haplotypes." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1527/.
This thesis is based on two topics: decomposition in graphical models, which include Bayesian networks and cost function networks (WCSP), and haplotype reconstruction in pedigrees. We apply WCSP techniques to treat Bayesian networks. We exploit structural and functional properties, in exact and approximate methods. In particular, we define a decomposition of functions which produces functions with a smaller number of variables. An application example in optimization is haplotype reconstruction. It is essential for a better prediction of the severity of a disease or to understand particular physical characters. Haplotype reconstruction is represented with a Bayesian network. The functional decomposition allows this Bayesian network to be reduced to a WCSP optimization problem (Max-2SAT).
Ribés, Cortés Alejandro. "Analyse multispectrale et reconstruction de la réflectance spectrale de tableaux de maître." Paris, ENST, 2003. https://pastel.archives-ouvertes.fr/pastel-00000761.
This thesis is devoted to: (1) methods for spectral reflectance reconstruction of the surface of coloured materials in each pixel of an N-channel multispectral image (N > 3). We propose a classification of spectral reflectance reconstruction methods and we improve some of them. We adapted Mixture Density Networks (MDN) to spectral reconstruction; the MDN were enriched by developing an automatic system of architecture selection. (2) We studied, characterised and evaluated a new high-definition multispectral camera created for the European project CRISATEL. An automatic calibration procedure and an image correction system were conceived and implemented. We also worked on the selection of the optical filters most adapted to the spectral reconstruction of a specific material. Finally, the work completed during this thesis was applied to art paintings. We present two examples, painted by Georges de la Tour and Renoir.
Lobel, Pierre, and Michel Barlaud. "Problèmes de diffraction inverse : reconstruction d'image et optimisation avec régularisation par préservation des discontinuités - application à l'imagerie microonde." Nice, 1996. http://www.theses.fr/1996NICE4989.
Khazâal, Ali. "Reconstruction d'images pour la mission spatiale SMOS." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/917/.
Synthetic aperture imaging radiometers are powerful sensors for high-resolution observations of the Earth at low microwave frequencies. Within this context, the European Space Agency is currently developing the Soil Moisture and Ocean Salinity (SMOS) mission, devoted to the monitoring of soil moisture and ocean salinity at the global scale from L-band space-borne radiometric observations obtained with a 2-D interferometer. This PhD is concerned with the reconstruction of radiometric brightness temperature maps from interferometric measurements through a regularization approach called band-limited regularization. More precisely, it concerns the reduction of the systematic error (or bias) in the reconstruction of radiometric brightness temperature maps from SMOS interferometric measurements. It also extends the concept of the band-limited regularization approach to the processing of dual and full polarimetric data. In addition, two problems that may affect the quality of the reconstruction are investigated. First, the impact of correlator and receiver failures on the reconstruction process is studied. Then, the calibration of the MIRAS antenna voltage patterns, when the instrument is in orbit, is also studied, and a general approach is proposed to estimate these antenna patterns.
Jin, Qiyu. "Restauration des images et optimisation des poids." Lorient, 2012. http://www.theses.fr/2012LORIS259.
In this thesis, we present and study some methods for restoring the original image from an image degraded by additive noise or blurring. The thesis is composed of 4 parts and 6 chapters. Part I. In this part, we deal with additive Gaussian white noise. This part is divided into two chapters: Chapter 1 and Chapter 2. In Chapter 1, a new image denoising algorithm to deal with the additive Gaussian white noise model is given. Like the non-local means method, we propose a filter based on the weighted average of the observations in a neighborhood, with weights depending on the similarity of local patches. But in contrast to the non-local means filter, instead of using a fixed Gaussian kernel, we propose to choose the weights by minimizing a tight upper bound of the mean squared error. This approach makes it possible to define weights adapted to the function at hand, mimicking the weights of the oracle filter. Under some regularity conditions on the target image, we show that the obtained estimator converges at the usual optimal rate. The proposed algorithm is parameter-free in the sense that it automatically calculates the bandwidth of the smoothing kernel. In Chapter 2, we employ the techniques of oracle estimation to explain how to determine the widths of the similarity patch and of the search window. We justify the non-local means algorithm by proving its convergence under some regularity conditions on the target image. Part II. This part has two chapters. In Chapter 3, we propose an image denoising algorithm for data contaminated by Poisson noise. Our key innovations are: firstly, we handle the Poisson noise directly, i.e., we do not transform the Poisson-distributed noise into a Gaussian-distributed one with constant variance; secondly, we obtain the weights of the estimates by minimizing a tight upper bound of the mean squared error. In Chapter 4, we adapt the non-local means to restore images contaminated by Poisson noise. Part III. This part is dedicated to the problem of reconstructing the image from data contaminated by a mixture of Gaussian and impulse noise (Chapter 5). In Chapter 5, we modify the Rank-Ordered Absolute Differences (ROAD) statistic in order to detect the impulse noise more effectively in the mixture of impulse and Gaussian noise. Combining it with the method of optimal weights, we obtain a new method to deal with the mixed noise, called the Optimal Weights Mixed Filter. Part IV. The final part focuses on inverse problems for blurred images, and it includes Chapter 6. This chapter provides a new algorithm for solving inverse problems, based on the minimization of the L2 norm and on the control of the total variation. It consists in relaxing the role of the total variation in the classical total variation minimization approach, which permits us to obtain better approximations to the inverse problem. The numerical results on the deconvolution problem show that our method outperforms some previous ones.
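To fix ideas about the patch-weighted averaging that both the non-local means filter and the optimal-weights approach build on, here is a deliberately slow, self-contained numpy sketch of a basic non-local means step with a fixed exponential kernel (the thesis replaces that kernel by weights minimizing an upper bound of the mean squared error); parameter names and values are illustrative assumptions.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Toy non-local means: each pixel becomes a weighted average of pixels
    in a search window, with weights driven by patch similarity."""
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ic, jc = i + p + s, j + p + s
            ref = padded[ic - p:ic + p + 1, jc - p:jc + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = padded[ic + di - p:ic + di + p + 1,
                                  jc + dj - p:jc + dj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ic + di, jc + dj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```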
Gambette, Philippe. "Méthodes combinatoires de reconstruction de réseaux phylogénétiques." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2010. http://tel.archives-ouvertes.fr/tel-00608342.
Nguyen, Trong Phuc. "Techniques d'optimisation en traitement d'image et vision par ordinateur et en transport logistique." Metz, 2007. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/2007/Nguyen.Trong_Phue.SMZ0719.pdf.
The work presented in this thesis is related to new optimization techniques for the solution of four important problems from two fields: transport logistics, and computer vision and image processing. They are nonconvex optimization problems of very large dimensions, for which the search for good solution methods remains topical. Our work is based mainly on DC programming and DCA, which have been successfully applied in various fields of applied sciences, including machine learning. It is motivated and justified by the robustness and the good performance of DC programming and DCA in comparison with existing methods. The thesis is divided into five chapters. The first chapter, which serves as a reference for the other chapters, presents some basic tools of DC programming and DCA. The second chapter discusses the problem of supply chain design at the strategic level, which can be formulated as a mixed integer linear program. We treat this problem by DC algorithms via exact penalty, on the basis of suitable DC decompositions and a technique for finding a good starting point. The third chapter is concerned with discrete tomography applied to the problem of binary image reconstruction. This problem is solved by three different approaches based on DCA. In the fourth chapter, we study the problem of image segmentation using Fuzzy C-Means clustering via DCA. A nonlinear estimation of the fundamental matrix in computer vision, by the trust-region method based on the truncated conjugate gradient method or DCA, is developed in the final chapter.
Prigent, Sylvain. "Complétion combinatoire pour la reconstruction de réseaux métaboliques, et application au modèle des algues brunes Ectocarpus siliculosus." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S077/document.
In this thesis we focused on the development of a comprehensive approach to reconstruct metabolic networks, applied to unconventional biological species for which little information is available. Traditionally, this reconstruction is based on three steps: the creation of a metabolic draft from a genome, the completion of this draft, and the verification of the results. We have been particularly interested in the hard combinatorial optimization problem represented by the gap-filling step. We used Answer Set Programming (ASP) to solve this combinatorial problem. Changes to an existing method allowed us to improve both the computational time and the quality of the modeling. This entire metabolic network reconstruction process was applied to the brown algal model Ectocarpus siliculosus, allowing us to reconstruct the first metabolic network of a brown macro-alga. The reconstruction of this network allowed us to improve our understanding of the metabolism of this species and to improve the annotation of its genome.
Vazquez Ortiz, Karla Esmeralda. "Advanced methods to solve the maximum parsimony problem." Thesis, Angers, 2016. http://www.theses.fr/2016ANGE0015/document.
Phylogenetic reconstruction is considered a central underpinning of diverse fields like ecology, molecular biology and physiology, where genealogical relationships of species or gene sequences, represented as trees, can provide the most meaningful insights into biology. Maximum Parsimony (MP) is an important approach to phylogenetic reconstruction, based on an optimality criterion under which the tree that minimizes the total number of genetic transformations is preferred. In this thesis we propose different methods to cope with the combinatorial nature of this NP-complete problem. First, we present a competitive Simulated Annealing algorithm which helped us find trees of better parsimony score than the ones previously known for a set of instances. Second, we propose a Path-Relinking technique that appears to be suitable for tree comparison but not for finding trees of better quality. Third, we give a GPU implementation of the objective function of the problem that can reduce the runtime for instances that have a large number of residues per taxon. Finally, we introduce a predictor that is able to estimate the best parsimony score of a huge set of instances with high accuracy.
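The objective function referred to above is the parsimony score of a fixed tree; for a single character it can be computed with Fitch's small-parsimony algorithm, sketched below on a toy rooted binary tree (illustrative code only; the thesis works on much larger trees and on a GPU implementation).

```python
def fitch_score(tree, leaf_states):
    """Fitch's small-parsimony algorithm: minimum number of state changes
    on a fixed rooted binary tree for one character (site).
    tree: dict internal node -> (left_child, right_child); leaves map to states."""
    def post(node):
        if node in leaf_states:
            return {leaf_states[node]}, 0
        left, right = tree[node]
        sl, cl = post(left)
        sr, cr = post(right)
        inter = sl & sr
        if inter:
            return inter, cl + cr
        return sl | sr, cl + cr + 1            # union forces one extra change

    _, score = post('root')
    return score

# Toy example: ((A,B),(C,D)) with nucleotide states at one site.
tree = {'root': ('n1', 'n2'), 'n1': ('A', 'B'), 'n2': ('C', 'D')}
states = {'A': 'T', 'B': 'T', 'C': 'G', 'D': 'T'}
print(fitch_score(tree, states))   # 1 change
```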
Tairi, Souhil. "Développement de méthodes itératives pour la reconstruction en tomographie spectrale." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0160/document.
In recent years, hybrid pixel detectors have paved the way for the development of spectral X-ray computed tomography, or spectral CT. Spectral CT provides more information about the internal structure of the object compared to conventional absorption CT. One of its objectives in medical imaging is to obtain images of components of interest in an object, such as biological markers called contrast agents (iodine, barium, etc.). The state of the art of methods for simultaneous reconstruction and separation of spectral CT data remains limited to this day. Existing reconstruction approaches are limited in their performance and often do not take into account the complexity of the acquisition model. The main objective of this thesis work is to propose better-quality reconstruction approaches that take into account the complexity of the model in order to improve the quality of the reconstructed images. Our contribution considers the non-linear polychromatic model of the X-ray beam and combines it with a prior model on the components of the object to be reconstructed. The problem thus obtained is an inverse, non-convex and ill-posed problem of very large dimensions. To solve it, we propose a proximal algorithm with variable metrics. Promising results are shown on real data. They show that the proposed approach allows good separation and reconstruction despite the presence of noise (Gaussian or Poisson). Compared to existing approaches, the proposed approach has advantages in terms of convergence speed.
Weiss, Pierre. "Algorithmes rapides d'optimisation convexe : applications à la restauration d'images et à la détection de changements." Nice, 2008. http://www.theses.fr/2008NICE4032.
This PhD contains contributions in numerical analysis and in computer vision, and is divided into two parts. In the first part, we focus on the fast resolution, using first-order methods, of convex optimization problems. Those problems appear naturally in many image processing tasks like image reconstruction, compressed sensing or texture+cartoon decompositions. They are generally non-differentiable or ill-conditioned. We show that they can be solved very efficiently using fine properties of the functions to be minimized. We analyze in a systematic way their convergence rate using recent results due to Y. Nesterov. To our knowledge, the proposed methods correspond to the state of the art of first-order methods. In the second part, we focus on the problem of change detection between two remotely sensed images taken from the same location at two different times. One of the main difficulties in solving this problem is the difference in illumination conditions between the two shots. This leads us to study the illumination invariance of level lines. We completely characterize the 3D scenes which produce invariant level lines. We show that they correspond quite well to urban scenes. Then we propose a variational framework and a simple change detection algorithm which gives satisfying results both on synthetic OpenGL scenes and on real Quickbird images.
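As a concrete instance of the accelerated first-order (Nesterov-type) schemes discussed above, here is a short FISTA sketch for the l1-penalized least-squares problem; it is a generic textbook example under assumed problem sizes, not the specific algorithms developed in the thesis.

```python
import numpy as np

def fista_lasso(A, y, lam, n_iter=200):
    """FISTA for  min_x 0.5*||Ax - y||^2 + lam*||x||_1  (accelerated
    proximal gradient with Nesterov-style momentum)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - y)
        w = z - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)   # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)                    # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = fista_lasso(A, A @ x_true, lam=0.1)
```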
Jolivet, Frederic. "Approches "problèmes inverses" régularisées pour l'imagerie sans lentille et la microscopie holographique en ligne." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES012/document.
In digital imaging, regularized inverse problem methods reconstruct information of interest from measurements and an image formation model. When the inverse problem is ill-posed and ill-conditioned, and the image formation model used provides few constraints, it is necessary to introduce a priori conditions in order to restrict the ambiguity of the inversion. This allows us to guide the reconstruction towards a satisfying solution. The work in this thesis deals with the development of reconstruction algorithms for digital holograms based on large-scale optimization methods (smooth and non-smooth). This general framework allowed us to propose different approaches adapted to the challenges found with this unconventional imaging technique: super-resolution, reconstruction outside the sensor's field of view, color holography and, finally, the quantitative reconstruction of phase objects (i.e. transparent objects). For this last case, the reconstruction problem consists of estimating the complex 2D transmittance of objects having absorbed and/or phase-shifted the light wave during the recording of the hologram. The proposed methods are validated with the help of numerical simulations and are then applied to experimental data from lensless imaging or from in-line holographic microscopy (coherent imaging in transmission, with a microscope objective). The applications range from the reconstruction of opaque resolution targets to the reconstruction of biological objects (bacteria), including the reconstruction of evaporating ether droplets with a view to turbulence studies in fluid mechanics.
Gaullier, Gil. "Modèles déformables contraints en reconstruction d'images de tomographie non linéaire par temps d'arrivée." Thesis, Strasbourg, 2013. http://www.theses.fr/2013STRAD026/document.
Image reconstruction from first arrival times is a difficult task due to its ill-posed nature and to the non-linearity of the associated direct problem. In this thesis, the purpose is to use a deformable model because it makes it possible to introduce a global shape prior on the objects to be reconstructed, which leads to more stable solutions of better quality. First, high-level shape constraints are introduced in computerized tomography, for which the direct problem is linear. Secondly, different strategies to solve the image reconstruction problem under a non-linearity hypothesis are considered. The chosen strategy approximates the direct problem by a series of linear problems, which leads to a simple successive-minimization algorithm in which the shape prior is introduced along the minimization. The efficiency of the method is demonstrated on simulated data as well as on real data obtained from a specific measurement device developed by IFSTTAR for non-destructive evaluation of civil engineering structures.
Peyrot, Jean-Luc. "Optimisation de la chaîne de numérisation 3D : de la surface au maillage semi-régulier." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4126/document.
Nowadays, 3D digitization systems generate numeric representations that are both realistic and of high geometric accuracy with respect to real surfaces. However, this geometric accuracy, obtained by oversampling surfaces, significantly increases the amount of generated data. Consequently, the resulting meshes are very dense, and not suitable to be visualized, transmitted or even stored efficiently. Nevertheless, the semi-regular representation, thanks to its scalability and compactness, overcomes this problem. This thesis aims at optimizing the classic 3D digitization chain, first by improving the sampling of surfaces while preserving geometric features, and secondly by shortening the number of treatments required to obtain such semi-regular meshes. To achieve this goal, we integrated into a stereoscopic system the Poisson-disk sampling, which realizes a good tradeoff between undersampling and oversampling thanks to its blue-noise properties. Then, we produced a semi-regular meshing technique that works directly on the stereoscopic images, and not on a meshed version of the point clouds usually generated by such 3D scanners. Experimental results prove that our contributions efficiently generate semi-regular representations that are accurate with respect to real surfaces, while reducing the generated amount of data.
Sghaier, Maissa. "Clinical-task based reconstruction in Digital Breast Tomosynthesis." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG040.
The reconstruction of a volumetric image from Digital Breast Tomosynthesis (DBT) measurements is an ill-posed inverse problem, for which existing iterative regularized approaches can provide a good solution. However, the clinical task is somewhat omitted in the derivation of those techniques, although it plays a primary role in the radiologist's diagnosis. In this work, we address this issue by introducing a novel variational formulation for DBT reconstruction. Our approach is tailored to a specific clinical task, namely the detection of microcalcifications. Our method aims at simultaneously enhancing the detection performance and enabling a high-quality restoration of the background breast tissues. First, we propose an original approach aiming at enhancing the detectability of microcalcifications in DBT reconstruction. Thus, we formulate a detectability function inspired by mathematical model observers. Then, we integrate it in a cost function which is minimized for 3D reconstruction of DBT volumes. Experimental results demonstrate the interest of our approach in terms of microcalcification detectability. In a second part, we introduce Spatially Adaptive Total Variation (SATV) as a new regularization strategy applied to DBT reconstruction, in addition to the detectability function. Hence, an original formulation of the weighted gradient field is introduced, which efficiently incorporates prior knowledge on the location of small objects. Then, we derive our SATV regularization and incorporate it in our proposed 3D reconstruction approach for DBT. We carry out several experiments, in which the SATV regularizer shows a promising improvement with respect to state-of-the-art regularization methods. Third, we investigate the application of the Majorize-Minimize Memory Gradient (3MG) algorithm to our proposed reconstruction approach. Thus, we suggest two numerical improvements to boost the speed of the reconstruction scheme. Then, we assess the numerical performance of 3MG by comparing the convergence speed of the proposed method with state-of-the-art convex optimization algorithms. The last part of this thesis is focused on the quantitative assessment of the contribution of our proposed DBT reconstruction. Thus, we conduct a visual experiment involving fourteen readers, including nine radiologists with different levels of expertise and five GE Healthcare experts in mammography. According to specific visual criteria, the results show that our proposed reconstruction approach outperforms the standard non-regularized least squares solution.
El, Bitar Ziad. "Optimisation et validation d'un algorithme de reconstruction 3D en tomographie d'émission monophotonique à l'aide de la plateforme de simulation GATE." Clermont-Ferrand 2, 2006. http://www.theses.fr/2006CLF21704.
Zhang, Naiyu. "Cellular GPU Models to Euclidean Optimization Problems : Applications from Stereo Matching to Structured Adaptive Meshing and Traveling Salesman Problem." Thesis, Belfort-Montbéliard, 2013. http://www.theses.fr/2013BELF0215/document.
The work presented in this PhD studies and proposes cellular parallel computation models able to address different types of NP-hard optimization problems defined in the Euclidean space, and their implementation on the Graphics Processing Unit (GPU) platform. The goal is both to allow dealing with large-size problems and to provide substantial acceleration factors by massive parallelism. The field of applications concerns vehicle embedded systems for stereovision as well as transportation problems in the plane, such as vehicle routing problems. The main characteristic of the cellular model is that it decomposes the plane into an appropriate number of cellular units, each responsible for a constant part of the input data, and such that each cell corresponds to a single processing unit. Hence, the number of processing units and the required memory grow linearly with the optimization problem size, which makes the model able to deal with very large-size problems. The effectiveness of the proposed cellular models has been tested on the GPU parallel platform on four applications. The first application is a stereo-matching problem. It concerns color stereovision. The problem input is a stereo image pair, and the output a disparity map that represents depths in the 3D scene. The goal is to implement and compare GPU/CPU winner-takes-all local dense stereo-matching methods dealing with CFA (color filter array) image pairs. The second application focuses on the possible GPU improvements able to reach near real-time stereo-matching computation. The third and fourth applications deal with a cellular GPU implementation of the self-organizing map neural network in the plane. The third application concerns structured mesh generation according to the disparity map, to allow 3D surface compressed representation. The fourth application addresses large-size Euclidean traveling salesman problems (TSP) with up to 33708 cities. In all applications, GPU implementations allow substantial acceleration factors over CPU versions as the problem size increases, for similar or higher quality results. The GPU speedup over the CPU was about 20 times for the CFA image pairs, with a GPU computation time of about 0.2 s for a small image pair from the Middlebury database. The near real-time stereovision algorithm takes about 0.017 s for a small image pair, which is one of the fastest records in the Middlebury benchmark with moderate quality. The structured mesh generation is evaluated on the Middlebury data set to gauge the GPU acceleration factor and the quality obtained. The acceleration factor of the GPU parallel self-organizing map over the CPU version, on the largest TSP problem with 33708 cities, is about 30 times.
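To make the self-organizing-map approach to the Euclidean TSP concrete, here is a small sequential numpy sketch: a ring of neurons is repeatedly attracted towards randomly drawn cities and the final ring ordering yields a tour. It is a toy CPU illustration with assumed hyper-parameters, not the cellular GPU decomposition of the thesis.

```python
import numpy as np

def som_tsp(cities, n_iter=5000, seed=0):
    """Toy self-organizing map for the Euclidean TSP on a ring of neurons."""
    rng = np.random.default_rng(seed)
    n = 4 * len(cities)                       # number of neurons on the ring
    ring = cities[rng.integers(len(cities), size=n)] + 0.01 * rng.standard_normal((n, 2))
    lr, radius = 0.8, n / 8.0
    for _ in range(n_iter):
        city = cities[rng.integers(len(cities))]
        winner = np.argmin(((ring - city) ** 2).sum(axis=1))
        dist = np.abs(np.arange(n) - winner)
        dist = np.minimum(dist, n - dist)     # circular distance on the ring
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))
        ring += lr * h[:, None] * (city - ring)
        lr *= 0.99977                         # decay learning rate and radius
        radius = max(radius * 0.9997, 1.0)
    # Read the tour off the ring: order cities by their closest neuron.
    order = np.argsort([np.argmin(((ring - c) ** 2).sum(axis=1)) for c in cities])
    return order

cities = np.random.default_rng(1).random((30, 2))
tour = som_tsp(cities)
```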
Shabou, Aymen. "Minimisation multi-étiquette d'énergies markoviennes par coupe-minimum sur graphe : application à la reconstruction de la phase interférométrique en imagerie RSO." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00565362.
Shabou, Aymen. "Minimisation multi-étiquette d'énergies markoviennes par coupe-minimum sur graphe : application à la reconstruction de la phase interférométrique en imagerie RSO." Phd thesis, Paris, Télécom ParisTech, 2010. https://pastel.hal.science/pastel-00565362.
The MRF framework in computer vision and image analysis is a powerful tool for solving complex problems using MAP estimation. However, some image processing problems deal with high-dimensional data and require non-convex MRF energy functions, so that the optimization process becomes a hard task. The first contribution of this thesis is the development of new efficient optimization algorithms for the class of first-order multi-label MRF energies with arbitrary likelihood terms and convex priors. The proposed algorithms rely on the graph-cut technique and on iterative strategies based on large and multi-label partition moves. These algorithms offer a trade-off between optimum quality and algorithm complexity. The main application of this work is digital elevation model (DEM) estimation using interferometric synthetic aperture radar (InSAR) data. This problem is usually considered a complex and ill-posed one, because of the high noise rate affecting interferograms and the complex structures characterizing real natural and urban areas. So, the second contribution of this work is the development of new MRF models relying on the multichannel interferometric likelihood density function and on total variation regularization. Appropriate optimization algorithms are then proposed. The new approach is applied to synthetic and real InSAR data, showing better performance than state-of-the-art techniques.
Goëffon, Adrien. "Nouvelles heuristiques de voisinage et mémétiques pour le problème Maximum de Parcimonie." Phd thesis, Université d'Angers, 2006. http://tel.archives-ouvertes.fr/tel-00256670.
In this thesis, we are first interested in improving resolution techniques for the MP problem based on a descent algorithm. After showing empirically the limits of existing neighborhoods, we introduce a progressive neighborhood that evolves during the search in order to limit the evaluation of unfruitful neighbors during a descent. The resulting algorithm is then hybridized with a genetic algorithm using a specific tree crossover based on distance measures between each pair of species in the tree. This memetic algorithm exhibits very competitive results, both on test sets drawn from the literature and on randomly generated instances.
Carquet, Marie. "Approches combinatoires pour la reconstruction d'une voie de biosynthèse chez la levure : variation des niveaux d'expression et analyse fonctionnelle d'une étape clé de la voie." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0024/document.
To optimize the production of a value-added compound while avoiding toxic consequences on cell viability, a challenge in the metabolic engineering field is to balance the endogenous metabolic fluxes and the newly consumed fluxes. In this optimization context, combinatorial strategies can generate several variants of synthetic metabolic pathways. This strategy gives precious strategic information on the right combinations of function and regulation choices to be made in the ultimate pathway reconstruction. Our study aimed at the production of the molecules responsible for the aroma, dye, and fragrance of saffron (Crocus sativus) in Saccharomyces cerevisiae. A combinatorial approach was chosen to modulate the expression levels of three genes involved in the biosynthesis of their common precursor, zeaxanthin. This strategy allowed us to describe some unexpected biases in the regulation of the expression levels of the plasmid-encoded genes. We detected strong transcriptional interference between the different genes in our system, and the nature of the ORF also seemed to influence the expression levels. These critical factors imposed a stronger regulation of the three genes' expression levels than the promoter strength initially chosen to control them. The project was continued toward its final objective by carrying out a detailed functional analysis of the zeaxanthin cleavage reaction leading to the synthesis of the molecules of interest. This reaction was described as being catalyzed by a specific enzyme, but no activity was observed in our experiments. This result led us to propose some tools to reach the final goal of the project.
Klaine, Luc. "Filtrage et restauration myopes des images numériques." Rennes 1, 2004. http://www.theses.fr/2004REN10161.
Galindo, Patricio A. "Image matching for 3D reconstruction using complementary optical and geometric information." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0007/document.
Image matching is a central research topic in computer vision which has mainly focused on optical aspects. The aim of the work presented herein is the direct use of geometry to complement optical information in 2D matching tasks. First, we focus on global methods based on the calculus of variations. In such methods, occlusions and sharp features raise difficult challenges; in these scenarios only the contribution of the regularizer accounts for the results. Based on a geometric characterization of this behaviour, we formulate a variational matching method that steers grid lines away from problematic regions. While variational methods provide well-behaved results, local methods based on match propagation provide results that adapt closely to varying 3D structures, although choppy in nature. Therefore, we present a novel method to propagate matches using local information about surface regularity, correcting 3D positions along with the corresponding 2D matches.
Soubies, Emmanuel. "Sur quelques problèmes de reconstruction en imagerie MA-TIRF et en optimisation parcimonieuse par relaxation continue exacte de critères pénalisés en norme-l0." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4082/document.
This thesis is devoted to two problems encountered in signal and image processing. The first one concerns the 3D reconstruction of biological structures from multi-angle total internal reflection fluorescence microscopy (MA-TIRF). Within this context, we propose to tackle the inverse problem by using a variational approach and we analyze the effect of the regularization. A set of simple experiments is then proposed to both calibrate the system and validate the model used. The proposed method has been shown to be able to reconstruct precisely a phantom sample of known geometry on a 400 nm depth layer, to co-localize two fluorescent molecules used to mark the same biological structures, and also to observe known biological phenomena, everything with an axial resolution of 20 nm. The second part of this thesis considers more precisely the l0 regularization and the minimization of the penalized least squares criterion (l2-l0) within the context of exact continuous relaxations of this functional. Firstly, we propose the Continuous Exact l0 (CEL0) penalty, leading to a relaxation of the l2-l0 functional which preserves its global minimizers and for which, from each local minimizer, we can define a local minimizer of l2-l0 by a simple thresholding. Moreover, we show that this relaxed functional eliminates some local minimizers of the initial functional. The minimization of this functional with nonsmooth nonconvex algorithms is then used in various applications, showing the interest of minimizing the relaxation in contrast to a direct minimization of the l2-l0 criterion. Finally we propose a unified view of continuous penalties of the literature within this exact problem reformulation framework.
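For reference, a standard baseline for the l2-l0 criterion mentioned above is iterative hard thresholding, sketched below; the CEL0 relaxation studied in the thesis replaces the l0 term by a continuous exact penalty, which this toy code does not implement (sizes and parameters are assumptions).

```python
import numpy as np

def iht_l0(A, y, lam, n_iter=300):
    """Iterative hard thresholding for  min_x 0.5*||Ax - y||^2 + lam*||x||_0."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    thresh = np.sqrt(2 * lam / L)            # hard-threshold level of prox of (lam/L)*||.||_0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.where(np.abs(w) > thresh, w, 0.0)   # hard thresholding
    return x
```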
Rani, Kaddour. "Stratégies d’optimisation des protocoles en scanographie pédiatrique." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0282/document.
For the last ten years, computed tomography (CT) procedures and their increased use have been a major source of concern in the scientific community. This concern has been the starting point for several studies aiming to optimize the dose while maintaining diagnostic image quality. In addition, it is important to pay special attention to dose levels for children (age range considered to be from a newborn baby to a 16-year-old patient). Indeed, children are more sensitive to ionizing radiation, and they have a longer life expectancy. Optimizing CT protocols is a very difficult process due to the complexity of the acquisition parameters, starting with the individual patient characteristics, and taking into account the available CT device and the required diagnostic image quality. This PhD project contributes to the advancement of knowledge by: (1) developing a new approach that can minimize the number of test CT examinations while developing a predictive mathematical model allowing radiologists to prospectively anticipate how changes in protocols will affect the image quality and the delivered dose, for four models of CT scanner; (2) setting up a Generic Optimized Protocol (GOP, based on the size of the CATPHAN 600 phantom) for four models of CT scanner; (3) developing a methodology to adapt the GOP to five sizes of pediatric patient using Size-Specific Dose Estimate (SSDE) calculation; (4) evaluating subjective and objective image quality between the size-based optimized CT protocol and age-based CT protocols; (5) developing a CT protocol optimization tool and a tutorial helping radiologists in the process of optimization.
Couprie, Camille. "Graph-based variational optimization and applications in computer vision." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00666878.
Full textEl, Gueddari Loubna. "Proximal structured sparsity regularization for online reconstruction in high-resolution accelerated Magnetic Resonance imaging." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS573.
Magnetic resonance imaging (MRI) is the reference medical imaging technique for probing soft tissues in the human body, notably the brain, in vivo and non-invasively. Improving MR image resolution within a standard scanning time (e.g., 400 µm isotropic in 15 min) would allow medical doctors to significantly improve both their diagnosis and patients' follow-up. However, the scanning time in MRI remains long, especially in the high-resolution context. To reduce this time, the recent Compressed Sensing (CS) theory has revolutionized the way data are acquired in several fields, including MRI, by overcoming the Shannon-Nyquist theorem. Using CS, data can be massively under-sampled while ensuring conditions for optimal image recovery. In this context, previous PhD theses in the laboratory were dedicated to the design and implementation of physically plausible acquisition scenarios to accelerate the scan. Those projects delivered a new optimization algorithm for the design of advanced non-Cartesian trajectories called SPARKLING (Spreading Projection Algorithm for Rapid K-space samplING). The generated SPARKLING trajectories led to acceleration factors up to 20 in 2D and 60 for 3D acquisitions on highly resolved T2*-weighted images acquired at 7 Tesla. Those accelerations were only accessible thanks to the high input signal-to-noise ratio delivered by the use of multi-channel reception coils. However, those results come at the price of a long and complex reconstruction. In this thesis, the objective is to propose an online approach for non-Cartesian multi-channel MR image reconstruction. To achieve this goal, we rely on an online approach where the reconstruction starts from incomplete data; acquisition and reconstruction are thus interleaved, and partial feedback is given during the scan. After exposing the Compressed Sensing theory, we present state-of-the-art methods dedicated to multi-channel coil reconstruction. In particular, we first focus on self-calibrating methods, which have the advantage of being adapted to non-Cartesian sampling, and we propose a simple yet efficient method to estimate the coil sensitivity profiles. However, owing to its dependence on user-defined parameters, this two-step approach (extraction of sensitivity maps and then image reconstruction) is not compatible with the timing constraints associated with online reconstruction. We then study calibration-less reconstruction methods and split them into two categories, k-space based and image-domain based. While the k-space calibration-less methods are sub-optimal for non-Cartesian reconstruction, due to the gridding procedure, we retain the image-domain calibration-less reconstruction and demonstrate its suitability for online purposes. Hence, in the second part, we first show the advantage of mixed norms in improving the recovery guarantees in the parallel MRI setting. We then study the impact of structured-sparsity-inducing norms on multi-channel reconstruction and adapt different penalties based on structured sparsity to handle these highly correlated images. Finally, the retained method is applied to the online setting. The entire pipeline is compatible with an implementation through the Gadgetron pipeline to deliver the reconstruction at the scanner console.
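The structured-sparsity penalties mentioned above are typically handled through their proximity operator inside a proximal algorithm; the sketch below shows the proximity operator of a simple group-l1 penalty (block soft-thresholding) on a toy vector, as one hypothetical example of such a penalty, not the specific norms retained in the thesis.

```python
import numpy as np

def prox_group_l1(x, groups, tau):
    """Proximity operator of  tau * sum_g ||x_g||_2  (block soft-thresholding).
    groups: list of index arrays partitioning the coefficient vector."""
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= tau else (1 - tau / norm) * x[g]
    return out

x = np.array([0.1, -0.2, 3.0, 4.0])
print(prox_group_l1(x, [np.array([0, 1]), np.array([2, 3])], tau=1.0))
# first group is zeroed, second group is shrunk towards zero
```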
Mourya, Rahul Kumar. "Contributions to image restoration : from numerical optimization strategies to blind deconvolution and shift-variant deblurring." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES005/document.
Degradation of images during the acquisition process is inevitable; images suffer from blur and noise. With advances in technologies and computational tools, the degradations in the images can be avoided or corrected up to a significant level; however, the quality of acquired images is still not adequate for many applications. This calls for the development of more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. The thesis is divided into five chapters, each including a detailed discussion on different aspects of image restoration. It starts with a generic overview of imaging systems, and points out the possible degradations occurring in images together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view, and then it can be simply modeled as a convolution. However, in many practical cases the blur varies throughout the field of view, and thus modeling the blur is not simple considering the accuracy and the computational effort. The first part of this thesis presents a detailed discussion on the modeling of shift-variant blur and its fast approximations, and then describes a generic image formation model. Subsequently, the thesis shows how an image restoration problem can be seen as a Bayesian inference problem, and then how it turns into a large-scale numerical optimization problem. Thus, the second part of the thesis considers a generic optimization problem that is applicable to many domains, and then proposes a class of new optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as the state-of-the-art algorithms (verified by several numerical experiments), but without any hassle of parameter tuning, which is a great relief for users. The third part of the thesis presents an in-depth discussion of the shift-invariant blind image deblurring problem, suggesting different ways to reduce the ill-posedness of the problem, and then proposes a blind image deblurring method using an image decomposition for the restoration of astronomical images. The proposed method is based on an alternating estimation approach. The restoration results on synthetic astronomical scenes are promising, suggesting that the proposed method is a good candidate for astronomical applications after certain modifications and improvements. The last part of the thesis extends the ideas of the shift-variant blur model presented in the first part. This part gives a detailed description of a flexible approximation of shift-variant blur with its implementation aspects and computational cost. It presents a shift-variant image deblurring method with some illustrations on synthetically blurred images, and then shows how the characteristics of shift-variant blur due to optical aberrations can be exploited by PSF estimation methods. This part describes a PSF calibration method for a simple experimental camera suffering from optical aberration, and then shows results on shift-variant deblurring of the images captured by the same experimental camera. The results are promising, and suggest that the two steps can be combined to achieve shift-variant blind image deblurring, the long-term goal of this thesis. The thesis ends with conclusions and suggestions for future work in continuation of the current work.
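One common fast approximation of shift-variant blur, in the spirit of what the first part discusses, blends a few convolutions with local PSFs using smooth interpolation weights; the sketch below illustrates that idea under assumed inputs (it is not the specific approximation developed in the thesis).

```python
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(img, psfs, weights):
    """Approximate a shift-variant blur as a weighted sum of shift-invariant
    convolutions: out = sum_k w_k * (img * psf_k), where the per-pixel weight
    maps w_k interpolate between the local PSFs and sum to one at each pixel."""
    out = np.zeros_like(img, dtype=float)
    for psf, w in zip(psfs, weights):
        out += w * fftconvolve(img, psf, mode='same')
    return out

# Toy usage: two normalized Gaussian-like PSFs blended left-to-right.
img = np.random.default_rng(0).random((128, 128))
x = np.linspace(0, 1, 128)[None, :] * np.ones((128, 1))
psf_a = np.outer(*(2 * [np.exp(-np.linspace(-2, 2, 9) ** 2)]))
psf_b = np.outer(*(2 * [np.exp(-np.linspace(-1, 1, 9) ** 2)]))
psf_a /= psf_a.sum(); psf_b /= psf_b.sum()
blurred = shift_variant_blur(img, [psf_a, psf_b], [1 - x, x])
```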
Labat, Christian. "Algorithmes d'optimisation de critères pénalisés pour la restauration d'images : application à la déconvolution de trains d'impulsions en imagerie ultrasonore." Phd thesis, Ecole centrale de nantes - ECN, 2006. http://tel.archives-ouvertes.fr/tel-00132861.
Full text
- Prove the convergence of the approximate SQ algorithms and of GCNL+SQ1D.
- Establish strong links between the approximate SQ algorithms and GCNL+SQ1D.
- Demonstrate experimentally, on image deconvolution problems, that the approximate SQ and GCNL+SQ1D algorithms are preferable to exact SQ algorithms.
- Apply the penalized approach to an image deconvolution problem arising in ultrasonic non-destructive testing.
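The SQ (semi-quadratic, also called half-quadratic) iterations referred to in the list above alternate between updating auxiliary weights and solving a quadratic subproblem. The following sketch shows a generic half-quadratic iteration for a penalized deconvolution criterion with a hyperbolic penalty; it is only a minimal illustration of this family of algorithms, not the approximate SQ or GCNL+SQ1D schemes studied in the thesis, and the operator H, the parameters and the toy data are assumptions.

```python
# Minimize J(x) = 0.5||Hx - y||^2 + lam * sum_i sqrt((Dx)_i^2 + delta^2)
# by alternating SQ weight updates and weighted least-squares solves.
import numpy as np

def sq_deconvolution(H, y, lam=0.1, delta=1e-2, n_iter=50):
    n = H.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]    # first-order difference operator
    x = H.T @ y                              # crude initialization
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ x) ** 2 + delta ** 2)   # SQ auxiliary weights
        A = H.T @ H + lam * D.T @ (w[:, None] * D)     # majorant normal equations
        x = np.linalg.solve(A, H.T @ y)
    return x

# Toy usage: blur a sparse spike train with a short kernel and recover it.
rng = np.random.default_rng(0)
n, h = 100, np.array([0.25, 0.5, 1.0, 0.5, 0.25])
H = np.array([np.convolve(np.eye(n)[i], h, mode="same") for i in range(n)]).T
x_true = np.zeros(n); x_true[[20, 50, 75]] = [1.0, -0.7, 0.5]
y = H @ x_true + 0.01 * rng.standard_normal(n)
x_hat = sq_deconvolution(H, y, lam=0.05)
```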
Repetti, Audrey. "Algorithmes d'optimisation en grande dimension : applications à la résolution de problèmes inverses." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1032/document.
Full text
An efficient approach for solving an inverse problem is to define the recovered signal/image as a minimizer of a penalized criterion, which is often split into a sum of simpler functions composed with linear operators. In the situations of practical interest, these functions may be neither convex nor smooth. In addition, large-scale optimization problems often have to be faced. This thesis is devoted to the design of new methods to solve such difficult minimization problems, while paying attention to computational issues and theoretical convergence properties. A first idea to build fast minimization algorithms is to make use of a preconditioning strategy by adapting, at each iteration, the underlying metric. We incorporate this technique in the forward-backward algorithm and provide an automatic method for choosing the preconditioning matrices, based on a majorization-minimization principle. The convergence proofs rely on the Kurdyka-Łojasiewicz inequality. A second strategy consists of splitting the involved data into different blocks of reduced dimension. This approach allows us to control the number of operations performed at each iteration of the algorithms, as well as the required memory. For this purpose, block alternating methods are developed in the context of both non-convex and convex optimization problems. In the non-convex case, a block alternating version of the preconditioned forward-backward algorithm is proposed, where the blocks are updated according to an acyclic deterministic rule. When additional convexity assumptions can be made, various alternating proximal primal-dual algorithms are obtained by using an arbitrary random sweeping rule. The theoretical analysis of these stochastic convex optimization algorithms is grounded in the theory of monotone operators. A key ingredient in the solution of high-dimensional optimization problems lies in the possibility of performing some of the computation steps in parallel. This parallelization is made possible in the proposed block alternating primal-dual methods, where the primal variables, as well as the dual ones, can be updated in a quite flexible way. As a by-product of these results, new distributed algorithms are derived, where the computations are spread over a set of agents connected through a general hypergraph topology. Finally, our methodological contributions are validated on a number of applications in signal and image processing. First, we focus on optimization problems involving non-convex criteria, in particular image restoration when the original image is corrupted by signal-dependent Gaussian noise, spectral unmixing, phase reconstruction in tomography, and blind deconvolution for seismic sparse signal reconstruction. Then, we address convex minimization problems arising in the context of 3D mesh denoising and in query optimization for database management.
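To give a concrete flavor of the variable-metric (preconditioned) forward-backward strategy mentioned above, the sketch below applies a diagonally preconditioned proximal gradient iteration to a least-squares plus l1 problem. It is a simplified illustration under stated assumptions (a fixed diagonal majorant of AᵀA obtained from a Gershgorin-type bound), not the majorization-minimization metric construction developed in the thesis.

```python
import numpy as np

def preconditioned_forward_backward(A, y, lam=0.1, n_iter=200):
    """Diagonally preconditioned forward-backward for 0.5||Ax - y||^2 + lam*||x||_1."""
    # Diagonal majorant of A^T A (Gershgorin-type bound), used as a fixed metric.
    d = np.abs(A.T @ A).sum(axis=1) + 1e-12
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                              # forward (gradient) step
        z = x - grad / d
        x = np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)  # prox of lam*||.||_1 in metric diag(d)
    return x

# Toy usage on a random sparse recovery problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, size=8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = preconditioned_forward_backward(A, y, lam=0.05)
```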
Jezierska, Anna Maria. "Image restoration in the presence of Poisson-Gaussian noise." Phd thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-00906718.
Full text
Kaaniche, Mounir. "Schémas de lifting vectoriels adaptatifs et applications à la compression d'images stéréoscopiques." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00631357.
Full text
Chaux, Caroline. "Analyse en ondelettes M-bandes en arbre dual : application à la restauration d'images." Phd thesis, Université de Marne la Vallée, 2006. http://tel.archives-ouvertes.fr/tel-00714292.
Full text
Carlavan, Mikaël. "Optimization of the compression/restoration chain for satellite images." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00847182.
Full textDuan, Liuyun. "Modélisation géométrique de scènes urbaines par imagerie satellitaire." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4025.
Full text
Automatic city modeling from satellite imagery is one of the biggest challenges in urban reconstruction. The ultimate goal is to produce compact and accurate 3D city models that benefit many application fields such as urban planning, telecommunications and disaster management. Compared with aerial acquisition, satellite imagery provides appealing advantages such as low acquisition cost, worldwide coverage and high collection frequency. However, the satellite context also imposes a set of technical constraints, such as a lower pixel resolution and a wider field of view, that challenge 3D city reconstruction. In this PhD thesis, we present a set of methodological tools for generating compact, semantically-aware and geometrically accurate 3D city models from stereo pairs of satellite images. The proposed pipeline relies on two key ingredients. First, geometry and semantics are retrieved simultaneously, providing robust handling of occlusion areas and low image quality. Second, it operates at the scale of geometric atomic regions, which allows the shape of urban objects to be well preserved, with a gain in scalability and efficiency. Images are first decomposed into convex polygons that capture geometric details via a Voronoi diagram. Semantic classes, elevations, and 3D geometric shapes are then retrieved in a joint classification and reconstruction process operating on polygons. Experimental results on various cities around the world show the robustness, scalability and efficiency of the proposed approach.
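The decomposition of the image into convex polygonal cells can be pictured with a discrete Voronoi partition of the pixel grid. The sketch below is a minimal illustration of this atomic-region idea with randomly placed seeds; the thesis places seeds according to the image geometry, which this sketch does not attempt to reproduce.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_partition(height, width, n_seeds=200, rng=None):
    """Label each pixel with the index of its nearest seed (discrete Voronoi cell)."""
    rng = np.random.default_rng(rng)
    seeds = rng.uniform([0, 0], [height, width], size=(n_seeds, 2))
    ys, xs = np.mgrid[0:height, 0:width]
    pixels = np.column_stack([ys.ravel(), xs.ravel()])
    _, labels = cKDTree(seeds).query(pixels)   # nearest seed = Voronoi cell id
    return labels.reshape(height, width)

# Toy usage: a 256x256 label map made of 200 convex (Voronoi) atomic regions.
labels = voronoi_partition(256, 256, n_seeds=200, rng=0)
```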
Glaudin, Lilian. "Stratégies multicouche, avec mémoire, et à métrique variable en méthodes de point fixe pour l'éclatement d'opérateurs monotones et l'optimisation." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS119.
Full text
Several apparently unrelated strategies coexist to implement algorithms for solving monotone inclusions in Hilbert spaces. We propose a synthetic framework for fixed point construction which makes it possible to capture various algorithmic approaches, clarify and generalize their asymptotic behavior, and design new iterative schemes for nonlinear analysis and convex optimization. Our methodology, which is anchored on an averaged quasinonexpansive operator composition model, allows us to advance the theory of fixed point algorithms on several fronts, and to impact their application fields. Numerical examples are provided in the context of image restoration, where we propose a new viewpoint on the formulation of variational problems.
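The averaged-operator fixed-point viewpoint described above can be illustrated with a Krasnosel'skii–Mann iteration, x ← x + λ(Tx − x), applied to a nonexpansive operator T. In the minimal sketch below, T is the composition of two projections onto convex sets, so that its fixed points lie in their intersection; the example is purely illustrative and does not come from the thesis.

```python
import numpy as np

def project_ball(x, center, radius):
    """Projection onto the Euclidean ball {z : ||z - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    """Projection onto the half-space {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def krasnoselskii_mann(x0, T, lam=0.5, n_iter=200):
    """Averaged fixed-point iteration x <- x + lam * (T(x) - x)."""
    x = x0
    for _ in range(n_iter):
        x = x + lam * (T(x) - x)
    return x

# Toy usage: find a point in the intersection of a unit ball and a half-space.
center, radius = np.zeros(2), 1.0
a, b = np.array([1.0, 1.0]), 1.8
T = lambda x: project_halfspace(project_ball(x, center, radius), a, b)
x_star = krasnoselskii_mann(np.array([3.0, -2.0]), T)
```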