Dissertations on the topic "Rehaussement des images"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Familiarize yourself with the top 16 dissertations for research on the topic "Rehaussement des images".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference of the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Cherifi, Daikha. „Amélioration et évaluation de la qualité des images“. Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132053.
The aim of this thesis is to propose new methods for image enhancement based on oriented and multi-scale transforms using some perceptual criteria. The first part of the thesis is devoted to the development of a simple and efficient contrast enhancement method inspired by the human visual system. This method is evaluated on a set of natural color and monochrome images. The obtained results are evaluated subjectively and by using objective measures based on energy spectrum analysis and perceptual criteria. The enhancement technique is also extended to some medical images, such as mammography and endoscopy images. A special contrast enhancement method adapted to mammography is then proposed. It is based on a segmentation process using a priori information on the mammography images. The last part of the thesis is devoted to image enhancement evaluation. A critical literature survey of image enhancement evaluation methods is provided. The evaluation method proposed in this thesis is based on the radial and angular analysis of the Fourier power spectrum. Another perceptual approach is proposed to evaluate the output. This method is based on the analysis of the visibility map computed by using a pyramidal contrast. The evaluation is performed on some samples taken from two databases. Both subjective and objective evaluations demonstrate the efficiency of the proposed image enhancement methods.
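The radial power-spectrum analysis used for evaluation can be illustrated compactly: enhancement that adds local contrast should raise the energy carried by mid-to-high radial frequency bands. Below is a minimal sketch with an assumed bin count and a toy contrast stretch for demonstration; it is not the thesis's evaluation protocol.

```python
# Sketch: radially averaged Fourier power spectrum as an enhancement measure.
import numpy as np

def radial_power_profile(image, n_bins=64):
    """Average the 2D power spectrum over rings of constant radial frequency."""
    img = np.asarray(image, dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)                      # radial frequency
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    profile = np.bincount(bins.ravel(), weights=spectrum.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return profile / np.maximum(counts, 1)

# A naive contrast stretch should raise the non-DC part of the profile.
rng = np.random.default_rng(0)
original = rng.random((128, 128))
enhanced = (original - original.mean()) * 1.5 + original.mean()
print(radial_power_profile(enhanced)[8:16] / radial_power_profile(original)[8:16])
```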
Thaibaoui, Abdelhakim. „Segmentation floue d'images fondée sur une méthode optimale de rehaussement de contraste : application à la détection des drusen sur des images d'angiographie rétinienne“. Paris 12, 1999. http://www.theses.fr/1999PA120062.
Irrera, Paolo. „Traitement d'images de radiographie à faible dose : Débruitage et rehaussement de contraste conjoints et détection automatique de points de repère anatomiques pour l'estimation de la qualité des images“. Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0031/document.
We aim at reducing the ALARA (As Low As Reasonably Achievable) dose limits for images acquired with the EOS full-body system by means of image processing techniques. Two complementary approaches are studied. First, we define a post-processing method that optimizes the trade-off between acquired image quality and X-ray dose. The Non-Local Means filter is extended to restore EOS images. We then study how to combine it with a multi-scale contrast enhancement technique. The image quality for the diagnosis is optimized by defining non-parametric noise containment maps that limit the increase of noise depending on the amount of local redundant information captured by the filter. Secondly, we estimate exposure index (EI) values on EOS images, which give immediate feedback on image quality and help radiographers verify the correct exposure level of the X-ray examination. We propose a landmark detection based approach that is more robust to potential outliers than existing methods, as it exploits the redundancy of local estimates. Finally, the proposed joint denoising and contrast enhancement technique significantly increases the image quality with respect to an algorithm used in clinical routine. Robust image quality indicators can be automatically associated with clinical EOS images. Given the consistency of the measures assessed on preview images, these indices could be used to drive an exposure management system in charge of defining the optimal radiation exposure.
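The Non-Local Means principle that the thesis extends can be summarized in a few lines: every pixel is replaced by a weighted average of pixels whose surrounding patches look similar. The toy sketch below uses naive loops and generic parameters; it is not the EOS-specific extension.

```python
# Toy Non-Local Means denoiser (illustrative of the general NLM principle).
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Weight each candidate pixel by the similarity of its surrounding patch."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)       # patch distance
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
print(float(np.mean((nlm_denoise(noisy) - clean) ** 2)),
      float(np.mean((noisy - clean) ** 2)))   # denoised MSE should be lower
```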
Wang, Qian. „Processing and exploration of CT images for the assessment of aortic valve bioprostheses“. PhD thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00925743.
Goswami, Abhishek. „Content-aware HDR tone mapping algorithms“. Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG013.
The ratio between the brightest and the darkest luminance intensity in High Dynamic Range (HDR) images is larger than the rendering capability of the output media. Tone mapping operators (TMOs) compress the HDR image while preserving the perceptual cues, thereby modifying the subjective aesthetic quality. Age-old painting and photography techniques of manual exposure correction have inspired much research on TMOs. However, unlike the manual retouching process based on the semantic content of the image, TMOs in the literature have mostly relied upon photographic rules or adaptation principles of human vision to aim for the 'best' aesthetic quality, a goal that is ill-posed due to its subjectivity. Our work reformulates the challenges of tone mapping by stepping into the shoes of a photographer, following photographic principles, image statistics and their local retouching recipe to achieve the tonal adjustments. In this thesis, we present two semantic-aware TMOs: a traditional SemanticTMO and a deep-learning-based GSemTMO. Our novel TMOs explicitly use semantic information in the tone mapping pipeline. GSemTMO is the first instance of graph convolutional networks (GCN) being used for aesthetic image enhancement. We show that graph-based learning can leverage the spatial arrangement of semantic segments like the local masks made by experts. It builds scene understanding from semantic-specific image statistics and predicts a dynamic local tone mapping. Comparing our results to traditional and modern deep-learning-based TMOs, we show that GSemTMO can emulate an expert's recipe and reach closer to reference aesthetic styles than the state-of-the-art methods.
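For readers unfamiliar with tone mapping, the sketch below shows a classical global TMO in the spirit of Reinhard et al.'s photographic operator, i.e., the kind of non-semantic baseline the thesis improves upon. All constants are illustrative.

```python
# Sketch: global photographic tone mapping (log-average anchored compression).
import numpy as np

def reinhard_global_tmo(hdr_luminance, key=0.18, eps=1e-6):
    """Map HDR luminance to [0, 1) with a compressive curve."""
    log_avg = np.exp(np.mean(np.log(hdr_luminance + eps)))  # scene "key"
    scaled = key / log_avg * hdr_luminance                  # expose to mid-grey
    return scaled / (1.0 + scaled)                          # compress highlights

rng = np.random.default_rng(2)
hdr = np.exp(rng.uniform(np.log(1e-2), np.log(1e4), size=(64, 64)))  # ~6 decades
ldr = reinhard_global_tmo(hdr)
print(float(ldr.min()), float(ldr.max()))   # compressed into display range
```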
Ludusan, Cosmin. „De la restauration d'image au rehaussement : formalisme EDP pour la fusion d'images bruitées“. Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14400/document.
This thesis addresses key issues of current image restoration and enhancement methodology and, through a progressive approach, introduces two new image processing paradigms, i.e., concurrent image deblurring and denoising with coherence enhancement, and joint image fusion and denoising, defined within a Partial Differential Equation (PDE)-variational theoretical setting. The first image processing paradigm represents an intermediary step in validating and testing the concept of compound image restoration and enhancement, while the second proposition, i.e., the joint fusion-denoising model, fully illustrates the advantages of using concurrent image processing as opposed to sequential approaches. Both propositions are theoretically formalized and experimentally analyzed and compared with similar existing methodology, thus proving their validity and emphasizing their characteristics and advantages when considered as an alternative to a sequential image processing chain.
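The joint fusion-denoising idea can be caricatured as a single evolution equation combining a diffusion term with a term attracting the solution toward a fusion of the sources. The explicit scheme below (Perona-Malik diffusivity, mean-based fusion term, all parameters assumed) is a schematic stand-in for the thesis's variational model, not its actual functional.

```python
# Sketch: one PDE evolution mixing anisotropic diffusion and source fusion.
import numpy as np

def joint_fusion_denoise(sources, n_iter=50, dt=0.15, kappa=0.05, lam=0.3):
    u = np.mean(sources, axis=0)              # naive initial fusion
    target = np.mean(sources, axis=0)         # fusion attachment term
    for _ in range(n_iter):
        # forward differences with replicated borders
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        g = 1.0 / (1.0 + (ux**2 + uy**2) / kappa**2)   # Perona-Malik diffusivity
        # divergence of g * grad(u) via backward differences
        div = (np.diff(g * ux, axis=1, prepend=(g * ux)[:, :1])
               + np.diff(g * uy, axis=0, prepend=(g * uy)[:1, :]))
        u = u + dt * (div + lam * (target - u))
    return u

rng = np.random.default_rng(3)
clean = np.zeros((48, 48)); clean[16:32, 16:32] = 1.0
sources = np.stack([clean + 0.1 * rng.standard_normal(clean.shape) for _ in range(3)])
fused = joint_fusion_denoise(sources)
print(float(np.mean((fused - clean) ** 2)),
      float(np.mean((sources[0] - clean) ** 2)))  # fused output is cleaner
```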
Hessel, Charles. „La décomposition automatique d'une image en base et détail : Application au rehaussement de contraste“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLN017/document.
In this CIFRE thesis, a collaboration between the Center of Mathematics and their Applications (École Normale Supérieure de Cachan) and the company DxO, we tackle the problem of the additive decomposition of an image into base and detail. Such a decomposition is a fundamental tool in image processing. For applications to professional photo editing in DxO PhotoLab, a core requirement is the absence of artifacts. For instance, in the context of contrast enhancement, in which the base is reduced and the detail increased, minor artifacts become highly visible. The distortions thus introduced are unacceptable from the point of view of a photographer. The objective of this thesis is to single out and study the most suitable filters to perform this task, to improve the best ones, and to define new ones. This requires a rigorous measure of the quality of the base plus detail decomposition. We examine two classic artifacts (halo and staircasing) and discover three more sorts that are equally crucial: the contrast halo, compartmentalization, and the dark halo. This leads us to construct five adapted patterns to measure these artifacts. We end up ranking the filters based on these measurements and arrive at a clear decision about the best ones. Two filters stand out, including one that we propose.
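The base plus detail pipeline itself is simple, as the sketch below shows; the point of the thesis is that the choice of smoothing filter matters. A plain Gaussian is used here deliberately, since it is exactly the kind of filter that produces the halos the thesis measures. Gains and sigma are illustrative assumptions.

```python
# Sketch: base + detail decomposition used for contrast enhancement.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_contrast(img, sigma=4.0, base_gain=0.6, detail_gain=2.0):
    base = gaussian_filter(img, sigma)        # smooth layer (large-scale tones)
    detail = img - base                       # residual layer (local texture)
    out = base_gain * base + detail_gain * detail
    return np.clip(out + (img.mean() - out.mean()), 0.0, 1.0)  # re-center, clamp

rng = np.random.default_rng(4)
img = np.clip(np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
              + 0.02 * rng.standard_normal((64, 64)), 0, 1)
print(float(enhance_contrast(img).std()), float(img.std()))  # detail amplified
```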
Lavialle, Olivier. „Diffusion et fusion directionnelles pour le lissage et le rehaussement de structures fortement orientées“. Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2007. http://tel.archives-ouvertes.fr/tel-00181793.
The presentation is organized in three chapters:
In the first chapter, I present nonlinear and adaptive methods that induce only small topological modifications of the useful signal. Image enhancement is obtained through original approaches that accentuate, uniformly and independently, local contrast variations and one-dimensional structures, with or without edge enhancement. In particular, we present two types of approaches, one scalar and the other tensorial.
This work was initiated during the thesis of Romulus Terebes and has led to two journal publications and nine conference papers.
In the second chapter, we adapt these methods to the specific characteristics of seismic imaging, reporting the work developed in the theses of Régis Dargent and Sorin Pop. Beyond the extension to the 3D case, the problem addressed led to the development of structuralist approaches that take into account certain a priori information, such as the geometry of faults. Here again, we present two approaches: the first pertains as much to adaptive filtering as to anisotropic diffusion, while the second is entirely inspired by a tensorial approach.
The work on directional diffusion for seismic imaging has led to two publications and four conference papers.
Finally, the third part presents an original extension of PDEs: within the framework of Sorin Pop's thesis, we propose a formulation based on Partial Differential Equations to carry out an image fusion procedure and a diffusion procedure jointly. Starting from several noisy sources, this approach produces a fused and smoothed output. This work, which is still under development, concerns both image fusion applications classically encountered in the literature and a very particular, fairly new 3D application: azimuthal seismic imaging. We outline its main principles.
In 2007, this work gave rise to communications at three international conferences and to one accepted article; another article is in preparation.
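As generic background for the tensorial approaches mentioned above: coherence-enhancing (tensorial) diffusion steers smoothing along the dominant local orientation estimated from the structure tensor. Below is a minimal sketch of that orientation analysis; the function and parameters are illustrative assumptions, not the methods developed in these theses.

```python
# Sketch: structure-tensor orientation and coherence for oriented structures.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_orientation(img, sigma=2.0):
    """Return per-pixel orientation (radians) and coherence in [0, 1]."""
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(ix * ix, sigma)     # smoothed tensor components
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)        # dominant direction
    lam1 = 0.5 * (jxx + jyy + np.hypot(jxx - jyy, 2 * jxy))
    lam2 = 0.5 * (jxx + jyy - np.hypot(jxx - jyy, 2 * jxy))
    coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-12)   # 1 = strongly oriented
    return theta, coherence

yy, xx = np.indices((64, 64))
stripes = np.sin(0.5 * (xx + yy))             # diagonally oriented structures
theta, coh = local_orientation(stripes)
print(float(np.median(coh)))                  # close to 1 on oriented texture
```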
Li, Gengxiang. „Rehaussement et détection des attributs sismiques 3D par techniques avancées d'analyse d'images“. PhD thesis, Université Michel de Montaigne - Bordeaux III, 2012. http://tel.archives-ouvertes.fr/tel-00731886.
Madi, Abdeldjalil. „Contrôle automatique des conditions d'affichage d'une image par un projecteur“. Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6592.
Bismuth, Vincent. „Image processing algorithms for the visualization of interventional devices in X-ray fluoroscopy“. Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1062/document.
Stent implantation is the most common treatment of coronary heart disease, one of the major causes of death worldwide. During a stenting procedure, the clinician inserts interventional devices inside the patient's vasculature. The navigation of the devices inside the patient's anatomy is monitored in real time under X-ray fluoroscopy. Three specific interventional devices play a key role in this procedure: the guide-wire, the angioplasty balloon and the stent. The guide-wire appears in the images as a thin curvilinear structure. The angioplasty balloon, which has two characteristic marker balls at its extremities, is mounted on the guide-wire. The stent is a 3D metallic mesh whose appearance is complex in the fluoroscopic images. Stents are barely visible, but the proper assessment of their deployment is key to the procedure. The objective of the work presented in this thesis is twofold. On the one hand, we aim at designing, studying and validating image processing techniques that improve the visualization of stents. On the other hand, we study the processing of curvilinear structures (like guide-wires), for which we propose a new image processing technique. We present algorithms dedicated to the 2D and 3D visualization of stents. Since the stent is hardly visible, we do not intend to locate it directly by image processing means. The position and motion of the stent are inferred from the location of two landmarks, the angioplasty balloon and the guide-wire, which have characteristic shapes. To this aim, we perform automated detection, tracking and registration of these landmarks. The cornerstone of our 2D stent visualization enhancement technique is the use of the landmarks to perform motion-compensated noise reduction. We evaluated the performance of this technique for 2D stent visualization over a large database of clinical data (nearly 200 cases). The results demonstrate that our method outperforms previous state-of-the-art techniques in terms of image quality. A comprehensive validation confirmed that we reached the level of performance required for the commercial introduction of our algorithm. It is currently deployed in a large number of clinical sites worldwide. The 3D stent visualization that we propose uses the landmarks to achieve motion-compensated tomographic reconstruction. We show preliminary results over 22 clinical cases. Our method seems to outperform previous state-of-the-art techniques both in terms of automation and image quality. The previous stent visualization methods involve the segmentation of the part of the guide-wire extending through the stent. We propose a generic tool to process such curvilinear structures, which we call the Polygonal Path Image (PPI). The PPI relies on the concept of locally optimal paths. One of its main advantages is that it unifies the concepts of several previous state-of-the-art techniques in a single formalism. Moreover, the PPI enables control over the smoothness and the length of the structures to segment. Its parametrization is simple and intuitive. In order to fully benefit from the PPI, we propose an efficient scheme to compute it. We demonstrate its applicability to the task of automated guide-wire segmentation, for which it outperforms previous state-of-the-art techniques.
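The core of the 2D enhancement, motion-compensated noise reduction, can be sketched as follows: shift each frame so a tracked landmark stays fixed, then average. Landmark positions are taken as given here; their robust detection and tracking is the actual contribution of the thesis, and the shift-and-average scheme below is an assumed simplification.

```python
# Sketch: landmark-based motion-compensated temporal averaging.
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def motion_compensated_average(frames, landmarks):
    """frames: (T, H, W); landmarks: (T, 2) row/col of the tracked marker."""
    ref = np.asarray(landmarks[0], dtype=np.float64)
    aligned = [subpixel_shift(f, ref - np.asarray(p, dtype=np.float64), order=1)
               for f, p in zip(frames, landmarks)]
    return np.mean(aligned, axis=0)           # noise drops roughly as 1/sqrt(T)

rng = np.random.default_rng(5)
T, H, W = 8, 48, 48
frames, marks = [], []
for t in range(T):
    img = np.zeros((H, W)); r, c = 20 + t, 12 + 2 * t   # moving bright "device"
    img[r, c] = 1.0
    frames.append(img + 0.2 * rng.standard_normal((H, W)))
    marks.append((r, c))
avg = motion_compensated_average(np.stack(frames), np.array(marks))
print(float(avg[20, 12]), float(avg.std()))   # device preserved, noise reduced
```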
Louis, Jean-Sébastien. „Développements en IRM quantitative de perfusion pour le diagnostic de fibrose myocardique“. Thesis, Université de Lorraine, 2020. http://www.theses.fr/2020LORR0061.
Heart failure represents a major public health issue in the Western world. It is a complex syndrome that can be the cause and/or the consequence of underlying pathologies such as diffuse interstitial fibrosis. Magnetic Resonance Imaging (MRI) is the reference imaging modality for soft tissue assessment, and especially the myocardium. Several imaging biomarkers, such as the relaxation time T1 or the extracellular volume fraction (ECV), have proven their diagnostic power in terms of sensitivity and specificity. MRI with contrast agent injection has also demonstrated its usefulness in the diagnosis of post-infarct local fibrosis, for instance. Dynamic contrast enhancement (DCE) is widely investigated for its supposed ability to discriminate areas whose perfusion/permeability properties have been altered by fibrosis deposition. We hypothesized that the quantification of myocardial permeability and the estimation of the extravascular extracellular volume fraction Ve could lead to a better detection of diffuse fibrosis. Consequently, we investigated the possibility of a shorter protocol for the evaluation of ECV. In this manuscript, we first present the methodological developments that allow the quantitative analysis of DCE cardiac MRI. This implied the development of a post-processing method for Arterial Input Function reconstruction, allowing DCE quantification without the need for specific sequences or protocols. A post-processing algorithm for perfusion image registration has been developed for pixel-wise parametric map reconstruction. Data acquisition has been simulated in a Monte Carlo fashion in order to assess the impact of acquisition strategies on parameter accuracy. This eventually led to the design of the shortest possible imaging strategy for Ve quantification. Secondly, clinical results obtained with our quantitative DCE analysis framework have been compared with those obtained with the classical ECV method for diffuse fibrosis detection. A correlation between these two parameters was found in a group of 12 patients presenting mitral valve prolapse. A permutation test on the Ve distribution allowed us to show a significant difference between the two groups, in the same way the ECV values did. The presented work describes a full quantitative DCE analysis framework that could enable a shorter imaging protocol for extravascular extracellular volume estimation in the diagnosis of diffuse myocardial fibrosis.
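The abstract does not spell out the quantification model. As generic background only, DCE parameters such as Ktrans and Ve are commonly related to the arterial input function (AIF) through the standard Tofts one-compartment model, sketched below; this is an assumption for illustration, not necessarily the model used in the thesis.

```python
# Sketch: standard Tofts model, Ct(t) = Ktrans * Ca(t) (*) exp(-Ktrans*t/Ve).
import numpy as np

def tofts_tissue_curve(t, aif, ktrans, ve):
    """Discrete convolution of the AIF with the Tofts impulse response."""
    dt = t[1] - t[0]
    irf = ktrans * np.exp(-ktrans * t / ve)   # impulse response of the tissue
    return np.convolve(aif, irf)[: len(t)] * dt

t = np.linspace(0, 5, 300)                    # minutes
aif = 5.0 * (t ** 2) * np.exp(-t / 0.35)      # toy gamma-variate AIF (assumed)
ct = tofts_tissue_curve(t, aif, ktrans=0.25, ve=0.30)  # /min, volume fraction
print(float(ct.max()))   # higher Ktrans raises uptake; higher Ve slows washout
```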
Tankyevych, Olena. „Filtering of thin objects : applications to vascular image analysis“. PhD thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00607248.
Pierre, Fabien. „Méthodes variationnelles pour la colorisation d’images, de vidéos, et la correction des couleurs“. Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0250/document.
This thesis deals with problems related to color. In particular, we are interested in problems which arise in image and video colorization and contrast enhancement. When considering color images as composed of two complementary pieces of information, one achromatic (without color) and the other chromatic (in color), the applications studied in this thesis are based on processing one of these components while preserving its complement. In colorization, the challenge is to compute a color image while constraining its gray-scale channel. Contrast enhancement aims to modify the intensity channel of an image while preserving its hue. These joint problems require a formal study of the geometry of the RGB space. In this work, it is shown that the classical color spaces of the literature designed to solve these classes of problems lead to errors. A novel algorithm, called luminance-hue specification, which computes a color with a given hue and luminance, is described in this thesis. The extension of this method to a variational framework is also proposed. This model has been used successfully to enhance color images, using well-known assumptions about the human visual system. The state-of-the-art methods for image colorization fall into two categories. The first category includes those that diffuse color scribbles drawn by the user (manual colorization). The second consists of those that benefit from a reference color image or a base of reference images to transfer the colors from the reference to the grayscale image (exemplar-based colorization). Both approaches have their advantages and drawbacks. In this thesis, we design a variational model for exemplar-based colorization, which is extended to a method unifying the manual and exemplar-based approaches. Finally, we describe two variational models to colorize videos in interaction with the user.
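The geometric difficulty that luminance-hue specification addresses can be seen with a naive baseline: rescaling RGB preserves hue but fails near the gamut boundary. The sketch below (with assumed Rec. 601 luminance weights and crude clipping) shows the failure mode; it is not the thesis's algorithm.

```python
# Sketch: naive luminance targeting with hue-preserving RGB rescaling.
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])        # Rec. 601 weights (assumed)

def rescale_to_luminance(rgb, y_target):
    rgb = np.asarray(rgb, dtype=np.float64)
    y = float(LUMA @ rgb)
    if y <= 0:
        return np.full(3, y_target)           # black has no hue: return grey
    out = rgb * (y_target / y)                # preserves chromaticity ratios
    return np.clip(out, 0.0, 1.0)             # naive gamut handling (lossy!)

c = np.array([0.8, 0.2, 0.1])                 # a saturated red
for y in (0.2, 0.5, 0.8):
    r = rescale_to_luminance(c, y)
    print(y, r, float(LUMA @ r))  # luminance drifts once clipping kicks in
```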
Betancur, Acevedo Julian Andrés. „Intégration d'images multimodales pour la caractérisation de cardiomyopathies hypertrophiques et d'asynchronismes cardiaques“. Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S089/document.
This work concerns cardiac characterization, a major methodological and clinical issue, both to improve disease diagnosis and to optimize its treatment. Multisensor registration and fusion methods are proposed to bring data from cardiac magnetic resonance imaging (CMRI), dynamic cardiac X-ray computed tomography (CT), speckle tracking echocardiography (STE) and electro-anatomical mappings of the inner left ventricular chamber (EAM) into a common reference frame. These data are used to describe the heart by its anatomy, its electrical and mechanical function, and the state of the myocardial tissue. The methods proposed to register the multimodal datasets rely on two main processes: temporal registration and spatial registration. The temporal dimensions of the input data (images) are warped with an adaptive dynamic time warping (ADTW) method, which handles the nonlinear temporal relationship between the different acquisitions. Concerning spatial registration, iconic methods were developed, on the one hand, to correct for motion artifacts in cine acquisitions, to register cine-CMRI and late gadolinium enhancement CMRI (LGE-CMRI), and to register cine-CMRI with dynamic CT. On the other hand, a contour-based method developed in a previous work was enhanced to account for multiview STE acquisitions. These methods were evaluated on real data in terms of the best metrics to use and the accuracy of the iconic methods, and to assess the STE to cine-CMRI registration. The fusion of these multisensor data enabled us to gain insights about the diseased heart in the context of hypertrophic cardiomyopathy (HCM) and cardiac asynchronism. For HCM, we aimed to improve the understanding of STE by fusing fibrosis from LGE-CMRI with strain from multiview 2D STE. This analysis allowed us to assess the significance of regional STE strain as a surrogate for the presence of regional myocardial fibrosis. Concerning cardiac asynchronism, we aimed to describe the intra-segment electro-mechanical coupling of the left ventricle using fused data from STE, EAM, CT and, if relevant, from LGE-CMRI. This feasibility study provided new elements to select the optimal sites for LV stimulation.
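Classical dynamic time warping, which the ADTW method builds on, aligns two temporal sequences by a minimal-cost monotone path. Below is a minimal textbook implementation, without the adaptive constraints of the thesis.

```python
# Sketch: classical DTW with backtracked alignment path.
import numpy as np

def dtw_path(a, b):
    """Return the accumulated cost and an optimal alignment path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):                   # backtrack the cheapest moves
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return D[n, m], path[::-1]

# Two cardiac-like traces, one nonlinearly stretched in time:
t = np.linspace(0, 1, 60)
a = np.sin(2 * np.pi * t) ** 2
b = np.sin(2 * np.pi * t ** 1.5) ** 2
cost, path = dtw_path(a, b)
print(float(cost), len(path))
```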
Bélanger, Jean. „Mise à jour de la Base de Données Topographiques du Québec à l'aide d'images à très haute résolution spatiale et du progiciel Sigma0 : le cas des voies de communication“. Thèse, 2011. http://hdl.handle.net/1866/6319.
In order to optimize and reduce the cost of road map updating, the Ministry of Natural Resources and Wildlife is considering exploiting high-definition color aerial photography within a global automatic detection process. In that regard, Montreal-based SYNETIX Inc. teamed up with the University of Montreal Remote Sensing Laboratory (UMRSL) to develop an application intended for the automatic detection of road networks on radiometrically complex high-definition imagery. This application, named SIGMA-ROUTES, is a derived module of a software package called SIGMA0, earlier developed by the UMRSL for optical and radar imagery of 5 to 10 meter resolution. SIGMA-ROUTES road detection relies on a map-guided filtering process that drives the filter along previously known road vectors and tags them as intact, suspect or lost depending on the filtering responses. As for updating new segments, the process first implies detecting potential starting points for new roads within the filtering corridor of previously known roads to which they should be connected. In that respect, it is a very challenging task to emulate the human visual filtering process and distinguish potential starting points of new roads on radiometrically complex high-definition imagery. In this research, we evaluate the application's efficiency in terms of total linear distance of detected roads as well as the spatial location of inconsistencies on a 2.8 km2 test site containing 40 km of various road categories in a semi-urban environment. As specific objectives, we first intend to establish the impact of different resolutions of the input imagery, and secondly to establish the potential gains of enhanced images (segmented and others) in a preemptive approach of better matching the image properties with the detection parameters. These results have been compared to a ground-truth reference obtained by a conventional visual detection process, on the basis of total linear distances and spatial location of detections. The best results, with the most efficient combination of resolution and pre-processing, have shown a 78% intact detection rate in accordance with the ground-truth reference when applied to a segmented resampled image. The impact of image resolution is clearly noted, as a change from 84 cm to 210 cm resolution altered the total detected distance of intact roads by around 15%. We also found many road segments ignored by the process and left without detection status, although they were directly linked to intact neighbours. By revising the algorithm and optimizing the image pre-processing, we estimate that a 90% intact detection performance can be reached. The new segment detection is inconclusive, as it generates an uncontrolled network of false detections throughout other entities in the images. Among these false detections of new roads, we identified numerous cases of new road detections parallel to previously assigned intact road segments. We conclude with a proposed procedure that involves enhanced images as input, combined with human intervention at critical levels, in order to optimize the final product.
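The map-guided verification step can be caricatured as sampling the image along a known road vector and tagging the segment from the fraction of road-like responses. The thresholds, response measure and function below are hypothetical illustrations, not SIGMA-ROUTES internals.

```python
# Sketch: tag a known road vector as intact / suspect / lost from image samples.
import numpy as np

def verify_road(image, polyline, road_value=0.9, tol=0.15, n_samples=200):
    pts = np.asarray(polyline, dtype=np.float64)
    seg = np.diff(pts, axis=0)
    d = np.hypot(seg[:, 0], seg[:, 1]).cumsum()          # cumulative arc length
    t = np.linspace(0, d[-1], n_samples)                 # uniform resampling
    rows = np.interp(t, np.concatenate(([0], d)), pts[:, 0])
    cols = np.interp(t, np.concatenate(([0], d)), pts[:, 1])
    vals = image[rows.round().astype(int), cols.round().astype(int)]
    hit = np.mean(np.abs(vals - road_value) < tol)       # road-like fraction
    return "intact" if hit > 0.8 else ("suspect" if hit > 0.4 else "lost")

img = np.zeros((64, 64)); img[32, 5:60] = 0.9            # a bright horizontal road
print(verify_road(img, [(32, 5), (32, 59)]))             # -> intact
print(verify_road(img, [(10, 5), (10, 59)]))             # -> lost
```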