Dissertations on the topic "Traitements des images"
Explore the top 50 dissertations for research on the topic "Traitements des images".
Comes, Serge. „Les traitements perceptifs d'images numérisées“. Université catholique de Louvain, 1993. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-04232003-110112/.
Amat, Jean-Louis. „Automatisation de traitements d'images : application au calcul de paramètres physiques en télédétection“. Nice, 1987. http://www.theses.fr/1987NICE4154.
Leducq, Paul. „Traitements temps-fréquence pour l'analyse de scènes complexes dans les images SAR polarimétriques“. PhD thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00133586.
The response of moving targets is studied. Its particular shape leads to a detection and refocusing method based on the fractional Fourier transform. The problem is then extended to targets whose reflectivity also depends on the illumination parameters (angle and frequency). An approach based on a model of the target and on the Matching-Pursuit algorithm is presented.
The detection of buildings in L-band SAR images of urban areas is addressed in the time-frequency framework. The complementary notions of stationarity and coherence are exploited to produce a time-frequency classification capable of identifying natural environments and different types of man-made targets. Applications to mapping and to the characterization of buildings are proposed.
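The Matching-Pursuit step named in the abstract admits a very small sketch: at each iteration, greedily pick the dictionary atom most correlated with the residual and subtract its projection. The random dictionary and two-atom signal below are toy stand-ins, not the thesis's angle- and frequency-dependent target model.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=20):
    """Greedily decompose `signal` over unit-norm dictionary atoms (rows):
    at each step, pick the atom most correlated with the residual and
    subtract its projection from the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        correlations = dictionary @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual = residual - correlations[k] * dictionary[k]
    return coeffs, residual

# Toy example: a signal built from two atoms of a random overcomplete dictionary.
rng = np.random.default_rng(0)
atoms = rng.standard_normal((50, 128))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)   # unit-norm atoms
x = 3.0 * atoms[7] - 2.0 * atoms[21]
coeffs, residual = matching_pursuit(x, atoms)
print(np.abs(coeffs).argmax())   # dominant atom recovered (7 for this seed)
```

The greedy selection converges quickly here because the two active atoms are nearly orthogonal; for coherent dictionaries, orthogonal matching pursuit is the usual refinement.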
Jarosz, Hervé. „Adaptation dynamique des opérateurs de détection en traitements d'image : application aux images interférométriques“. Paris 6, 1989. http://www.theses.fr/1989PA066256.
Xia, Deshen. „Contribution à la mise au point d'un logiciel de traitements d'images de télédetection sur micro-ordinateur“. Rouen, 1987. http://www.theses.fr/1987ROUEL045.
Satellite images of the Earth have recently enabled geographers to considerably extend the field of their observations. But the digitization, processing and visualization of these images require a solid and appropriate computer system. In collaboration with a group of researchers, the programmer helped solve the essential problems posed by the nature of the data to be processed. These data comprise about ten million elementary units called pixels. It was initially necessary to explore in depth the capacities of the system (the micro-computer system) for this processing. The latter, developed around an IBM PC-AT, proved very effective owing to its specialised cards, its flexible operating system (MS-DOS) and the combined use of two programming languages (C and assembler). The initial satellite data were available on a magnetic tape whose reading required a programme establishing the link between the tape controller and the IBM. Important problems were encountered: the inadequacy of one of the IBM BIOS functions, insufficient tape-reading speed using the C language, and the delicate link between the latter and the assembler. Once read, the data obtained from the tape were printed on the Canon PJ-A080 for the colour images, and on the Epson LQ-1500 and the laser writer MO 156Z for the black-and-white images, while other programmes enabled their visualisation. The laser writer, particularly its in-built micro-processor, gave very satisfactory results. To enable the printing of the results, the image underwent different processing steps to improve its appearance and refine its contrasts: analysis of the frequency histogram, classification, stretching and line-removing. It is possible to make a good interpretation of an area with an Earth satellite image, an appropriate micro-computer system and some printed images.
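The histogram-driven stretching this abstract mentions can be sketched as a percentile-based linear stretch, a generic illustration rather than the thesis's software; the synthetic "band" values are made up.

```python
import numpy as np

def percentile_stretch(img, low=2, high=98):
    """Linear contrast stretch: map the [low, high] percentile range of the
    input onto the full 0-255 output range, clipping the tails."""
    lo, hi = np.percentile(img, [low, high])
    out = (img.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# Toy low-contrast "satellite band": values packed into [100, 140].
rng = np.random.default_rng(1)
band = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
stretched = percentile_stretch(band)
print(band.min(), band.max(), stretched.min(), stretched.max())
```

Using the 2nd and 98th percentiles instead of the absolute min/max makes the stretch robust to a few outlier pixels, which matters on noisy scanned or sensed data.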
Delahaies, Agnès. „Contributions à des approches informationnelles en imagerie : traitements conjoints et résonance stochastique“. PhD thesis, Université d'Angers, 2011. http://tel.archives-ouvertes.fr/tel-00665908.
Imaging systems are continuously improving and their uses are spreading ever more widely. They are based on various physical principles, with ever-increasing sophistication (magnetic resonance imaging, thermography, multi- and hyperspectral imaging). Beyond this heterogeneity of constitution, the resulting images share the property of being a support of information. In this context, we propose a contribution to informational approaches in imaging, guided by a transposition of Shannon's informational paradigm to imaging along two main directions. We present a joint-processing approach in which the informational goal of the acquired images is prior knowledge exploited to optimize certain tuning configurations of the imaging systems. Different joint-processing problems are examined (joint observation scale and estimation, joint compression and estimation, and joint acquisition and compression). We then extend the field of stochastic resonance studies by exploring new signal-noise mixtures enabling useful-noise effects, in coherent imaging and in magnetic resonance imaging. Stochastic resonance is also considered for its specific informational significance (noise useful to information), as a phenomenon allowing the properties and potentialities of entropic or informational measures applied to imaging to be tested and further assessed. Stochastic resonance is especially used as a benchmark to confront such informational measures with psychovisual measures on images.
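The "noise useful to information" effect at the heart of stochastic resonance can be reproduced in a few lines with a generic one-bit threshold sensor: an illustrative toy, not the coherent or magnetic resonance imaging systems the thesis studies.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0 * np.pi, 2000)
signal = 0.4 * np.sin(t)        # subthreshold: never reaches the 1.0 threshold
threshold = 1.0

def output_correlation(noise_std, n_runs=50):
    """Mean correlation between the input sine and the 1-bit sensor output,
    averaged over independent noise realizations."""
    corrs = []
    for _ in range(n_runs):
        noisy = signal + rng.normal(0.0, noise_std, signal.shape)
        out = (noisy > threshold).astype(float)
        if out.std() == 0.0:
            corrs.append(0.0)   # sensor never fired: no information passes
        else:
            corrs.append(np.corrcoef(signal, out)[0, 1])
    return float(np.mean(corrs))

quiet = output_correlation(0.01)   # almost no noise: the output stays silent
helped = output_correlation(0.7)   # moderate noise: crossings track the sine peaks
print(quiet, helped)
```

With almost no noise the subthreshold sine is invisible at the output; a moderate noise level makes threshold crossings cluster around the signal peaks, so the input-output correlation rises before degrading again at large noise.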
Ding, Yi. „Amélioration de la qualité visuelle et de la robustesse aux post-traitements des images comprimées par une approche de type "problème inverse"“. Lyon, INSA, 1996. http://www.theses.fr/1996ISAL0086.
Low bit-rates can only be attained by lossy compression techniques. Most lossy compression schemes proceed by blocks at the transformation or quantization stage. The usual decompression algorithm is a symmetric inverse processing of the compression scheme. This generates blocking artifacts and ringing noise. Furthermore, the decompressed image cannot reliably support post-processing such as edge detection. In this work, we propose a decompression scheme based on the theory of inverse problems. The decompression restores the compressed image under constraints based on information about both the original image and the compression procedure: the image smoothness and the upper bound of the quantization error, respectively. We consider an extension of the regularized mean square approach for ill-posed problems proposed by Miller. This idea is first carried out for the JPEG algorithm. We propose two schemes. In the first one, the dequantization array is calculated to minimize the reconstruction error subject to a mean regularity constraint over all image blocks. This scheme is in full compliance with the JPEG standard. In the second approach, the original image is pre-filtered by an unsharp masking filter to enhance the image details before compression, and post-filtered by a low-pass inverse filter after decompression to remove the block artifacts. The inverse filter is designed for an optimal restoration subject to constraints on both the image roughness and the decompression error. This second technique is more efficient than the first one against blocking artifacts. It is also in full compliance with the JPEG standard but requires two additional processing components. The robustness of the decompressed image to edge detection was assessed for both proposed schemes.
We also propose an algorithm which adapts the block size to the correlation length of the image in JPEG and optimizes the coefficients of the quantization array. This approach is very efficient for medical images. The regularized restoration method is also applied to subband coding techniques that use vector quantization of the subband images. Two approaches are considered: in the first one, the image is restored at each resolution level, and in the second, a global restoration is applied. Experimental results show that both methods significantly reduce blocking effects and preserve the edges of compressed images. To complete the study, we compare the performance of our proposal to a non-linear approach adapted to the attenuation of ringing noise.
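The quantization-error bound both schemes rely on defines a convex set: any restored image must keep its DCT coefficients inside the bins implied by the transmitted indices. A minimal sketch of that consistency projection follows; the flat quantization table and the crude smoothing step are hypothetical simplifications, not the thesis's regularizer.

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix, built explicitly (no external dependency).
N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(b):  return C @ b @ C.T
def idct2(c): return C.T @ c @ C

Q = np.full((N, N), 20.0)   # hypothetical flat quantization table

def project_onto_bins(block, q_indices):
    """Clip the block's DCT coefficients into the quantization bins
    [(k - 0.5) Q, (k + 0.5) Q] implied by the transmitted indices, so any
    regularized estimate stays consistent with the bitstream."""
    c = dct2(block)
    c = np.clip(c, (q_indices - 0.5) * Q, (q_indices + 0.5) * Q)
    return idct2(c)

rng = np.random.default_rng(3)
original = rng.uniform(0.0, 255.0, (N, N))
q_idx = np.round(dct2(original) / Q)              # encoder: quantized indices
decoded = idct2(q_idx * Q)                        # standard mid-bin decoding
smoothed = 0.7 * decoded + 0.3 * decoded.mean()   # crude smoothness prior
restored = project_onto_bins(smoothed, q_idx)
consistency = np.abs(dct2(restored) / Q - q_idx).max()
print(consistency <= 0.5 + 1e-9)   # restored block stays inside its bins
```

Alternating a smoothing step with this projection is the projection-onto-convex-sets flavour of constrained deblocking; the regularized mean-square formulation above replaces the smoothing heuristic with an explicit criterion.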
Younan, Fouad. „Reconstruction de la dose absorbée in vivo en 3D pour les traitements RCMI et arcthérapie à l'aide des images EPID de transit“. Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30333/document.
This thesis deals with the dosimetry of high-energy photon beams delivered to the patient during external radiation therapy. The objective of this work is to use the Electronic Portal Imaging Device (EPID) to verify that the 3D absorbed dose distribution in the patient is consistent with the calculation performed on the Treatment Planning System (TPS). Acquisition is carried out in continuous mode with the aS-1200 amorphous silicon detector embedded on the TrueBeam STx machine (Varian Medical Systems, Palo Alto, USA) for 10 MV photons at a dose rate of 600 MU/min. The source-detector distance (SDD) is 150 cm. After correction of the defective pixels, a calibration step converts the signal into absorbed dose in water via a response function. Correction kernels are also used to account for the difference in materials between the EPID and water and to correct penumbra. A first back-projection model was developed to reconstruct the absorbed dose distribution in a homogeneous medium by taking into account several phenomena: photons scattered from the phantom to the EPID, beam attenuation, scatter within the phantom, build-up, and beam hardening with depth. The reconstructed dose is compared to the one calculated by the TPS with a global gamma analysis (3% maximum dose-difference criterion and 3 mm distance-to-agreement criterion). The algorithm was tested on a homogeneous cylindrical phantom and a pelvis phantom for Intensity-Modulated Radiation Therapy (IMRT) and Volumetric Modulated Arc Therapy (VMAT) techniques. The model was then refined to take heterogeneities in the medium into account by using radiological distances in a new dosimetric approach known as "in aqua vivo" (1). It was tested on a thorax phantom and, in vivo, on 10 patients treated for prostate tumors with VMAT fields.
Finally, the in aqua model was tested on the thorax phantom before and after making some modifications to evaluate the possibility of detecting errors that could affect the correct delivery of the dose to the patient. [...]
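The 3%/3 mm global gamma comparison used above can be illustrated with a brute-force 1-D implementation on hypothetical dose profiles (real workflows operate in 3D and interpolate the evaluated dose; this sketch only shows the criterion itself).

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, positions, dd=0.03, dta=3.0):
    """Global gamma: for each reference point, the minimum over all evaluated
    points of sqrt((dose difference / (dd * max dose))^2 + (distance / dta)^2).
    A reference point passes the 3%/3 mm test when its gamma is <= 1."""
    d_max = dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (p, d) in enumerate(zip(positions, dose_ref)):
        dose_term = (dose_eval - d) / (dd * d_max)
        dist_term = (positions - p) / dta
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

x = np.linspace(0.0, 100.0, 201)                  # positions in mm
ref = 100.0 * np.exp(-((x - 50.0) / 15.0) ** 2)   # hypothetical dose profile
g  = gamma_index(ref, 100.0 * np.exp(-((x - 51.0) / 15.0) ** 2), x)  # 1 mm shift
g5 = gamma_index(ref, 100.0 * np.exp(-((x - 55.0) / 15.0) ** 2), x)  # 5 mm shift
print((g <= 1.0).mean(), (g5 <= 1.0).mean())
```

A 1 mm setup shift stays well within the 3 mm distance-to-agreement, so every point passes, while a 5 mm shift fails in the steep-gradient regions, which is exactly the behaviour the pass-rate statistic is meant to capture.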
Azzabou, Noura. „Variable Bandwidth Image Models for Texture-Preserving Enhancement of Natural Images“. Paris Est, 2008. http://pastel.paristech.org/4041/01/ThesisNouraAzzabou.pdf.
This thesis is devoted to image enhancement and texture preservation. This task involves an image model that describes the characteristics of the recovered signal. Such a model is based on a definition of pixel interactions, often characterized by two aspects: (i) the photometric similarity between pixels, and (ii) the spatial distance between them, which can be compared to a given scale. The first part of the thesis introduces novel non-parametric image models for a more appropriate and adaptive image description, using variable-bandwidth approximations derived from a soft classification of the image. The second part introduces alternative means of modeling dependencies between observations from a geometric point of view, through statistical modeling of co-occurrences between observations and the use of multiple hypothesis testing and particle filters. The last part is devoted to novel adaptive means of spatial bandwidth selection and more efficient tools to capture photometric relationships between observations. The thesis concludes with other fields of application for this last technique, demonstrating its flexibility with respect to various problem requirements.
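The two pixel-interaction ingredients described, photometric similarity and spatial distance at a given scale, are exactly those of bilateral-type filters. A 1-D sketch with fixed bandwidths follows (the thesis's contribution is precisely to make such bandwidths variable and data-driven; the step signal here is a toy).

```python
import numpy as np

def bilateral_1d(signal, sigma_s=3.0, sigma_r=0.2, radius=6):
    """Replace each sample by a weighted mean of its neighbours, with weights
    decaying with spatial distance (sigma_s) and with photometric difference
    (sigma_r): noise is averaged out while edges are preserved."""
    n = len(signal)
    out = np.empty(n)
    offsets = np.arange(-radius, radius + 1)
    spatial_w = np.exp(-offsets**2 / (2.0 * sigma_s**2))
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)
        photometric_w = np.exp(-(signal[idx] - signal[i])**2 / (2.0 * sigma_r**2))
        w = spatial_w * photometric_w
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

rng = np.random.default_rng(4)
step = np.concatenate([np.zeros(50), np.ones(50)])   # ideal edge
noisy = step + rng.normal(0.0, 0.1, 100)
filtered = bilateral_1d(noisy)
print(filtered[10:40].std(), noisy[10:40].std())     # smoother flat region
```

Across the edge the photometric weight collapses (intensity gap of 1 against sigma_r = 0.2), so the two sides are averaged separately and the step survives the smoothing.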
Assadzadeh, Djafar. „Traitement des images échographiques“. Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37595555x.
Assadzadeh, Djafar. „Traitement des images échographiques“. Paris 13, 1986. http://www.theses.fr/1986PA132013.
Yao, Yijun. „Exploration d’un équipement d'observation non intrusif pour la compréhension des processus de projection thermique“. Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCA025.
The fourth industrial revolution ushered in a new technological era characterized by digitalization and intelligence. In this context, there is a growing tendency to combine traditional technologies with more modern information technologies. This approach is opening up a new avenue of interpretation for scientific research. In the context of this study, which is specific to thermal spraying, the work involved using a non-invasive display device to collect on-line images of a jet seeded with powder particles. Processing these images using a specially developed algorithm resulted in the extraction of relevant and reliable data on the construction processes of a spray coating. Indeed, thermal spraying, as a traditional technology in the field of surface treatments, is also a very promising technique in the field of additive manufacturing. The coatings produced by this method have excellent properties and are widely applied in a variety of sectors. It therefore seems important to change the paradigm by incorporating computer technologies. The experiments carried out enabled us to observe the phenomena and processes involved in the plasma spraying of alumina particles, and an algorithm was developed to extract the data of interest contained in the observed images (size distribution of the flying particles, growth pattern of the coating on the substrate, deposition efficiency, etc.). In this way, it was possible to study the particle velocity and flight-angle distributions throughout the plasma spraying process. Subsequently, validation of the observation technique and of the algorithm applied to plasma spraying made it possible to study the existing cold spraying process. In situ observation of copper particles was therefore carried out to identify the stacking process of cold-sprayed layers and to quantify the size and dispersion of the particles forming the deposit.
The study also combined different characterization methods to understand the process of layer stacking during cold spraying
Dawoud, Galal Mouawad. „Système imageur panoramique infrarouge : étude et conception, acquisition et traitement numérique des images“. Toulouse, ENSAE, 1993. http://www.theses.fr/1993ESAE0006.
Cosimi, Julien. „Caractérisations d'un jet de plasma froid d'hélium à pression atmosphérique“. Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30136.
Cold atmospheric-pressure plasma jets have been a subject of great interest in many biomedical fields over the past decade. In the various applications of these jets, the generated plasma can interact with many types of surfaces. Plasma jets influence the treated surfaces, but it is now well known that the treated surface also influences the plasma according to its characteristics. The work carried out in this thesis therefore aims to characterize a cold helium atmospheric-pressure plasma jet in contact with three surfaces (dielectric, metallic and ultrapure water) by means of different electrical and optical diagnostics, in order to understand the influence of the nature of the surfaces on the physical properties of the plasma and the chemical species generated. The first part of this thesis focuses on the influence of surfaces on the plasma jet. Different parameters are studied, such as the nature of the treated surface, the gas flow, the distance between the outlet of the device and the surface, and the composition of the injected gas. For this purpose, the helium flow at the outlet of the device is followed by Schlieren imaging with and without the discharge. Emission spectroscopy is used to determine the emissive species generated by the plasma. ICCD imaging is employed to follow the generation and propagation of the discharge and the distribution of several excited species in the jet using band-pass interference filters. A dielectric target causes the ionization wave to spread over its surface, and a conductive target leads to the formation of a conduction channel. The densities of excited species (OH*, N2*, He* and O*) increase with the relative permittivity of the treated surface. As is well known, active species generated by plasma jets play a fundamental role in the kinetics and chemistry of the mechanisms linked to plasma processes.
The second part of the present work therefore concerns the spatial and temporal evaluation of the density of the hydroxyl radical OH, which plays a major role in many cellular mechanisms. The spatial mapping and the temporal evolution of the absolute and relative densities of OH are obtained by LIF and PLIF laser diagnostics. The density of OH generated increases with the electrical conductivity of the treated surface. It can be noted that OH molecules remain present in the helium channel between two consecutive discharges (several tens of microseconds). Finally, we focus on the production of chemical species in ultrapure water treated with plasma. The influence of different parameters on the concentration of species in the treated water was studied in order to optimize the production of chemical species. Under our experimental conditions, grounding the ultrapure water during treatment increases the concentration of H2O2. Furthermore, the grounding induces a decrease in the NO2- concentration.
Roman-Gonzalez, Avid. „Compression Based Analysis of Image Artifacts: Application to Satellite Images“. PhD thesis, Telecom ParisTech, 2013. http://tel.archives-ouvertes.fr/tel-00935029.
Rousselot, Maxime. „Image quality assessment of High Dynamic Range and Wide Color Gamut images“. Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S034/document.
Screen technologies have evolved greatly in their ability to display striking images. For example, the contrast of high dynamic range (HDR) rendering systems far exceeds the capacity of a conventional display, and a wide color gamut (WCG) display can cover a larger color space than ever. Assessing the quality of this new content has become an active field of research, as classical SDR quality metrics are not adapted. However, state-of-the-art studies often neglect one important image characteristic: chrominance. Indeed, previous databases contain HDR images with a standard gamut, thus neglecting the increase in color space due to WCG. Because of their gamut, these databases are less prone to contain chromatic artifacts than WCG content. Moreover, most existing HDR objective quality metrics only consider luminance and ignore chromatic artifacts. To overcome this problem, in this thesis we created two HDR/WCG databases with annotated subjective scores, focusing on the creation of realistic chromatic artifacts that can arise during compression. In addition, using these databases, we explore three solutions to create HDR/WCG metrics. First, we propose a method to adapt SDR metrics to HDR/WCG content. Then, we propose an extension of a well-known HDR metric called HDR-VDP-2. Finally, we create a new metric based on the fusion of various quality metrics and color features. This last metric shows very good performance in predicting quality while being sensitive to chromatic distortions.
Lin, Xiangbo. „Knowledge-based image segmentation using deformable registration: application to brain MRI images“. Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001121.pdf.
The research goal of this thesis is a contribution to intra-modality, inter-subject non-rigid medical image registration and to the segmentation of 3D brain MRI images in the normal case. The well-known Demons non-rigid algorithm is studied, where the image intensities are used as matching features. A new force computation equation is proposed to solve the mismatch problem in some regions. Its efficiency is shown through numerous evaluations on simulated and real data. For intensity-based inter-subject registration, normalizing the image intensities is important to satisfy the intensity correspondence requirements. A non-rigid registration method combining both intensity and spatial normalization is proposed. Topology constraints are introduced into the deformable model to preserve an expected property in homeomorphic target registration. The solution comes from the correction of displacement points with negative Jacobian determinants. Based on the registration, a segmentation method for the internal brain structures is studied. The basic principle is represented by an ontology of prior shape knowledge of the target internal structure. The shapes are represented by a unified distance map computed from the atlas and the deformed atlas, and then integrated into the similarity metric of the cost function. A balance parameter is used to adjust the contributions of the intensity and shape measures. The influence of the different parameters of the method was analyzed and comparisons with other registration methods were performed. Very good results are obtained on the segmentation of different internal structures of the brain such as the central nuclei and the hippocampus.
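The Demons scheme studied here alternates a normalized intensity-difference force with smoothing of the displacement field. A minimal 1-D demons-like iteration is sketched below on toy images; it uses the warped image gradient, one common variant, and a three-tap smoother rather than the thesis's force equation.

```python
import numpy as np

def demons_step(fixed, moving, disp):
    """One demons-like iteration in 1-D: the intensity mismatch pushes the
    displacement along the warped image gradient (with the classical
    normalization), then the field is smoothed to stay regular."""
    x = np.arange(len(fixed), dtype=float)
    warped = np.interp(x + disp, x, moving)
    diff = warped - fixed
    grad = np.gradient(warped)
    denom = grad**2 + diff**2
    update = np.where(denom > 1e-12, -diff * grad / denom, 0.0)
    kernel = np.array([0.25, 0.5, 0.25])   # cheap Gaussian-like regularizer
    return np.convolve(disp + update, kernel, mode='same')

x = np.arange(200, dtype=float)
fixed = np.exp(-((x - 100.0) / 10.0) ** 2)
moving = np.exp(-((x - 104.0) / 10.0) ** 2)   # same structure, shifted 4 samples
disp = np.zeros_like(x)
for _ in range(300):
    disp = demons_step(fixed, moving, disp)
before = np.abs(moving - fixed).max()
after = np.abs(np.interp(x + disp, x, moving) - fixed).max()
print(before, after)   # the residual mismatch shrinks substantially
```

The normalization bounds each update to half a sample, which keeps the iteration stable; the smoothing step is what real implementations replace with Gaussian filtering of the field, and what the topology constraints mentioned above further restrict.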
Walbron, Amaury. „Analyse rapide d’images 3D de matériaux hétérogènes : identification de la structure des milieux et application à leur caractérisation multi-échelle“. Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2015/document.
Digital simulation is an increasingly widespread tool for the design and choice of composite materials. Indeed, it makes it possible to generate and test various structures digitally, more easily and quickly than with real manufacturing and testing processes. Following the choice and fabrication of a virtual material, feedback is needed to simultaneously validate the simulation and the fabrication process. With this aim, models similar to the generated virtual structures are obtained by digitization of the manufactured materials; the same simulation algorithms can then be applied to verify the forecasts. This thesis therefore deals with the modelling of composite materials from 3D images, in order to recover in them the original virtual material. Image processing methods are applied to extract the material structure data, i.e. the localization of each constituent and, if applicable, its orientation. This knowledge theoretically allows the thermal and mechanical behavior of structures made of the studied material to be simulated. In practice, however, accurately representing composites requires a very small discretization step, so the behavior simulation of a macroscopic structure needs too many discretization points, and hence too much time and memory. Part of this thesis therefore also focuses on the determination of an equivalent homogeneous material which, once identified, lightens the computation time of the simulation algorithms.
Sdiri, Bilel. „2D/3D Endoscopic image enhancement and analysis for video guided surgery“. Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD030.
Minimally invasive surgery has made remarkable progress in the last decades and has become a very popular diagnosis and treatment tool, especially with the rapid medical and technological advances leading to innovative new tools such as robotic surgical systems and wireless capsule endoscopy. Due to the intrinsic characteristics of the endoscopic environment, including dynamic illumination conditions and moist tissues with high reflectance, endoscopic images often suffer from several degradations, such as large dark regions with low contrast and sharpness, and many artifacts such as specular reflections and blur. These challenges, together with the introduction of three-dimensional (3D) imaging surgical systems, have raised the question of endoscopic image quality, which needs to be enhanced. The enhancement process aims either to provide surgeons and doctors with better visual feedback or to improve the outcome of subsequent tasks such as feature extraction for 3D organ reconstruction and registration. This thesis addresses the problem of endoscopic image quality enhancement by proposing novel enhancement techniques for both two-dimensional (2D) and stereo (i.e. 3D) endoscopic images. In the context of automatic tissue abnormality detection and classification for gastro-intestinal tract disease diagnosis, we propose a pre-processing enhancement method for 2D endoscopic images and wireless capsule endoscopy that improves both local and global contrast. The proposed method exposes subtle inner structures and tissue details, which improves the feature detection process and the automatic classification rate of neoplastic, non-neoplastic and inflammatory tissues. Inspired by the binocular vision attention features of the human visual system, we propose in another work an adaptive enhancement technique for stereo endoscopic images combining depth and edginess information.
The adaptability of the proposed method consists in adjusting the enhancement to both the local image activity and the depth level within the scene, while controlling the inter-view difference using a binocular perception model. A subjective experiment was conducted to evaluate the performance of the proposed algorithm in terms of visual quality, with scores from both expert and non-expert observers demonstrating the efficiency of our 3D contrast enhancement technique. In the same scope, another recent stereo endoscopic image enhancement work resorts to the wavelet domain in order to target the enhancement towards specific image components, using the multiscale representation and its efficient space-frequency localization property. The proposed joint enhancement methods rely on cross-view processing and depth information, for both the wavelet decomposition and the enhancement steps, to exploit inter-view redundancies together with perceptual human visual system properties related to contrast sensitivity and binocular combination and rivalry. The visual quality of the processed images and objective assessment metrics demonstrate the efficiency of our joint stereo enhancement in adjusting the image illumination in both dark and saturated regions and in emphasizing local image details such as fine veins and micro vessels, compared to other endoscopic enhancement techniques for 2D and 3D images.
Mitra, Jhimli. „Multimodal Image Registration applied to Magnetic Resonance and Ultrasound Prostatic Images“. PhD thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00786032.
Der volle Inhalt der QuelleWERTEL, FOURNIER ISABELLE. „L'iconographe dans le labyrinthe des mots et des images pour un imagier numerique comme espace cartographie de l'iconotheque“. Paris 8, 1999. http://www.theses.fr/1999PA081628.
Atnafu, Besufekad Solomon. „Modélisation et traitement de requêtes images complexes“. Lyon, INSA, 2003. http://theses.insa-lyon.fr/publication/2003ISAL0032/these.pdf.
Querying images by their content is of increasing importance in many application domains. For this reason, much research has been conducted in two fields: database management systems (DBMS) and pattern recognition. However, the majority of existing systems offer no formal framework for integrating the techniques of the two fields into effective multi-criteria queries on images. Such a formal framework is necessary if one intends to treat these types of image queries effectively. In this thesis, we present image data repository models that allow multi-criteria queries to be formulated. Based on this model, we introduce a similarity-based algebra that enables similarity-based and multi-criteria queries to be formulated effectively, and we present the properties of its operators. We also present techniques for similarity-based query optimization. We then developed a prototype called EMIMS (Extended Medical Image Management System) to demonstrate our proposals in practical applications.
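The basic similarity selection such an algebra formalizes (rank stored images by the distance of their feature vectors to a query, keep the k best) reduces to a nearest-neighbour scan; the feature table below is hypothetical, not EMIMS's data model.

```python
import numpy as np

def similarity_select(features, query, k=3):
    """Rank stored image feature vectors by Euclidean distance to the query
    and return the indices and distances of the k best matches."""
    dists = np.linalg.norm(features - query, axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

rng = np.random.default_rng(5)
table = rng.uniform(0.0, 1.0, (100, 16))        # 100 images, 16-D descriptors
query = table[42] + rng.normal(0.0, 0.01, 16)   # near-duplicate of image 42
idx, d = similarity_select(table, query)
print(idx[0])   # image 42 ranks first
```

An algebraic similarity operator adds to this scan a well-defined semantics (thresholds, composition with relational predicates), which is what enables the query optimizations the abstract mentions.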
Duclos, Pierre. „Étude du parallélisme en traitement des images“. Nice, 1988. http://www.theses.fr/1988NICE4209.
Atnafu Besufekad Solomon Brunie Lionel. „Modélisation et traitement de requêtes images complexes“. Villeurbanne : Doc'INSA, 2004. http://docinsa.insa-lyon.fr/these/pont.php?id=atnafu_besufekad.
Hachicha, Walid. „Traitement, codage et évaluation de la qualité d’images stéréoscopiques“. Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132037.
Recent developments in 3D stereoscopic technology have opened new horizons in many application fields such as 3DTV, 3D cinema, video games and videoconferencing, and at the same time raised a number of challenges related to the processing and coding of 3D data. Today, stereoscopic imaging technology is becoming widely used in many fields, yet problems remain related to the physical limitations of image acquisition systems and to transmission and storage requirements. The objective of this thesis is the development of methods for improving the main steps of the stereoscopic imaging pipeline: enhancement, coding and quality assessment. The first part of this work addresses quality issues, including contrast enhancement and quality assessment of stereoscopic images. Three algorithms have been proposed. The first algorithm deals with contrast enhancement, aiming to promote local contrast guided by a computed object-importance map of the visual scene. The second and third algorithms aim at predicting the distortion severity of stereo images. In the second, we propose a full-reference metric, which requires the reference image, based on 2D and 3D findings such as amplitude non-linearity, contrast sensitivity, frequency and directional selectivity, and a binocular just-noticeable-difference model. In the third, we propose a no-reference metric, which needs only the stereo pair to predict its quality. The latter is based on natural scene statistics to identify the distortion affecting the stereo image. The 3D statistical features combine features extracted from the natural stereo pair with those from the estimated disparity map. To this end, a joint wavelet transform inspired by the vector lifting concept is first employed; the features are then extracted from the obtained subbands.
The second part of this dissertation addresses stereoscopic image compression issues. We started by investigating a one-dimensional directional discrete cosine transform to encode the disparity-compensated residual image. Afterwards, based on the wavelet transform, we investigated two techniques for optimizing the computation of the residual image. Finally, we present efficient bit allocation methods for stereo image coding. Generally, the bit allocation problem is solved in an empirical manner by searching for the optimal rates leading to the minimum distortion value. Thanks to recently published work on approximations of the entropy and distortion functions, we propose accurate and fast bit allocation schemes appropriate for open-loop and closed-loop stereo coding structures.
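The empirical bit-allocation search mentioned in this abstract can be sketched in a few lines. This is an illustrative toy model, not the thesis's fast closed-form scheme: it assumes the classic high-rate distortion approximation D(R) = σ²·2^(−2R) for each view and exhaustively searches the rate split minimising total distortion.

```python
# Toy sketch of empirical bit allocation between the two views of a stereo
# pair. Assumes the classic high-rate model D(R) = sigma^2 * 2^(-2R);
# the variances and total budget below are illustrative values.

def distortion(sigma2, rate):
    """Model distortion of one view coded at `rate` bits per sample."""
    return sigma2 * 2.0 ** (-2.0 * rate)

def allocate_bits(sigma2_left, sigma2_right, total_bits, step=0.1):
    """Exhaustively search the rate split minimising total distortion."""
    best = None
    r = 0.0
    while r <= total_bits + 1e-9:
        d = distortion(sigma2_left, r) + distortion(sigma2_right, total_bits - r)
        if best is None or d < best[0]:
            best = (d, r, total_bits - r)
        r += step
    return best  # (total distortion, rate_left, rate_right)

d, r_left, r_right = allocate_bits(4.0, 1.0, 4.0)
# The view with the larger variance receives the larger rate share.
```

Closed-form schemes such as those the thesis derives replace this grid search with a direct solution of the same optimisation problem.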
Darmet, Ludovic. „Vers une approche basée modèle-image flexible et adaptative en criminalistique des images“. Thesis, Université Grenoble Alpes, 2020. https://tel.archives-ouvertes.fr/tel-03086427.
Images are nowadays a standard and mature medium of communication. They appear in our day-to-day life and are therefore subject to concerns about security. In this work, we study different methods to assess the integrity of images. Because of a context of high volume and versatility of tampering techniques and image sources, our work is driven by the necessity to develop flexible methods that adapt to the diversity of images. We first focus on manipulation detection through statistical modeling of the images. Manipulations are elementary operations such as blurring, noise addition, or compression. In this context, we are more precisely interested in the effects of pre-processing. Because of storage limitations or other reasons, images can be resized or compressed just after their capture. A manipulation would then be applied to an already pre-processed image. We show that pre-resizing of test data induces a drop in performance for detectors trained on full-sized images. Based on these observations, we introduce two methods to counterbalance this performance loss for a classification pipeline based on Gaussian mixture models. This pipeline models the local statistics, on patches, of natural images. It allows us to propose adaptations of the models driven by the changes in local statistics. Our first method of adaptation is fully unsupervised, while the second one, requiring only a few labels, is weakly supervised. Our methods are thus flexible enough to adapt to the versatility of image sources. Then we move to falsification detection, and more precisely to copy-move identification. Copy-move is one of the most common image tampering techniques. A source area is copied into a target area within the same image. The vast majority of existing detectors identify the two zones (source and target) indifferently. In an operational scenario, only the target area represents a tampering area and is thus an area of interest.
Accordingly, we propose a method to disentangle the two zones. Our method takes advantage of local modeling of statistics in natural images with Gaussian mixture models. The procedure is specific to each image, to avoid the need for a large training dataset and to increase flexibility. Results for all the techniques described above are illustrated on public benchmarks and compared to state-of-the-art methods. We show that the classical pipeline for manipulation detection with a Gaussian mixture model and an adaptation procedure can surpass the results of fine-tuned, recent deep-learning methods. Our method for source/target disentangling in copy-move also matches or even surpasses the performance of the latest deep-learning methods. We explain the good results of these classical methods against deep learning by their additional flexibility and adaptation abilities. Finally, this thesis took place in the special context of a contest jointly organized by the French National Research Agency and the General Directorate of Armament. We describe in the Appendix the different stages of the contest and the methods we developed, as well as the lessons we learned from this experience to move the image forensics domain into the wild.
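The core building block of this abstract, fitting a Gaussian mixture to local statistics, can be illustrated with a minimal sketch. This is a 1D two-component EM toy, not the thesis's patch-level pipeline; the synthetic data and initialisation are illustrative choices.

```python
import math
import random

# Toy sketch of Gaussian-mixture modelling: a two-component 1D GMM fitted
# with EM. The thesis models multivariate patch statistics; here we only
# separate two synthetic pixel populations.

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def fit_gmm(data, iters=50):
    mu = [min(data), max(data)]               # crude initialisation
    spread = (max(data) - min(data)) / 2.0
    var = [spread ** 2, spread ** 2]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample.
        resp = []
        for x in data:
            p = [pi[k] * gauss_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p) or 1e-300              # guard against underflow
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
data = [random.gauss(10, 1) for _ in range(200)] + [random.gauss(60, 4) for _ in range(200)]
mu, var, pi = fit_gmm(data)
```

The adaptation methods described in the abstract would then update such fitted parameters when the local statistics of the test images shift (e.g. after resizing).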
Li, Xiaobing. „Automatic image segmentation based on level set approach: application to brain tumor segmentation in MR images“. Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001120.pdf.
The aim of this dissertation is to develop an automatic segmentation of brain tumors from MRI volumes based on the level-set technique. The term "automatic" relies on the fact that the normal brain is symmetrical, so that localizing asymmetrical regions makes it possible to estimate the initial contour of the tumor. The first step is preprocessing, which corrects the intensity inhomogeneity of the MRI volume and spatially realigns the MRI volumes of the same patient acquired at different moments. The inter-hemispheric plane of the brain is then computed by maximizing the degree of similarity between one half of the volume and its reflection. The initial contour of the tumor can be extracted from the asymmetry between the two hemispheres. This initial contour is evolved and refined by the level-set technique in order to find the true contour of the tumor. Criteria for stopping the evolution are proposed, based on the properties of the tumor. Finally, the contour of the tumor is projected onto the adjacent slices to form the new initial contours. This process is iterated over all slices to obtain the segmentation of the tumor in 3D. The proposed system is used to follow up patients throughout the medical treatment period, with examinations every four months, allowing the physician to monitor the development of the tumor and evaluate the effectiveness of the therapy. The method was quantitatively evaluated by comparison with experts' manual tracings. Good results are obtained on real MRI images.
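The symmetry-based initialisation described in this abstract can be reduced to a simple principle: find the mirror axis that maximises the similarity between the data and its reflection. A hedged 1D sketch (a toy intensity profile, not the thesis's 3D similarity measure):

```python
# Toy sketch of symmetry-plane search: find the mirror axis of a 1D
# intensity profile by minimising the squared difference between the
# profile and its reflection about each candidate axis.

def reflection_score(profile, axis):
    """Sum of squared differences between the profile and its mirror about `axis`."""
    score, n = 0.0, len(profile)
    for i in range(n):
        j = 2 * axis - i          # mirror index of i about the axis
        if 0 <= j < n:
            score += (profile[i] - profile[j]) ** 2
    return score

def best_axis(profile):
    n = len(profile)
    return min(range(1, n - 1), key=lambda a: reflection_score(profile, a))

# A profile symmetric about index 10 (a V-shaped intensity ramp).
profile = [abs(10 - i) for i in range(21)]
axis = best_axis(profile)
```

In a realistic setting the score should be normalised by the size of the overlapping region (axes near the border compare fewer samples), and the search is carried out over 3D plane orientations rather than a single index.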
Tschumperlé, David. „PDE's based regularization of multivalued images and applications“. Nice, 2002. http://www.theses.fr/2002NICE5779.
We are interested in PDE-based approaches for vector-valued image regularization and their applications to a wide class of interesting image processing problems. A comparative study of existing methods allows us to propose a common mathematical framework, better adapted to understanding the underlying diffusion geometry of the regularization processes, as well as to designing corresponding numerical schemes. We thus develop a new multivalued image regularization approach that verifies important geometric properties and can be used in a large range of regularization-related applications. We also tackle the problem of constrained regularization and propose a specific variational formalism unifying, in a common framework, the equations acting on directional features: unit vectors, rotation matrices, diffusion tensors, etc. The proposed solutions are analyzed and used successfully to solve applications of interest, such as color image regularization and interpolation, flow visualization, regularization of rigid motions estimated from video sequences, and aided reconstruction of coherent fiber-network models in the white matter of the brain, using DT-MRI imaging.
Perrotton, Xavier. „Détection automatique d'objets dans les images numériques : application aux images aériennes“. Paris, Télécom ParisTech, 2009. http://www.theses.fr/2009ENST0025.
This thesis was conducted as part of a CIFRE contract between EADS Innovation Works and Telecom ParisTech. The presented work aims at defining techniques for the detection and localization of objects in digital images. This work focused on methods based on AdaBoost because of its theoretical and practical effectiveness. A new descriptor robust to background and target texture variations is introduced. A first object detection framework based on this descriptor is proposed and applied. The observed results prove that a vision system can be trained on adapted simulated data and yet be efficient on real images. A first improvement allows the proposed cascade to explore the space of descriptors and thus to improve the modeling of the target. The idea developed here consists in first building a cascade with one type of descriptor, and then introducing new kinds of descriptors when the current descriptor family no longer brings enough differentiating information. We present a novel boosting-based learning approach which automatically learns a multi-view detector without using intra-class sub-categorization based on prior knowledge. An implicit hierarchical structure enables both precise modeling and an efficient sharing of descriptors between views. These two complementary approaches are finally merged to obtain a complete algorithmic chain. By applying this model to different detection tasks, we verified, on the one hand, the efficiency of the multi-view model in learning different appearances and poses, and on the other hand, the performance improvement brought by combining descriptor families.
Huet-Guillemot, Florence. „Fusion d'images segmentees et interpretees. Application aux images aeriennes“. Cergy-Pontoise, 1999. http://biblioweb.u-cergy.fr/theses/99CERG0057.pdf.
Der volle Inhalt der QuelleTurcotte, Maryse. „Méthode basée sur la texture pour l'étiquetage des images“. Sherbrooke : Université de Sherbrooke, 2000.
Harp, Josselin. „Contribution à la segmentation des images : applications à l'estimation des plans texturés dans des images planes et omnidirectionnelles“. Amiens, 2003. http://www.theses.fr/2003AMIE0313.
Ainouz, Samia. „Analyse et traitement des images codées en polarisation“. Phd thesis, Université Louis Pasteur - Strasbourg I, 2006. http://tel.archives-ouvertes.fr/tel-00443685.
Ben, Arab Taher. „Contribution des familles exponentielles en traitement des images“. Phd thesis, Université du Littoral Côte d'Opale, 2014. http://tel.archives-ouvertes.fr/tel-01019983.
Daniel, Tomasz. „Dispositif pour le traitement numérique des images magnétooptiques“. Paris 11, 1985. http://www.theses.fr/1985PA112256.
An electronic system is described which effectively subtracts the non-magnetic contrast of magneto-optical images. At the same time, noise is reduced to such a degree that the contrast visibility limit in domain observation is extended by at least an order of magnitude. The incoming video signal, sampled at 10 Hz and coded on 8 bits, is stored in one of three 512x512x8 image memories. A high-speed arithmetic processor adds and subtracts images, pixel by pixel, at 10 bytes/s. The device is controlled by a 6809 microprocessor. The main mode of operation is to compare the image of a sample with magnetic domains to the image of the sample in a saturated state. The difference is then accumulated to achieve the necessary noise reduction. Improvements of longitudinal Kerr effect images are shown.
Bes, Marie-Thérèse. „Traitement automatique d'images scintigraphiques : application aux images cardiaques“. Toulouse 3, 1987. http://www.theses.fr/1987TOU30054.
Contassot-Vivier, Sylvain. „Calculs paralleles pour le traitement des images satellites“. Lyon, École normale supérieure (sciences), 1998. http://www.theses.fr/1998ENSL0087.
Horé, Alain. „Traitement des images bidimensionnelles à l'aide des FPGAs /“. Thèse, Chicoutimi : Université du Québec à Chicoutimi, 2005. http://theses.uqac.ca.
Bes, Marie-Thérèse. „Traitement automatique d'images scintigraphiques application aux images cardiaques /“. Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37602964k.
Zampieri, Karine. „IMAGEB, un système de traitement d'images“. Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37610884n.
Vilcahuaman, Cajacuri Luis Alberto. „Early diagnostic of diabetic foot using thermal images“. Phd thesis, Université d'Orléans, 2013. http://tel.archives-ouvertes.fr/tel-01022921.
Goudail, François. „Localisation d'objets dans des images fortement bruitées : une approche probabiliste paramétrique“. Aix-Marseille 3, 1997. http://www.theses.fr/1997AIX30013.
Chevallier, Emmanuel. „Morphologie, Géométrie et Statistiques en imagerie non-standard“. Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0082/document.
Digital image processing has followed the evolution of electronics and computer science. It is now common to deal with images valued not in {0,1} or in gray levels, but in manifolds or probability distributions. This is for instance the case for color images or in diffusion tensor imaging (DTI). Each kind of image has its own algebraic, topological and geometric properties. Thus, existing image processing techniques have to be adapted when applied to new imaging modalities. When dealing with new kinds of value spaces, former operators can rarely be used as they are. Even if the underlying notion still has a meaning, work must be carried out in order to express it in the new context. The thesis is composed of two independent parts. The first one, "Mathematical morphology on non-standard images", concerns the extension of mathematical morphology to specific cases where the value space of the image does not have a canonical order structure. Chapter 2 formalizes and demonstrates the irregularity issue of total orders in metric spaces. The main result states that for any total order in a multidimensional vector space, there are images for which the morphological dilations and erosions are irregular and inconsistent. Chapter 3 is an attempt to generalize morphology to images valued in a set of unordered labels. The second part, "Probability density estimation on Riemannian spaces", concerns the adaptation of standard density estimation techniques to specific Riemannian manifolds. Chapter 5 is a work on color image histograms under perceptual metrics. The main idea of this chapter consists in computing histograms using local Euclidean approximations of the perceptual metric, and not a global Euclidean approximation as in standard perceptual color spaces. Chapter 6 addresses the problem of non-parametric density estimation when data lie in spaces of Gaussian laws.
Different techniques are studied, and an expression of kernels is provided for the Wasserstein metric.
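For univariate Gaussian laws, the 2-Wasserstein distance mentioned above has a simple closed form: W2(N(m1, s1²), N(m2, s2²)) = sqrt((m1 − m2)² + (s1 − s2)²). A hedged sketch of how a kernel on the space of Gaussian laws could use it (the Gaussian kernel shape and bandwidth `h` are illustrative choices, not the thesis's kernel expression):

```python
import math

# Closed-form 2-Wasserstein distance between two univariate Gaussian laws,
# and a Gaussian-shaped kernel evaluated on that distance. The kernel shape
# and bandwidth are illustrative assumptions.

def w2_gaussians(m1, s1, m2, s2):
    """W2 distance between N(m1, s1^2) and N(m2, s2^2)."""
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def kernel_weight(sample, centre, h=1.0):
    """Kernel weight of one Gaussian law (mean, std) relative to another."""
    d = w2_gaussians(*sample, *centre)
    return math.exp(-d * d / (2.0 * h * h))

w = kernel_weight((0.0, 1.0), (0.0, 1.0))   # identical laws: maximal weight
```

Summing such weights over observed laws is the basic shape of a kernel density estimate on this space; the thesis's contribution lies in deriving proper kernel expressions for the metric.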
Chastel, Serge. „Contribution de la théorie des hypergraphes au traitement des images numériques“. Saint-Etienne, 2001. http://www.theses.fr/2001STET4008.
Recent developments in image processing have shown the promise of discrete formalizations of the digital picture and their study from the point of view of combinatorics. In this document we focus on the modeling of the digital picture by means of hypergraphs. [etc.]
Le Hir, Juliette. „Conception mixte d’un capteur d’images intelligent intégré à traitements locaux massivement parallèles“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC107/document.
Smart sensors allow embedded systems to analyse their environment without any transmission of raw data, which consumes a lot of power. This thesis presents an image sensor integrating image processing tasks. Two figures of merit are introduced in order to classify the state of the art of smart imagers regarding their versatility and their preservation of photosensitive area. This reveals a trade-off that this work aims at improving by using a macropixel approach. By merging processing elements (PEs) between several pixels, processing tasks are both massively parallel and potentially more versatile at a given photosensitive area. An adaptation of spatial and temporal filtering matching such an architecture is proposed (downsampling by 3x3 and 2x2 pixels respectively for each processing task) and functionally validated. An architecture of asymmetric macropixels is thus presented. The designed PE is an analog switched-capacitor circuit controlled by out-of-matrix digital electronics. The sizing of the PE is discussed with respect to the trade-off between accuracy and area, and implemented in an approximate-computing approach in our study. The proposed matrix of pixels and PEs is simulated in post-layout extracted views and shows good results on computed images of edge detection or temporal difference, with a 28% fill factor.
Capri, Arnaud. „Caractérisation des objets dans une image en vue d'une aide à l'interprétation et d'une compression adaptée au contenu : application aux images échographiques“. Orléans, 2007. http://www.theses.fr/2007ORLE2020.
Puteaux, Pauline. „Analyse et traitement des images dans le domaine chiffré“. Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS119.
During the last decade, the security of multimedia data, such as images, videos and 3D data, has become a major issue. With the development of the Internet, more and more images are transmitted over networks and stored in the cloud. This visual data is usually personal or may have a market value. Thus, computer tools have been developed to ensure their security. The purpose of encryption is to guarantee the visual confidentiality of images by making their content random. Moreover, during the transmission or archiving of encrypted images, it is often necessary to analyse or process them without knowing their original content or the secret key used during the encryption phase. This PhD thesis proposes to address this issue. Indeed, many applications exist, such as secret image sharing, data hiding in encrypted images, image indexing and retrieval in encrypted databases, recompression of crypto-compressed images, or correction of noisy encrypted images. In a first line of research, we present a new method of high-capacity data hiding in encrypted images. In most state-of-the-art approaches, the values of the least significant bits are replaced to embed a secret message. We take the opposite view by proposing to predict the most significant bits. Thus, a significantly higher payload is obtained, while maintaining a high quality of the reconstructed image. Subsequently, we show that it is possible to recursively process all bit planes of an image to achieve data hiding in the encrypted domain. In a second line of research, we explain how to exploit statistical measures (Shannon entropy and a convolutional neural network) in small pixel blocks (i.e. with few samples) to discriminate a clear pixel block from an encrypted pixel block in an image.
We then use this analysis in an application to correct noisy encrypted images. Finally, the third line of research developed in this thesis concerns the recompression of crypto-compressed images. In the clear domain, JPEG images can be recompressed before transmission over low-speed networks, but the operation is much more complex in the encrypted domain. We therefore propose a method for recompressing crypto-compressed JPEG images directly in the encrypted domain, without knowing the secret key, using a bit shift of the reorganized coefficients.
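The entropy-based discrimination mentioned in this abstract rests on a simple observation: an encrypted block is statistically near-uniform, so its Shannon entropy approaches 8 bits per pixel, while natural blocks are usually far lower. A minimal sketch (the 7.0-bit threshold and synthetic blocks are illustrative assumptions, not the thesis's calibrated test):

```python
import math
import random

# Sketch of entropy-based clear/encrypted block discrimination.
# An encrypted (near-uniform) block has entropy close to 8 bits per pixel;
# a natural block is usually much lower. Threshold is an illustrative choice.

def block_entropy(pixels):
    """Shannon entropy (bits) of the pixel-value histogram of a block."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(pixels, threshold=7.0):
    return block_entropy(pixels) > threshold

random.seed(1)
encrypted = [random.randrange(256) for _ in range(4096)]   # near-uniform block
smooth = [128 + (i % 8) for i in range(4096)]              # low-entropy block
```

The thesis notes that this becomes delicate on small blocks, where the histogram is poorly sampled, which is precisely where learned discriminators such as a CNN take over.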
Aziz, Fatima. „Approche géométrique couleur pour le traitement des images catadioptriques“. Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0080/document.
This manuscript investigates omnidirectional catadioptric color images as Riemannian manifolds. This geometric representation offers insights into the resolution of problems related to the distortions introduced by the catadioptric system, in the context of the color perception of autonomous systems. The report starts with an overview of omnidirectional vision, the different systems used, and the geometric projection models. Then, we present the basic notions and tools of Riemannian geometry and its use in the image processing domain. This leads us to introduce some useful differential operators on Riemannian manifolds. We develop a method for constructing a hybrid metric tensor adapted to color catadioptric images. This tensor has the dual characteristic of depending on both the geometric position of the image points and their photometric coordinates. In this work, we mostly deal with the exploitation of the previously constructed hybrid metric tensor in catadioptric image processing. Indeed, it is recognized that the Gaussian function is at the core of several filters and operators for various applications, such as noise reduction or the extraction of low-level characteristics from the Gaussian scale-space representation. We thus build a new Gaussian kernel dependent on the Riemannian metric tensor. It has the advantage of being applicable directly in the catadioptric image plane, while being spatially variant and dependent on local image information. In the final part of this thesis, we discuss some possible robotic applications of the hybrid metric tensor. We propose to define free space and distance transforms in the omni-image, and then to extract the geodesic medial axis. The latter is a relevant topological representation for autonomous navigation, which we use to define an optimal trajectory planning method.
Piedpremier, Julien. „Les grandes images“. Paris 8, 2005. http://www.theses.fr/2005PA082545.