Doctoral dissertations on the topic „Analyses d’images”
Below are the top 50 doctoral dissertations on the topic „Analyses d’images”.
Plougonven, Erwan Patrick Yann. "Lien entre la microstructure des matériaux poreux et leur perméabilité : mise en évidence des paramètres géométriques et topologiques influant sur les propriétés de transport par analyses d’images microtomographiques". Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13847/document.
The objective of this work is to develop 3D image analysis tools to study the micron-scale pore structure of porous materials imaged by X-ray microtomography, and to study the relation between microgeometry and macroscopic transport properties. From a binarised image of the pore space, a complete processing sequence (artefact filtration, skeletonisation, watershed, etc.) is proposed for positioning and delimiting the pores. A comparison with available methods is performed, and a methodology to qualify the robustness of these processes is presented. The decomposition is used, first, to extract geometric parameters of the porous microstructure and study their relation with intrinsic permeability; second, to produce a simplified pore network on which to perform numerical simulations.
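The distance-transform/watershed pipeline this abstract describes is a standard way to split a connected pore space into individual pores. As a purely illustrative 2D sketch (SciPy-based, with a toy geometry of my own; not code from the thesis):

```python
import numpy as np
from scipy import ndimage as ndi

# Toy binarised pore space: two overlapping circular pores joined by a throat.
yy, xx = np.mgrid[0:60, 0:60]
pores = ((xx - 20) ** 2 + (yy - 30) ** 2 < 140) | ((xx - 40) ** 2 + (yy - 30) ** 2 < 140)

# Distance transform: each pore pixel's distance to the solid phase.
dist = ndi.distance_transform_edt(pores)

# One marker per local distance maximum, i.e. per pore centre.
maxima = (dist == ndi.maximum_filter(dist, size=15)) & pores
markers, n_markers = ndi.label(maxima)

# Watershed on the inverted distance map splits the connected pore
# space into individual pores along the narrow throat between them.
labels = ndi.watershed_ift((dist.max() - dist).astype(np.uint16), markers)
labels[~pores] = 0
```

The same idea carries over to 3D voxel images, where the marker-selection step (here a plain local-maximum filter) is what the thesis' robustness methodology would scrutinise.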
Robinault, Lionel. "Mosaïque d’images multi résolution et applications". Thesis, Lyon 2, 2009. http://www.theses.fr/2009LYO20039.
The thesis considers the use of motorized cameras with 3 degrees of freedom, commonly called PTZ cameras. The orientation of such cameras is controlled by two angles: the panorama angle (θ) describes the rotation around a vertical axis, and the tilt angle (ϕ) refers to rotation along a meridian line. Theoretically, these cameras can cover an omnidirectional field of view of 4π steradians; in practice, the panorama angle and especially the tilt angle are limited. In addition to controlling the orientation of the camera, it is also possible to control the focal length, providing an additional degree of freedom. Compared to other equipment, PTZ cameras thus make it possible to build a panorama of very high resolution. A panorama is a wide representation of a scene built from a collection of images. The first stage in the construction of a panorama is the acquisition of the individual images. To this end, we carried out a theoretical study to determine the optimal paving of the sphere with rectangular surfaces that minimizes the number of overlap zones. This study enables us to compute an optimal trajectory of the camera and to limit the number of images needed to represent the scene. We also propose various processing techniques which appreciably improve the rendering of the mosaic image and correct most of the defects related to assembling a collection of images acquired with differing capture parameters. A significant part of our work was devoted to automatic image registration in real time, i.e. under 40 ms. The technique we developed makes it possible to obtain a particularly precise registration with a computation time of about 4 ms (AMD 1.8 GHz). Our research leads directly to two proposed applications for the tracking of moving objects. The first involves the use of a PTZ camera and a spherical mirror.
The combination of these two elements makes it possible to detect any moving object in the scene and then to focus on one of them. Within the framework of this application, we propose an automatic calibration algorithm for the system. The second application uses only the PTZ camera and allows the segmentation and tracking of objects in the scene while the camera moves. Compared to traditional motion-detection applications with a PTZ camera, our approach differs in that it computes a precise segmentation of the objects, allowing their classification.
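The thesis' own real-time registration method is not reproduced in the abstract, but classical FFT phase correlation gives a feel for how millisecond-scale alignment of image pairs is possible; the function below and its toy data are illustrative only:

```python
import numpy as np

def phase_correlation(a, b):
    # Estimate the integer translation between two same-size images by
    # phase correlation: the normalised cross-power spectrum of a pure
    # translation is a complex exponential whose inverse FFT is a delta
    # peak located at the shift.
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = a.shape
    # Unwrap circular indices to signed shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(2)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
shift = phase_correlation(shifted, img)  # → (3, -5)
```

Both FFTs are O(n log n), which is what makes this family of methods compatible with the real-time budgets quoted above.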
Madani, Ikram. "Plasticité du système racinaire du blé en condition de carence en N, P ou K révélée par développement d'une méthodologie de phénotypage intégrant les poils absorbants". Electronic Thesis or Diss., Université de Montpellier (2022-....), 2022. http://www.theses.fr/2022UMONG059.
Low macroelement availability in most cultivated soils severely limits crop yields in the absence of fertilization. A better understanding of the adaptation of root systems to nutrient-poor soils, and the exploitation of existing genetic diversity between species and/or varieties, are likely to contribute to the development of new cultivars and new agronomic practices that limit costly and environmentally polluting chemical fertilization inputs. The architecture of the root system and the production of root hairs at the root-soil interface are major determinants of the capacity of the root system to explore the soil and take up nutrient ions. To date, no methodology has been available to phenotype root hairs across an entire root system. In this thesis, I developed a methodology for global, integrative phenotyping of root systems, including root hairs. An original rhizobox-type device was developed to acquire high-resolution images, for which I designed a computerized analysis procedure combining the free software Ilastik for image segmentation with WinRHIZO™ and ImageJ for the analysis of global traits characterizing root development. After validation of the methodology, the root systems of two wheat genotypes, a cultivated emmer wheat cultivar (T.t. dicoccum, cv Escandia), ancestor of durum wheat, and a landrace of durum wheat (T.t. durum, cv Oued Zenati), were compared with each other and with respect to their response to low phosphate (P), nitrogen (N) or potassium (K) availability. In 15-day-old seedlings (roots ca. 30 cm long), N, P or K deficiencies differentially affected plant growth (biomass allocation between roots and leaves, and preferential development of the root system).
All three deficiencies were found to increase the total surface area of the root system, resulting primarily from an increase in the total surface area of root hairs over the entire root system (reflecting an increase in the density and/or length of hairs over the entire system). The rate of increase in total absorptive root hair area varied between the two varieties and among limiting elements, being strongest under N deficiency in the emmer wheat and under P deficiency in the landrace. All the root responses analyzed, whether or not they included root hairs, revealed greater developmental plasticity in response to nutrient deficiency in the ancestral variety. A perspective opened by this work would be to compare this plasticity across wheat varieties recapitulating the domestication and improvement of this species. I also show that the methodology I have developed can be used to phenotype root responses to biotic conditions (presence of plant-growth-promoting rhizobacteria).
Boucher, Arnaud. "Recalage et analyse d’un couple d’images : application aux mammographies". Thesis, Paris 5, 2013. http://www.theses.fr/2013PA05S001/document.
In the scientific world, signal analysis and especially image analysis is a very active area, owing to the variety of existing applications, with issues such as file compression, video surveillance and medical image analysis. This last area is particularly active: the number of devices and of pictures taken produces a large amount of information to be processed by practitioners, who can now be assisted by computers. In this thesis, the problem addressed is the development of a computer-aided diagnosis system based on joint analysis, and therefore on the comparison of medical images. This approach makes it possible to look for evolutions or aberrant tissues within a given set, rather than attempting to characterize, with a strong a priori, the type of tissue sought. It captures one aspect of the analysis of medical files performed by experts: the study of a case through the comparison of evolutions. This task is not easy to automate; the human eye performs quasi-automatically treatments that we need to replicate. Before comparing a region on the two images, we need to determine where this area is located in both pictures. Any automated comparison of signals requires a registration phase, an alignment of the components present in the pictures, so that they occupy the same space in the two images. Although the characteristics of the processed images allow the development of a smart registration, the projection of a 3D reality onto a 2D image causes differences due to the orientation of the observed tissues, and does not allow a pair of shots to be analyzed by a simple difference between images. Different structurings of the pictures and different deformation fields are developed here to address the registration problem efficiently. After the differences between the pictures have been minimized, the analysis of tissue evolution is performed not at the pixel level but on the tissues themselves, as an expert would do.
To process the images in this logic, they are reinterpreted, not as pixels of different brightness, but as patterns representative of the entire image, enabling a new decomposition of the pictures. The advantage of such a representation is that it highlights another aspect of the signal and allows the information necessary for diagnosis aid to be analyzed from a new perspective. This thesis was carried out in the LIPADE laboratory of University Paris Descartes (SIP team, specialized in image analysis) in collaboration with the company Fenics (designer of diagnosis-aid stations for the analysis of mammograms) under a CIFRE agreement. The convergence of the research fields of these teams led to this document.
Lelièvre, Stéphanie. "Identification et caractérisation des frayères hivernales en Manche Orientale et la partie sud de la mer du Nord : Identification des oeufs de poissons, cartographie et modélisation des habitats de ponte". Nantes, 2010. http://www.theses.fr/2010NANT2110.
A better knowledge and monitoring of the principal commercial fish spawning grounds have become necessary in the North Sea. The efficiency of CUFES was proved by sampling pelagic fish eggs in winter in the Eastern Channel and Southern North Sea. Fish egg taxonomic identification based on visual criteria cannot always be carried out effectively: in particular, cod (Gadus morhua) and whiting (Merlangius merlangus), or flounder (Platichthys flesus) and dab (Limanda limanda), have the same range of egg diameter and similar morphologies. Alternative identification methods using molecular techniques were developed to improve the accuracy of egg taxonomic identification: first a PCR-RFLP method; then, in order to accelerate egg identification, a new laboratory imaging system, the ZooScan, able to produce high-resolution images of zooplankton samples, was adapted to fish eggs and allowed their automated identification using supervised learning algorithms. The locations of the winter spawning grounds of fishes in the Southern North Sea and the Eastern Channel were illustrated using yearly maps and analysed over the available period to define recurrent, occasional and unfavorable spawning areas. Generally, fish eggs were found over the whole study area, except for the north-western North Sea near the Scottish coast. Important spawning areas were clearly localised along the Belgian, Dutch and Danish coasts. Habitat modelling of these fish spawning areas was carried out using both GLMs (Generalised Linear Models) and QR (Quantile Regression), associating egg abundance with physical conditions such as temperature, salinity, bed stress, chlorophyll-a concentration and bottom sediment type, in order to characterize spawning habitat conditions and predict their extent and location. The results of this approach improve the understanding of spawning-ground distribution and are discussed in the context of the protection and conservation of critical spawning grounds.
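As a hedged illustration of the GLM step described above (the covariates, coefficients and counts below are synthetic, not the thesis' survey data), a Poisson GLM with log link, the usual choice for count data such as egg abundance, can be fitted by iteratively reweighted least squares in a few lines:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic survey: egg counts depend log-linearly on temperature and
# salinity (all numbers invented for illustration).
n = 400
temp = rng.uniform(5, 12, n)
sal = rng.uniform(30, 35, n)
X = np.column_stack([np.ones(n), temp, sal])
beta_true = np.array([-2.0, 0.3, 0.05])
y = rng.poisson(np.exp(X @ beta_true))

# Poisson GLM with log link, fitted by IRLS (Fisher scoring).
beta = np.array([np.log(y.mean() + 0.5), 0.0, 0.0])
for _ in range(25):
    mu = np.exp(X @ beta)
    w = mu                            # Poisson variance function: Var = mu
    z = X @ beta + (y - mu) / mu      # linearised "working" response
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
```

In practice one would use a statistics package rather than hand-rolled IRLS; the point is only that the fitted coefficients recover the log-linear effect of each environmental variable on expected egg abundance.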
Portes de Albuquerque, Marcelo. "Analyse par traitement d’images de domaines magnétiques". Grenoble INPG, 1995. http://www.theses.fr/1995INPG0036.
Wang, Long. "Etude de l’influence de la microstructure sur les mécanismes d’endommagement dans des alliages Al-Si de fonderie par des analyses in-situ 2D et 3D". Thesis, Ecole centrale de Lille, 2015. http://www.theses.fr/2015ECLI0004/document.
An experimental protocol was developed in this thesis to study the influence of the casting microstructure on the damage behavior of Lost Foam Casting Al-Si alloys in tension and in low-cycle fatigue at room temperature. First, the microstructures of the studied alloys were thoroughly characterized both in 2D and in 3D. The most suitable and representative specimens, and the regions of interest (ROIs) where in-situ monitoring was performed, were selected through a preliminary characterization using X-ray tomography, which is also necessary for understanding damage mechanisms after failure. In-situ observations, performed on the surface with a Questar long-distance microscope and in the volume with X-ray tomography, allow crack initiation and propagation to be followed, and thus the relation between damage mechanisms and casting microstructure to be identified. 2D/3D displacement and strain fields measured using Digital Image Correlation and Digital Volume Correlation allow the relation between the measured fields and the damage mechanisms to be analyzed. Postmortem analysis and FEM simulation gave further insight into the damage mechanisms: large pores favor crack initiation, as they strongly increase the local stress level, and hard inclusions (Si phase, iron intermetallics and copper-containing phases) also play an important role in crack initiation and propagation due to strain localization at these inclusions.
Petit, Cécile. "Analyse d’images macroscopiques appliquée à l’injection directe Diesel". Saint-Etienne, 2006. http://www.theses.fr/2006STET4005.
Due to emission standards, car manufacturers have to improve combustion. This can be achieved by studying Diesel direct injection, particularly fuel atomization, since it determines the mixture quality. The macroscopic Diesel spray is investigated using image processing. An image reference point is first calculated: the virtual spray origin (VSO), deduced from the primary inertia axes of the elongated spray plumes and from a Voronoï diagram. These plumes are analyzed by computing their penetration, angle and barycenter. Next, the line deduced from the spray plume boundary, passing through the virtual injection center, is evaluated. This axis is the reference for the internal symmetry, expressed in terms of correlation and of distances: absolute, Euclidean, infinite and logarithmic, the last based on the Logarithmic Image Processing model. This logarithmic distance makes it possible to compare sprays acquired under different conditions (light source, ambient medium); it captures the internal symmetry of the continuous liquid core. Then the line deduced from the plume grey levels, forced to pass through the VSO, with the distance to the VSO as an additional weight, is calculated. This axis is the basis of the external symmetry, established in terms of correlation and of distances: absolute, Euclidean, infinite and Hausdorff. Finally, a spray image can be evaluated using a single parameter such as the Asplund distance, circularities, or barycenter. A study of the penetration and angle populations then shows their correlation, part-to-part and plume-to-plume variation, and non-Gaussian distributions. Injectors are subsequently compared using the image-processing parameters, and the study of data tendencies shows how promising these parameters are.
Journet, Nicholas. "Analyse d’images de documents anciens : une approche texture". La Rochelle, 2006. http://www.theses.fr/2006LAROS178.
My PhD thesis deals with the indexing of images of old documents. Such corpora have specific characteristics: the content (text and image) as well as the layout information are highly variable, so it is not possible to work on them as is usually done with contemporary documents. Indeed, the first tests that we carried out on the corpus of the Centre d’Etude de la Renaissance, with which we work, confirmed that traditional model-driven approaches are not very efficient, because it is impossible to make assumptions about the physical or logical structure of old documents. We also noted the lack of tools allowing the indexing of large databases of old document images. In this PhD work, we propose a new generic method for characterizing the content of old document images. This characterization is carried out using a multiresolution study of the textures contained in the document images. By constructing signatures related to the frequencies and orientations of the various parts of a page, it is possible to extract, compare or identify different kinds of semantic elements (drop caps, illustrations, text, layout, etc.) without making any assumptions about the physical or logical structure of the analyzed documents. This texture information is the basis of indexing tools for large databases of old document images.
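A minimal, single-scale sketch of a frequency/orientation texture signature in the spirit described above (the binning scheme and function name are my own illustrative assumptions, not the thesis' actual descriptor, which works over multiple resolutions):

```python
import numpy as np

def orientation_signature(patch, n_bins=8):
    # Energy of the centred FFT power spectrum, binned by orientation:
    # text lines, illustrations and ornaments each spread their spectral
    # energy over characteristic directions.
    f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(f) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    theta = np.arctan2(yy - h // 2, xx - w // 2) % np.pi
    bins = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
    sig = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    return sig / sig.sum()

# Horizontal stripes (a crude stand-in for text lines): all spectral
# energy sits on the vertical frequency axis, i.e. the pi/2 bin.
stripes = np.tile(np.sin(2 * np.pi * 4 * np.arange(32) / 32)[:, None], (1, 32))
sig = orientation_signature(stripes)
```

Comparing such signatures between page regions, at several resolutions, is what allows elements to be grouped without any layout model.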
Margeta, Ján. "Apprentissage automatique pour simplifier l’utilisation de banques d’images cardiaques". Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0055/document.
The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent in the large quantity of data, the databases are difficult to use in their current state: data coming from multiple sources are often unstructured, the image content is variable and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and automatically find the best representations. First, the inconsistent metadata are cleaned, and the interpretation and visualisation of images are improved, by automatically recognising commonly used cardiac magnetic resonance imaging views from image content; the method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning for the extraction of relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images: new spatio-temporal image features are designed, and classification forests are trained to segment the main cardiac structures (left ventricle and left atrium) automatically from voxel-wise label maps. Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine learning technique is used to map cardiac images so as to establish distances and neighborhoods between images; one application is the retrieval of the most similar images.
Farage, Grégory. "Filtrage multiéchelle et turbo filtrage d’images polarimétriques". Thèse, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6748.
Huck, Alexis. "Analyse non-supervisée d’images hyperspectrales : démixage linéaire et détection d’anomalies". Aix-Marseille 3, 2009. http://www.theses.fr/2009AIX30036.
This thesis focuses on two research fields regarding the unsupervised analysis of hyperspectral images (HSIs). Under the assumptions of the linear spectral mixing model, the formalism of Non-negative Matrix Factorization (NMF) is investigated for unmixing purposes. We propose judicious spectral and spatial a priori knowledge to regularize the problem, and we propose an estimator of the optimal step size for the projected gradient. Suitably regularized NMF is thus shown to be a relevant approach to unmix HSIs. Then the problem of anomaly detection is considered. We propose an algorithm for Anomalous Component Pursuit (ACP), based simultaneously on projection pursuit and on a probabilistic model with hypothesis testing. ACP detects anomalies with a constant false-alarm rate and discriminates them into spectrally homogeneous classes.
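The thesis' regularised projected-gradient NMF is not reproduced in the abstract, so as a baseline illustration only, plain multiplicative-update NMF under the linear mixing model can be sketched as follows (endmembers and abundances are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear mixing model: each pixel spectrum is a non-negative combination
# of r endmember spectra (all values synthetic, for illustration).
n_pixels, n_bands, r = 100, 30, 3
S_true = rng.uniform(0.1, 1.0, (r, n_bands))       # endmember spectra
A_true = rng.dirichlet(np.ones(r), size=n_pixels)  # per-pixel abundances
X = A_true @ S_true

# Plain multiplicative-update NMF (Lee-Seung): the unregularised baseline
# on top of which spectral/spatial priors, as in the thesis, would be added.
A = rng.uniform(0.1, 1.0, (n_pixels, r))
S = rng.uniform(0.1, 1.0, (r, n_bands))
eps = 1e-9
for _ in range(500):
    A *= (X @ S.T) / (A @ S @ S.T + eps)
    S *= (A.T @ X) / (A.T @ A @ S + eps)

rel_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
```

Multiplicative updates keep both factors non-negative by construction; a projected-gradient variant instead takes explicit gradient steps (with the step size the thesis estimates) and clips negative entries to zero.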
Debroutelle, Teddy. "Détection et classification de décors gravés sur des céramiques anciennes par analyse d’images". Thesis, Orléans, 2018. http://www.theses.fr/2018ORLE2015/document.
The ARCADIA project aims to develop an automatic method for analyzing the engraved decorations on ceramic sherds in order to facilitate the interpretation of this archaeological heritage. It aims to replace the manual, tedious procedure carried out by the archaeologist, the corpus having grown to more than 38,000 sherds. The ultimate goal is to group all the decorations created with the same wheel by a potter. We developed a complete chain from the 3D scanning of the sherd to the automatic classification of the decorations according to their style (diamonds, squares, chevrons, oves, etc.). In this context, several contributions are proposed, implementing image analysis and machine learning methods. From the 3D point cloud, a depth map is extracted and an original method is applied to automatically detect the salient region centered on the decoration. Then a new descriptor, called Blob-SIFT, is proposed to collect signatures only in the relevant areas and characterize the decoration for classification. This approach, adapted to each sherd, both reduces the mass of data significantly and improves classification rates. We also use deep learning, and propose a hybrid approach combining local features extracted by Blob-SIFT with global features provided by deep learning to increase classification performance.
Guénard, Jérôme. "Synthèse de modèles de plantes et reconstructions de baies à partir d’images". Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0101/document.
Plants are essential elements of our world, and 3D plant models are necessary to create realistic virtual environments. Mature computer vision techniques allow the reconstruction of 3D objects from images; however, due to the complexity of plant topology, dedicated methods for generating 3D plant models must be devised. This thesis is divided into two parts. The first part focuses on the modeling of biologically plausible plants from a single image. We propose to generate a 3D model of a plant using an analysis-by-synthesis method that considers both a priori information about the plant species and a single image. First, a dedicated 2D skeletonisation algorithm generates possible branching structures from the foliage segmentation. Then we build a 3D generative model based on a parametric model of branching systems that takes botanical knowledge into account; the resulting skeleton follows the hierarchical organisation of natural branching structures. By varying the parameter values of the generative model (main branching structure of the plant and foliage), we produce a series of candidate models. A Bayesian model optimizes a posterior criterion composed of a likelihood function, which measures the similarity between the image and the reprojected 3D model, and a prior probability measuring the realism of the model. After modeling the branching systems and foliage, we propose to model the fruits. As we mainly worked on vines, we propose a method for reconstructing a grape cluster from at least two views, each berry being modeled as an ellipsoid of revolution; the method can be adapted to any type of fruit with a shape similar to a quadric of revolution. The second part of this thesis focuses on the reconstruction of quadrics of revolution from one or several views, a very classical problem in computer vision.
First, we recall the necessary background in projective geometry, quadrics and computer vision, and we review existing methods for the reconstruction of quadrics or, more generally, quadratic surfaces. A first algorithm identifies the images of the principal foci of a quadric of revolution from a calibrated view (that is, one for which the intrinsic parameters of the camera are given). We then show how to use this result to reconstruct, with a linear triangulation scheme, any quadric of revolution from at least two views. Finally, we show that the 3D pose of a given quadric of revolution can be derived from a single occluding contour. We evaluate the performance of our methods and show some possible applications.
Ruggieri, Vito Giovanni. "Analyse morphologique des bioprothèses valvulaires aortiques dégénérées par segmentation d’images TDM". Rennes 1, 2012. https://ecm.univ-rennes1.fr/nuxeo/site/esupversions/2be5652f-691e-4682-a0a0-e8db55bb95d9.
The aim of the study was to assess the feasibility of CT-based 3D analysis of degenerated aortic bioprostheses in order to make their morphological assessment easier. This could be helpful during regular follow-up and for case selection, improved planning and mapping of valve-in-valve procedures. The challenge was the enhancement of the leaflets in highly noisy CT images. Contrast-enhanced ECG-gated CT scans were performed in patients with degenerated aortic bioprostheses before reoperation (in-vivo images). Different methods for noise reduction were tested and proposed, and 3D reconstruction of the bioprosthesis components was achieved using stick-based region segmentation methods. After reoperation, the segmentation methods were applied to CT images of the explanted prostheses (ex-vivo images). Noise reduction obtained with an improved stick filter showed the best results in terms of signal-to-noise ratio compared to anisotropic diffusion filters. All segmentation methods applied to the in-vivo images allowed 3D reconstruction of the bioprosthetic leaflets. CT images of the explanted bioprostheses were also processed and used as a reference, and qualitative analysis revealed a good concordance between the in-vivo images and the alterations of the bioprostheses. Results from the different methods were compared by means of volumetric criteria and discussed. ECG-gated CT images of aortic bioprostheses need preprocessing to reduce noise and artifacts in order to enhance the prosthetic leaflets, and stick-based region segmentation seems to provide an interesting approach for the morphological characterization of degenerated bioprostheses.
Bonneau, Stephane. "Chemins minimaux en analyse d’images : nouvelles contributions et applications à l’imagerie biologique". Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090062.
First introduced in image analysis to globally minimize the geodesic active contour functional, minimal path techniques are robust tools for extracting open and closed contours from images. Minimal paths are computed by solving the Eikonal equation on a discrete grid with an efficient algorithm called Fast Marching. In this thesis, we present novel approaches based on minimal paths; their interest is illustrated on biological images. The thesis consists of three parts. In the first part, we review the relevant literature on boundary-based deformable models and minimal path techniques. In the second part, we propose a new approach for automatically detecting and tracking, in sequences of 2D fluorescence images, punctual objects which are intermittently visible. The trajectories of moving objects, considered as minimal paths in a spatio-temporal space, are retrieved using a perceptual grouping approach based on front propagation in the 2D+T volume. The third part addresses the problem of surface extraction in 3D images. First, we introduce a front propagation approach to distribute a set of points on a closed surface. Then we propose a method to extract a surface patch from a single point by constructing a dense network of minimal paths. We finally present an extension of this method to extract a closed surface, in a fast and robust manner, from a few points lying on the surface.
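Continuous minimal paths are computed with Fast Marching on the Eikonal equation; a self-contained discrete stand-in is Dijkstra's algorithm on a cost grid (this simplification and the toy "vessel" image are mine, not the thesis' method, but the peeling-off of the lowest-cumulative-cost front is the same idea):

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    # Dijkstra on a 4-connected grid: a discrete analogue of Fast Marching
    # on |grad U| = cost followed by gradient descent on the arrival map U.
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    parent = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                parent[ny, nx] = (y, x)
                heapq.heappush(heap, (dist[ny, nx], (ny, nx)))
    # Backtrack from end to start along the stored parents.
    path, node = [end], end
    while node != start:
        node = parent[node]
        path.append(node)
    return path[::-1]

# A bright, low-cost horizontal "filament" in a high-cost background:
cost = np.full((9, 9), 10.0)
cost[4, :] = 1.0
path = minimal_path(cost, (4, 0), (4, 8))
```

With the cost derived from image intensities (low on a contour, high elsewhere), the extracted path hugs the structure between the two endpoints, which is exactly how minimal paths recover open contours and, in 2D+T, particle trajectories.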
Huart, Jérémy. "Extraction et analyse d’objets-clés pour la structuration d’images et de vidéos". Grenoble INPG, 2007. http://www.theses.fr/2007INPG0017.
Pełny tekst źródła
The compact description of image and video content is currently a difficult task. We are interested in the objects that make up this content because of their representative power. After a review of the state of the art, this thesis presents a local segmentation method based on the irregular graph pyramid algorithm, which allows us to extract, using low-level features, regions of interest comparable to semantic objects. This method is used to precisely cut out objects from still images, first in an interactive environment and then in an entirely automatic one. Motion estimation allows us to extend the process to videos by extracting the foreground entities from every frame. Filtering and clustering of these entities allow us to retain only the most representative of each real object in the shot. These representations are called key-objects and key-views. The quality of the experimental results allows us to propose some future applications of our methods
Burte, Victor. "Étude des stratégies de mouvement chez les parasitoïdes du genre Trichogramma : apports des techniques d’analyse d’images automatiques". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4223/document.
Pełny tekst źródła
Parasitoids of the genus Trichogramma are oophagous micro-hymenoptera widely used as biological control agents. My PhD deals with the phenotypic characterization of this auxiliary's movement strategies, specifically the movements involved in the exploration of space and the search for host eggs. These phenotypes are of great importance in the life cycle of Trichogramma, and are also traits of interest for evaluating their effectiveness in biological control programs. Since Trichogramma are very small organisms (less than 0.5 mm) and difficult to observe, the study of their movement can take advantage of technological advances in image acquisition and automatic image analysis. This is the strategy I followed, combining a methodological development component and an experimental component. In a first, methodological part, I present the three main types of image analysis methods that I used and helped to develop during my thesis. In a second part, I present three applications of these methods to the study of the movement of Trichogramma. First, we characterized in the laboratory the orientation preferences (phototaxis, geotaxis and their interaction) during egg laying in 22 Trichogramma strains belonging to 6 species. Since this type of study requires counting a large number of eggs (healthy and parasitized), a new dedicated tool was developed in the form of an ImageJ/FIJI plugin made available to the community. This flexible plugin automates and speeds up the tasks of counting and evaluating parasitism rates, making larger-scale screenings possible. A great variability could be highlighted within the genus, including between strains of the same species. This suggests that, depending on the plant layer to be protected (grass, shrub, tree), it would be possible to select Trichogramma strains to optimize their exploitation of the targeted area. Second, we characterized the exploration strategies (velocities, trajectories, ...) 
of a set of 22 strains from 7 Trichogramma species to look for traits specific to each strain or species. I implemented a method for tracking groups of Trichogramma on video recorded over short time scales, using the Ctrax software and R scripts. The aim was to develop a protocol for high-throughput characterization of the movement of Trichogramma strains and to study the variability of these traits within the genus. Finally, we conducted a study of the propagation dynamics of groups of Trichogramma of the species T. cacoeciae, developing an innovative experimental device to cover temporal and spatial scales larger than those usually imposed by laboratory constraints. Through the use of pictures taken at very high resolution and low frequency, and a dedicated analysis pipeline, the diffusion of individuals can be followed in a tunnel more than 6 meters long over a whole day. In particular, I was able to identify the effect of population density as well as of the distribution of resources on the propagation dynamics (diffusion coefficient) and the parasitism efficiency of the tested strain
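The diffusion coefficient mentioned above can be read off tracking data via the mean squared displacement, which for one-dimensional diffusion grows as MSD(t) ≈ 2Dt. The sketch below fits that slope on simulated walks; it is a generic illustration, not the thesis pipeline built on Ctrax and R.

```python
import random

def diffusion_coefficient(tracks):
    """Estimate D from mean squared displacement: MSD(t) ~ 2*D*t in 1D,
    fitted by a least-squares slope through the origin."""
    n_steps = len(tracks[0]) - 1
    ts, msds = [], []
    for t in range(1, n_steps + 1):
        msd = sum((tr[t] - tr[0]) ** 2 for tr in tracks) / len(tracks)
        ts.append(t)
        msds.append(msd)
    slope = sum(t * m for t, m in zip(ts, msds)) / sum(t * t for t in ts)
    return slope / 2.0

# Simulated 1D random walks with unit steps: the true D is 0.5.
rng = random.Random(0)
tracks = [[0] for _ in range(500)]
for tr in tracks:
    for _ in range(100):
        tr.append(tr[-1] + rng.choice((-1, 1)))

D = diffusion_coefficient(tracks)
```

With real tunnel data, `tracks` would hold per-individual positions along the tunnel axis sampled at the image acquisition frequency.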
Kieu, Van Cuong. "Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques". Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS029/document.
Pełny tekst źródła
In the last two decades, the increase in document image digitization projects has resulted in scientific effervescence for conceiving document image processing and analysis algorithms (handwriting recognition, document structure analysis, spotting, indexing and retrieval of graphical elements, etc.). A number of successful algorithms are based on learning (supervised, semi-supervised or unsupervised). In order to train such algorithms and to compare their performances, the document image analysis community needs many publicly available annotated document image databases. Their contents must be exhaustive enough to be representative of the possible variations in the documents to process / analyze. Creating real document image databases requires an automatic or a manual annotation process. The performance of an automatic annotation process depends on the quality and completeness of these databases, and therefore annotation remains largely manual. The manual process, however, is complicated, subjective, and tedious. To overcome such difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to be more attractive. Such processes significantly reduce the cost and subjectivity of annotation, but difficulties still exist. For example, transcription and text-line alignment have to be carried out manually. Since the 1990s, alternative document image generation approaches have been proposed, including the generation of semi-synthetic document images mimicking real ones. Semi-synthetic document image generation allows creating, rapidly and cheaply, benchmarking databases for evaluating the performances of, and training, document processing and analysis algorithms. 
In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability) funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we investigate new degradation models or adapt existing ones to ancient documents, such as a bleed-through model, a distortion model, a character degradation model, etc. Second, we apply these degradation models to generate semi-synthetic document image databases for performance evaluation (e.g. the ICDAR 2013 and GREC 2013 competitions) or for performance improvement (by re-training a handwriting recognition system, a segmentation system, and a binarisation system). This research work opens many collaboration opportunities with other researchers to share our experimental results with the scientific community. This collaborative work also helps us to validate our degradation models and to prove the efficiency of semi-synthetic document images for performance evaluation and re-training
Bricola, Jean-Charles. "Estimation de cartes de profondeur à partir d’images stéréo et morphologie mathématique". Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM046/document.
Pełny tekst źródła
In this thesis, we introduce new approaches dedicated to the computation of depth maps associated with a pair of stereo images. The main difficulty of this problem resides in the establishment of correspondences between the two stereoscopic images. Indeed, it is difficult to ascertain the relevance of matches occurring in homogeneous areas, whilst matches are infeasible for pixels occluded in one of the stereo views. In order to handle these two problems, our methods proceed in two steps. First, we search for reliable depth measures by comparing the two images of the stereo pair with the help of their associated segmentations. The analysis of image superimposition costs, on a regional basis and across multiple scales, allows us to perform relevant cost aggregations, from which we deduce accurate disparity measures. Furthermore, this analysis facilitates the detection of the areas of the reference image which are potentially occluded in the other image of the stereo pair. Second, an interpolation mechanism is devoted to the estimation of depth values where no correspondence could be established. The manuscript is divided into two parts: the first allows the reader to become familiar with the problems and issues frequently encountered when analysing stereo images. A brief introduction to morphological image processing is also provided. In the second part, our algorithms for the computation of depth maps are introduced, detailed and evaluated
Durand, Stan. "Taille et forme des particules des constituants des supports de culture horticoles. Relations avec leurs propriétés physiques". Electronic Thesis or Diss., Rennes, Agrocampus Ouest, 2023. http://www.theses.fr/2023NSARC168.
Pełny tekst źródła
In soilless culture, wise water management is necessary to increase crop yield. The water and air retention properties of horticultural substrates are closely linked to the morphology of the particles, which determines their arrangement and thereby the pore space that makes up at least 85% of the volume. The detailed morphology of the particles has never really been studied for growing media, partly because the analysis is complex, due to the great diversity of particle sizes and shapes. Only sieving has been used to characterize the particles; however, this method has many limitations (inaccurate, not very informative, not adapted to all particle shapes). In order to detail the links between physical properties and particle morphology, the presented research relies on the use of dynamic image analysis, offering more precise and detailed results. Various measurements of particle size distributions and physical properties on a wide variety of materials have been performed. The results reveal a very large diversity of particle size and shape within each material. The morphology of the particles can be summarized by their circularity and length. The smaller the particle size, the more fine pores the growing medium has, the more water it retains and, conversely, the less air. The evolution of the material structure is also impacted by finer particles. Finally, the mean length of the particles is a good estimator of a material's physical properties. This work gives growing media manufacturers keys to better design their materials, and encourages characterizing their physical properties by studying particle morphology
Arbelot, Benoit. "Transferts d'apparence en espace image basés sur des propriétés texturelles". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM018/document.
Pełny tekst źródła
Image-space appearance manipulation techniques are widely used in various domains such as photography, biology, astronomy or the performing arts. An image's appearance depends on its colors and texture, but also on perceived 3D information such as shapes, materials and illumination. These characteristics also create a specific look and feel for the image, which is likewise part of its appearance. The goal of image-space manipulation techniques is to modify colors and textures as a means to alter perceived shapes, illumination and materials, and thereby possibly alter the image's look and feel. Appearance transfer methods are a specific type of manipulation technique aiming to make the process more intuitive by automatically computing the image modification. In order to do so, they use an additional user-provided image depicting the desired appearance. In this thesis, we study image-space appearance transfer based on textural properties. Since textures are an integral part of image appearance, guiding appearance transfers with textural information is an attractive approach. We first present a texture descriptor filtering framework to better preserve image edges and texture transitions in the texture analysis. We then use this framework, coupled with different texture descriptors, to apply local texture-guided color transfer, colorization and texture transfer
Retornaz, Thomas. "Détection de textes enfouis dans des bases d’images généralistes : un descripteur sémantique pour l’indexation". Paris, ENMP, 2007. http://www.theses.fr/2007ENMP1511.
Pełny tekst źródła
Multimedia databases, both personal and professional, are continuously growing, and the need for automatic solutions is becoming pressing. The effort devoted by the research community to content-based image indexing is also growing, but the semantic gap is difficult to cross: the low-level descriptors used for indexing are not efficient enough for ergonomic manipulation of big and generic image databases. The text present in a scene is usually linked to the image's semantic context and constitutes a relevant descriptor for content-based image indexing. In this thesis we present an approach to automatic detection of text in natural scenes, which can handle text of different sizes, orientations, and backgrounds. The system uses a non-linear scale space based on the ultimate opening operator (a morphological numerical residue). In a first step, we study the action of this operator on real images, and propose solutions to overcome its intrinsic limitations. In a second step, the operator is used in a text detection framework which additionally contains various text categorisation tools. The robustness of our approach is proven on two different datasets. First, we took part in the ImagEval evaluation campaign, and our approach was ranked first in the text localisation contest. Second, we produced results (using the same framework) on the free ICDAR dataset; the results obtained are comparable with those of the state of the art. Lastly, a demonstrator was built for EADS. Because of confidentiality, this work could not be integrated into this manuscript
Jiang, Zhifan. "Évaluation des mobilités et modélisation géométrique du système pelvien féminin par analyse d’images médicales". Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10003/document.
Pełny tekst źródła
Better treatment of female pelvic mobility disorders has a social impact, particularly affecting elderly women. It is in this context that this thesis focuses on the development of methods in medical image analysis for the evaluation of pelvic mobility and the geometric modeling of the pelvic organs. For this purpose, we provide solutions based on the registration of deformable models on Magnetic Resonance Images (MRI). The resulting methods are able to detect the shape and quantify the movement of a part of the organs and to reconstruct their surfaces from patient-specific MRI. This work facilitates the simulation of the behavior of the pelvic organs using the finite element method. The objective of the developed tools is to help better understand the mechanisms of the pathologies. They will ultimately allow better prediction of the presence of certain diseases, as well as making surgical procedures more accurate and personalized
Lasmar, Nour-Eddine. "Modélisation stochastique pour l’analyse d’images texturées : approches Bayésiennes pour la caractérisation dans le domaine des transformées". Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14639/document.
Pełny tekst źródła
In this thesis we study the statistical modeling of textured images using multi-scale and multi-orientation representations. Based on the results of studies in neuroscience assimilating the human perception mechanism to a selective spatial frequency scheme, we propose to characterize textures by probabilistic models of subband coefficients. Our contributions in this context consist firstly in the proposition of probabilistic models taking into account the leptokurtic nature and the asymmetry of the marginal distributions associated with textured content. First, to model the marginal statistics of subbands analytically, we introduce the asymmetric generalized Gaussian model. Second, we propose two families of multivariate models to take into account the dependencies between subband coefficients. The first family comprises the spherically invariant processes, which we characterize using the Weibull distribution. The second family is that of copula-based multivariate models. After determining the copula characterizing the dependence structure adapted to the texture, we propose a multivariate extension of the asymmetric generalized Gaussian distribution using a Gaussian copula. All proposed models are compared quantitatively using both univariate and multivariate statistical goodness-of-fit tests. Finally, the last part of our study concerns the experimental validation of the performance of the proposed models through texture-based image retrieval. To do this, we derive closed-form metrics measuring the similarity between the introduced probabilistic models, which we believe is the third contribution of this work. A comparative study is conducted to compare the proposed probabilistic models to those of the state of the art
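The asymmetric generalized Gaussian density referred to above is a generalized Gaussian with distinct left and right scale parameters, so it can capture both the heavy tails and the skewness of subband histograms. A minimal sketch (the parameter names `alpha_l`, `alpha_r`, `beta` are ours):

```python
import math

def agg_pdf(x, mode=0.0, alpha_l=1.0, alpha_r=1.0, beta=2.0):
    """Asymmetric generalized Gaussian density: shape parameter beta
    controls tail weight, alpha_l / alpha_r the left / right spread."""
    norm = beta / ((alpha_l + alpha_r) * math.gamma(1.0 / beta))
    if x < mode:
        return norm * math.exp(-(((mode - x) / alpha_l) ** beta))
    return norm * math.exp(-(((x - mode) / alpha_r) ** beta))
```

With `alpha_l == alpha_r` the density reduces to the ordinary generalized Gaussian; the fraction of probability mass to the right of the mode is `alpha_r / (alpha_l + alpha_r)`, which is what makes the model able to fit skewed marginals.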
Fiot, Jean-Baptiste. "Méthodes mathématiques d’analyse d’image pour les études de population transversales et longitudinales". Thesis, Paris 9, 2013. http://www.theses.fr/2013PA090053/document.
Pełny tekst źródła
In medicine, large-scale population analyses aim to obtain statistical information in order to better understand diseases, identify their risk factors, develop preventive and curative treatments and improve the quality of life of the patients. In this thesis, we first introduce the medical context of Alzheimer's disease and recall some concepts of statistical learning and the challenges that typically occur when it is applied in medical imaging. The second part focuses on cross-sectional studies, i.e. at a single time point. We present an efficient method to classify white matter lesions based on support vector machines. Then we discuss the use of manifold learning techniques for image and shape analysis. Finally, we present extensions of Laplacian eigenmaps to improve the low-dimensional representations of patients using the combination of imaging and clinical data. The third part focuses on longitudinal studies, i.e. between several time points. We quantify the hippocampus deformations of patients via the large deformation diffeomorphic metric mapping framework to build disease progression classifiers. We introduce novel strategies and spatial regularizations for the classification and identification of biomarkers
Morio, Jérôme. "Analyse d’images PolInSAR à l’aide de techniques statistiques et issues de la théorie de l’information". Aix-Marseille 3, 2007. http://www.theses.fr/2007AIX30052.
Pełny tekst źródła
High-resolution airborne SAR sensors like RAMSES, operated by the French Aerospace Lab (ONERA), are able to acquire multicomponent PolInSAR images with polarimetric and/or interferometric information on the scene illuminated by the radar. This type of image thus notably has environmental and agricultural applications (crop monitoring, forest height estimation). The complexity of PolInSAR images requires the implementation of elaborate methods based on statistics and on information theory (an image partition technique based on the minimization of stochastic complexity, Shannon entropy, the Bhattacharyya distance) in order to estimate the contributions of radiometry, polarimetry and interferometry to soil characterization and to determine the system components that bring the most useful information depending on the application
Aknoun, Sherazade. "Analyse quantitative d’images de phase obtenues par interféromètrie à décalage quadri-latéral. Applications en biologie". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4358/document.
Pełny tekst źródła
The aim of this thesis, dedicated to the study and quantitative analysis of phase images obtained by quadri-wave lateral shearing interferometry, is to characterize a metrological tool and three proposed applications of it. This work was done in collaboration between the Institut Fresnel (Marseille, France) and the Phasics company (Palaiseau, France), and continues that of Pierre Bon, who was in charge of applying this technique to microscopy. This interferometric technique, developed by Phasics for optical metrology and laser characterization, allows recording complex electromagnetic field maps through a wavefront measurement. By using it in the microscope image plane, one can obtain intensity and optical path difference images of a semi-transparent biological sample. This technique is now considered a new quantitative phase contrast technique. The first part of this manuscript is a state of the art of quantitative microscopy techniques. The issues of quantification and its meanings in the framework of different fluorescence- and phase-based techniques are discussed. A description of the technique that is used and its comparison with similar phase techniques is given. The measurement, under the projective approximation, is studied, leading to different variables. We show different applications concerning isotropic elements in a first part and anisotropic elements in a second one. We show how this measurement is transposed to the third dimension, allowing three-dimensional imaging and complete reconstruction of refractive index maps of biological samples
Anger, Jérémy. "Une exploration du déflouage d’images et vidéos : les détails qui font la différence". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN019.
Pełny tekst źródła
This thesis studies the problem of image and video blur and its removal. In the first part we focus on the restoration of image bursts, in particular for deblurring and super-resolution. First we study Fourier Burst Accumulation, which efficiently fuses the frames temporally by a weighted average in the Fourier domain. Then we show that the recent advances in satellite design allow increasing the spatial resolution using multi-frame super-resolution algorithms. We propose a method based on a spline interpolation model and quantify the gain in resolution. We successfully apply the method to raw SkySat image bursts lent by Planet. In the second part we focus on the non-blind deconvolution problem. While most methods assume an over-simplistic image formation model, we propose to explicitly handle saturation, quantization, and gamma correction, with considerable improvement of the results. In the third part we tackle the difficult problem of blind deblurring, where the blur kernel is not known. First we propose an anatomy of the Goldstein and Fattal method, which models statistical irregularities in the power spectrum of blurred natural images in order to estimate a blur kernel. Then we analyze a blur kernel estimation method that uses an L0 prior on the image gradients. While the method performs well in ideal settings, we show that its performance degrades rapidly under high-noise conditions. To cope with this issue, we propose improvements of the method in order to handle high noise levels while maintaining its efficiency. The proposed approach yields results that are equivalent to those obtained with computationally far more demanding methods. Finally we propose to quantify the sharpness of images from the PlanetScope constellation in order to discard low-quality images or deconvolve blurry ones
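The Fourier Burst Accumulation fusion step can be sketched in one dimension: at each frequency, frames are averaged with weights proportional to a power of their spectral magnitude, so frequencies where a frame was least attenuated by blur dominate. The actual method operates on 2D frames; the naive DFT and the weight exponent `p` here are illustrative simplifications.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def fourier_burst_accumulation(frames, p=11):
    """Fuse a burst by a frequency-wise weighted average: weights are
    |F_i(k)|^p, favouring the least-blurred frame at each frequency."""
    spectra = [dft(f) for f in frames]
    n = len(frames[0])
    fused = []
    for k in range(n):
        mags = [abs(s[k]) ** p for s in spectra]
        total = sum(mags) or 1.0  # all-zero frequency: keep zero
        fused.append(sum(m * s[k] for m, s in zip(mags, spectra)) / total)
    return [c.real for c in idft(fused)]
```

Fusing identical frames returns the frame itself, and as `p` grows the weighted average approaches a per-frequency maximum-magnitude selection.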
Ramadout, Benoit. "Capteurs d’images CMOS à haute résolution à Tranchées Profondes Capacitives". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10068.
Pełny tekst źródła
CMOS image sensors have shown a dramatic reduction of pixel pitch in the last few years. However, pitch shrinking increasingly faces crosstalk and reduction of the pixel signal, and new architectures are now needed to overcome those limitations. Our pixel with Capacitive Deep Trench Isolation and Vertical Transfer Gate (CDTI+VTG) has been developed in this context. Innovative integration of polysilicon-filled deep trenches allows high-quality pixel isolation, a vertically extended photodiode and deep vertical transfer capability. First, specific process steps were developed. In parallel, a thorough study of pixel MOS transistors was carried out. We showed that capacitive trenches can also be operated as extra lateral gates, which opens promising applications for a multi-gate transistor compatible with CMOS-bulk technology. Finally, a 3-Mpixel demonstrator integrating 1.75×1.75 μm² pixels was realized in a CMOS 120 nm technology. Pixel performance could be measured and exploited. In particular, a low dark current level could be obtained thanks to the electrostatic effect of the capacitive isolation trenches
Dufour, John-Eric. "Mesures de forme, de déplacement, et de paramètres mécaniques parstéréo-corrélation d’images isogéométrique". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLN004/document.
Pełny tekst źródła
This thesis is dedicated to the measurement of 3D shapes and 3D kinematic fields on surfaces, and to the identification of mechanical properties from digital image correlation measurements. This optical method uses cameras as measurement tools. For this reason, a study of the classical camera models is performed, and the digitization of an image from a continuous to a discrete, pixel-based formalism is described. Specific attention is dedicated to optical distortions, and a method based on digital image correlation to evaluate these distortions is developed. A new method for measuring 3D shapes and 3D displacement fields on surfaces using stereo-correlation is introduced. A numerical description of the observed object is used as a support to perform the correlation. This leads to a global approach to stereo-correlation. It can be written in a generic form or, in particular, applied to NURBS (Non-Uniform Rational B-Splines) surfaces. The displacement field is then expressed in a NURBS formalism which is completely consistent with the geometrical model used to describe the observed shape. Measurements are validated using prescribed motions on a Bézier patch. The feasibility of such a technique in several industrial cases is then studied, with for example the measurement of the displacement of a composite part of a landing gear under mechanical loading. Finally, from this isogeometric formulation of full-field measurement, a study of the identification of elastic properties is performed. Two methods inspired by existing approaches are proposed, using full-field measurements and numerical simulations in a common isogeometric formalism to identify the parameters of an isotropic linear elastic constitutive law on both a numerical test case and a uniaxial tensile test
Hostache, Renaud. "Analyse d’images satellitaires d’inondations pour la caractérisation tridimensionnelle de l’alea et l’aide à la modélisation hydraulique". Paris, ENGREF, 2006. https://pastel.archives-ouvertes.fr/pastel-00002016.
Pełny tekst źródła
The thesis aims at pushing methods of flood satellite image analysis beyond 2D flood area delineation in order to estimate water levels and to help hydraulic modelling. Based on the work of Raclot (2003) with aerial photographs, which provides ±20 cm mean uncertainty, the water level estimation method uses satellite RADAR images of floods and a fine Digital Elevation Model (DEM). The method is composed of two steps: i) flood cartography and analysis of the hydraulic relevance of the image for water level estimation; ii) fusion of the relevant information resulting from the image with a fine DEM, constraining the water levels extracted from the image by concepts of coherence with respect to hydraulic flow across a plain. It provides water level estimates with a ±38 cm mean uncertainty for a RADARSAT-1 image of a Moselle flood (France, 1997). In addition, validation work with an ENVISAT image of an Alzette river flood (Luxembourg, 2003) allowed us to calculate a Root Mean Square Error of 13 cm on the water level estimates. To help hydraulic modelling, the PhD aims at reducing equifinality thanks to satellite images of floods. To meet this aim, a "traditional" calibration step based on hydrographs is complemented by a comparison between simulation results and flood extents or water levels extracted from images. To deal with calibration uncertainties, Monte Carlo simulations have been used. In perspective, the results of the thesis imply great benefits for flood evolution forecasting after acquisition of flood satellite images, because the use of these images as initial conditions or calibration data provides better-constrained models
Shen, Kaikai. "Automatic segmentation and shape analysis of human hippocampus in Alzheimer's disease". Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOS072/document.
Pełny tekst źródła
The aim of this thesis is to investigate the shape change of the hippocampus due to atrophy in Alzheimer's disease (AD). To this end, specific algorithms and methodologies were developed to segment the hippocampus from structural magnetic resonance (MR) images and to model variations in its shape. We use a multi-atlas segmentation propagation approach for the segmentation of the hippocampus, which has been shown to obtain accurate parcellation of brain structures. We developed a supervised method to build a population-specific atlas database by propagating the parcellations from a smaller generic atlas database. Well-segmented images are inspected and added to the set of atlases, so that the segmentation capability of the atlas set may be enhanced. The population-specific atlases are evaluated in terms of the agreement among the propagated labels when segmenting new cases. Compared with generic atlases, the population-specific atlases obtain a higher agreement when dealing with images from the target population. Atlas selection is used to improve segmentation accuracy. In addition to the conventional selection by image similarity ranking, atlas selection based on maximum marginal relevance (MMR) re-ranking and on a least angle regression (LAR) sequence is developed. By taking the redundancy among atlases into consideration, diversity criteria are shown to be more efficient for atlas selection, which is applicable in situations where the number of atlases to be fused is limited by the computational resources. Given the segmented hippocampal volumes, statistical shape models (SSMs) of the hippocampi are built on the samples to model the shape variation among the population. The correspondence across the training samples of hippocampi is established by a groupwise optimization of the parameterized shape surfaces. 
The spherical parameterizations of the hippocampal surfaces are flattened to facilitate the reparameterization and interpolation. The reparameterization is regularized by a viscous fluid model, which is solved by a fast implementation based on the discrete sine transform. In order to use the hippocampal SSM to describe the shape of an unseen hippocampal surface, we developed a shape parameter estimator based on the expectation-maximization iterative closest point (EM-ICP) algorithm. A symmetric data term is included to achieve the inverse consistency of the transformation between the model and the shape, which gives a more accurate reconstruction of the shape from the model. The shape prior modeled by the SSM is used in the maximum a posteriori estimation of the shape parameters, which is shown to enforce smoothness and avoid over-fitting. In the study of the hippocampus in AD, we use the SSM to model the hippocampal shape change between healthy control subjects and patients diagnosed with AD. We identify the regions affected by atrophy in AD by assessing the spatial difference between the control and AD groups at each corresponding landmark. Localized shape analysis is performed on the regions exhibiting significant inter-group difference, which is shown to improve the discrimination ability of the principal component analysis (PCA) based SSM. The principal components describing the localized shape variability among the population are also shown to display stronger correlation with the decline of episodic memory scores linked to the pathology of the hippocampus in AD
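The PCA-based SSM machinery referred to above can be sketched with a single mode of variation on invented landmark data: a mean shape plus a dominant direction found by power iteration on the sample covariance, onto which an unseen shape is projected and reconstructed. This is a toy illustration, not the hippocampal pipeline.

```python
def pca_shape_model(shapes):
    """One-mode statistical shape model: mean shape plus the dominant
    variation direction, via power iteration on the sample covariance."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[i] for s in shapes) / n for i in range(d)]
    centered = [[s[i] - mean[i] for i in range(d)] for s in shapes]

    def cov_mul(v):  # apply (X^T X / n) v without forming the covariance
        proj = [sum(c[i] * v[i] for i in range(d)) for c in centered]
        return [sum(p * c[i] for p, c in zip(proj, centered)) / n
                for i in range(d)]

    v = [float(i + 1) for i in range(d)]  # generic starting vector
    for _ in range(200):                  # power iteration
        w = cov_mul(v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# Toy landmark data: a square (x1,y1,...,x4,y4) stretched along one mode.
base = [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
mode = [0.5, 0.0, 0.5, 0.0, -0.5, 0.0, -0.5, 0.0]  # unit-norm stretch mode
shapes = [[b + c * m for b, m in zip(base, mode)] for c in (-1.0, -0.5, 0.5, 1.0)]

mean, v = pca_shape_model(shapes)
# Project an unseen shape onto the model and reconstruct it.
new = [b + 0.8 * m for b, m in zip(base, mode)]
b_coef = sum((x - mu) * vi for x, mu, vi in zip(new, mean, v))
recon = [mu + b_coef * vi for mu, vi in zip(mean, v)]
```

Because the unseen shape lies in the span of the training mode, its one-coefficient reconstruction is exact; with real surfaces the residual measures what the retained modes cannot explain.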
Baudin, Pierre-Yves. "De la segmentation au moyen de graphes d’images de muscles striés squelettiques acquises par RMN". Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2013. http://www.theses.fr/2013ECAP0033/document.
Pełny tekst źródła
Segmentation of magnetic resonance images (MRI) of skeletal striated muscles is of crucial interest when studying myopathies. Understanding diseases, therapeutic follow-up of patients, etc., rely on discriminating the muscles in anatomical MRI images. However, delineating the muscle contours manually is an extremely long and tedious task, and thus often a bottleneck in clinical research. Typical automatic segmentation methods rely on finding discriminative visual properties between objects of interest, accurate contour detection, or clinically interesting anatomical points. Skeletal muscles show none of these features in MRI, making automatic segmentation a challenging problem. In spite of recent advances in segmentation methods, their application in clinical settings remains difficult, and most of the time manual segmentation and correction is still the only option. In this thesis, we propose several approaches for segmenting skeletal muscles automatically in MRI, all related to the popular graph-based Random Walker (RW) segmentation algorithm. The strength of the RW method lies in its robustness to weak contours and its fast, global optimization. Originally, the RW algorithm was developed for interactive segmentation: the user had to pre-segment small regions of the image – called seeds – before running the algorithm, which would then complete the segmentation. Our first contribution is a method for automatically generating and labeling all the appropriate seeds, based on a Markov random field formulation integrating prior knowledge of the relative positions and prior detection of contours between pairs of seeds. A second contribution amounts to incorporating prior knowledge of the shape directly into the RW framework. Such a formulation retains the probabilistic interpretation of the RW algorithm and thus allows the segmentation to be computed by solving a large but simple sparse linear system, as in the original method.
In a third contribution, we develop a learning framework to estimate the optimal set of parameters balancing the contrast term of the RW algorithm and the different existing prior models. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the medical images instead of the optimal probabilistic segmentation, which corresponds to the desired output of the RW algorithm. We overcome this challenge by treating the optimal probabilistic segmentation as a latent variable. This allows us to employ the latent support vector machine (latent SVM) formulation for parameter estimation. All proposed methods are tested and validated on real clinical datasets of MRI volumes of the lower limbs.
Akl, Adib. "Analyse / synthèse de champs de tenseurs de structure : application à la synthèse d’images et de volumes texturés". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0009/document.
Pełny tekst źródła
This work falls within the context of texture synthesis. Aiming to ensure a faithful reproduction of the patterns and variations of orientation of the input texture, a two-stage structure/texture synthesis algorithm is proposed. It consists of synthesizing, in the first stage, the structure layer showing the geometry of the exemplar, represented by the structure tensor field, and then using the resulting tensor field, in the second stage, to constrain the synthesis of the texture layer holding more local variations. An acceleration method based on Gaussian pyramids and parallel computing is then developed. In order to demonstrate the ability of the proposed algorithm to faithfully reproduce the visual aspect of the considered textures, the method is tested on various texture samples and evaluated objectively using first- and second-order statistics of the intensity and orientation fields. The obtained results are of better or equivalent quality to those obtained with algorithms from the literature. A major advantage of the proposed approach is its capacity to successfully synthesize textures in many situations where traditional algorithms fail to reproduce the large-scale patterns. The structure/texture synthesis approach is extended to color texture synthesis. 3D texture synthesis is then addressed and, finally, an extension to the synthesis of textures of specified shape using an imposed texture is carried out, showing the capacity of the approach to generate textures of arbitrary shape while preserving the input texture characteristics.
Rivollier, Séverine. "Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux". Thesis, Saint-Etienne, EMSE, 2010. http://www.theses.fr/2010EMSE0575/document.
Pełny tekst źródła
Minkowski functionals define topological and geometrical measurements of sets, insufficient for characterization because different sets may have the same functionals. Thus, other geometrical and morphometrical shape functionals are used. A shape diagram, defined from two morphometrical functionals, provides a representation allowing the study of set shapes. In quantitative image analysis, these functionals and diagrams are often limited to binary images and computed in a global, monoscale way. General Adaptive Neighborhoods (GANs), simultaneously adaptive with the analyzing scales, the spatial structures and the image intensities, make it possible to overcome these limitations. GAN-based Minkowski functionals are introduced, which allow gray-tone image analysis to be performed in a local, adaptive and multiscale way. The GANs, defined around each point of the spatial support of a gray-tone image, are homogeneous with respect to an analyzing criterion function represented in an algebraic model, according to a homogeneity tolerance. The shape functionals computed on the GAN of each point of the spatial support of the image define the so-called GAN-based shape maps. The map histograms and diagrams provide statistical distributions of the shape of the local structures of the gray-tone image, contrary to the classical histogram, which provides a global distribution of image intensities. The impact of variations of the axiomatic criteria is analyzed through these maps, histograms and diagrams. Thus, multiscale maps are built, defining GAN-based shape functions.
Bayoudh, Meriam. "Apprentissage de connaissances structurelles à partir d’images satellitaires et de données exogènes pour la cartographie dynamique de l’environnement amazonien". Thesis, Antilles-Guyane, 2013. http://www.theses.fr/2013AGUY0671/document.
Pełny tekst źródła
Classical methods for satellite image analysis are inadequate for the current massive data flow. Automating the interpretation of such images therefore becomes crucial for the analysis and management of phenomena changing in time and space that are observable by satellite. This work thus aims at automating land cover cartography from satellite images through an expressive and easily interpretable mechanism, explicitly taking into account the structural aspects of geographic information. It is part of the object-based image analysis framework, and assumes that useful contextual knowledge can be extracted from maps. First, a supervised method for parameterizing a segmentation algorithm is proposed. Second, a supervised classification of geographical objects is presented. It combines machine learning by inductive logic programming with the multi-class rule set intersection approach. These approaches are applied to the cartography of the French Guiana coastline. The results demonstrate the feasibility of the segmentation parameterization, but also its variability as a function of the reference map classes and of the input data. Nevertheless, methodological developments make it possible to consider an operational implementation of such an approach. The results of the supervised object classification show that it is possible to induce expressive classification rules that convey consistent and structural information in a given application context and lead to reliable predictions, with overall accuracy and Kappa values equal to 84.6% and 0.7, respectively. In conclusion, this work contributes to the automation of dynamic cartography from remotely sensed images and proposes original and promising perspectives.
Farbos, Baptiste. "Structure et propriétés de carbones anisotropes par une approche couplant analyse d’image et simulation atomistique". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0331/document.
Pełny tekst źródła
Combined image analysis/synthesis techniques and atomistic simulation methods made it possible to study the nanostructure and nanotexture of anisotropic dense carbons of the highly textured laminar pyrocarbon (PyC) type. Atomic representations of an as-prepared (AP) rough laminar PyC, as well as of a regenerative laminar PyC both AP and after several heat treatments (HT), were reconstructed to better characterize these materials. The models contain nanosized graphene domains connected between them by line defects formed by pairs of rings with 5 and 7 carbons inside layers, and by screw dislocations and fourfold atoms between layers. The most ordered models have larger domains and a lower percentage of connections between the layers. Mechanical and thermal properties predicted from these models are close to those of graphite and increase with the coherence inside layers and the density of connections between layers. Models of polycrystalline graphene were also generated, showing structure and mechanical properties very close to those of the carbon layers extracted from PyCs. The structural reorganization occurring during the HT of such materials was studied: thinning of line defects and vacancy healing were observed. This represents a first step towards the study of the graphitization of PyCs. The reconstruction method was eventually adapted to study the structural evolution of a nuclear-grade graphite during its irradiation by electrons, allowing us to observe how defects are created and propagate during irradiation.
Leclaire, Arthur. "Champs à phase aléatoire et champs gaussiens pour la mesure de netteté d’images et la synthèse rapide de textures". Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015PA05S002/document.
Pełny tekst źródła
This thesis deals with the Fourier phase structure of natural images, and addresses no-reference sharpness assessment and fast texture synthesis by example. In Chapter 2, we present several models of random fields in a unified framework, such as the spot noise model and the Gaussian model, with particular attention to the spectral representation of these random fields. In Chapter 3, random phase models are used to perform by-example synthesis of microtextures (textures with no salient features). We show that a microtexture can be summarized by a small image that can be used for fast and flexible synthesis based on the spot noise model. Besides, we address microtexture inpainting through the use of Gaussian conditional simulation. In Chapter 4, we present three measures of global Fourier phase coherence. Their link with image sharpness is established through a theoretical and practical study. We then derive a stochastic optimization scheme for these indices, which leads to a blind deblurring algorithm. Finally, in Chapter 5, after discussing the possibility of direct phase analysis or synthesis, we propose two non-random phase texture models which allow for the synthesis of more structured textures while still offering simple mathematical guarantees.
Sadi, Ahcène. "Processus de certification de documents utilisant un authentifiant chaotique mesurable comme le Code à Bulles™ par analyse d’images". Caen, 2013. http://www.theses.fr/2013CAEN2090.
Pełny tekst źródła
This thesis focuses on document certification using the Bubble Tag™. The modalities presented by the Bubble Tag™ are similar to those of biometrics. Based on that similarity, we present a new architecture for document authentication systems. In the first part, we were interested in digital image preprocessing, which is a very delicate phase in authentication systems. Mathematical morphology provides a wide range of operators for tackling various problems of image processing. Morphological operators can be defined in terms of algebraic (discrete) sets or as partial differential equations (PDEs). In this context, we decided to work on a new approach to mathematical morphology based on partial difference equations (PdEs) on weighted graphs. This methodology allows us to generalize these two approaches to non-local image processing. We proposed a new class of shock filters based on PdEs, demonstrated their effectiveness on industrial images, and proposed a new approach to segmenting this kind of image using the shock filters. In the second part, we were interested in authentication and identification algorithms for the Bubble Tag™. We presented an overview of the data indexing techniques used in multimedia databases, and proposed an approach that exploits M-trees to provide indexing for a Bubble Tag™ identification system.
Pons, Bernad Gemma. "Aide à l’interprétation d’images radar à ouverture synthétique : analyse conjointe des propriétés géométriques et radiométriques des images SAR". Aix-Marseille 3, 2008. http://www.theses.fr/2008AIX30013.
Pełny tekst źródła
The work of this thesis is part of the research efforts currently being undertaken on segmentation and classification to ease radar image interpretation. Our thesis contributes to this research by proposing a semi-automatic scene analysis approach to assist the interpretation of images acquired by a synthetic aperture radar (SAR). It mainly focuses on the application of segmentation methods to classification and object recognition problems. Its aim is to propose fast and simple methods, easily comprehensible by users who are not experts in image processing. The proposed approach is a two-stage algorithm. First, a SAR image partition is obtained in a non-supervised manner by using a statistical active grid based on the minimization of the stochastic complexity. Then, discriminative features (statistical, geometric and texture parameters) are calculated for each extracted region in order to classify them in a semi-supervised manner. A hierarchical approach is adopted. In practice, the proposed algorithm provides an initial land use classification as well as confidence measures for each region. This initial classification can be used as an aid to image interpretation or as a source of information for further processing.
Alves, Zapata José Rodolfo. "Modélisation des procédés de formage par impulsion magnétique". Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM010/document.
Pełny tekst źródła
Magnetic pulse forming is a technology that has gained interest in the last decades thanks to the increased formability it offers for high-strength, low-weight materials such as aluminum and magnesium alloys. One major complexity of the process lies in the design and study at the workpiece level and in the interaction between the several physical aspects involved: the electromagnetic waves as the source of energy, the thermo-mechanics controlling the strain and stress evolution, as well as the study of fracture and damage under high-speed loading conditions. This work is dedicated to the development of a predictive model and computational tool able to deal with the interaction between electromagnetism and thermo-mechanics in a 3D finite element framework. We introduce the computational aspects of electromagnetism, from the approach selected to include the geometry of the parts down to the coupling with the electric machinery behind the process. This is followed by the computational techniques needed to couple the electromagnetic computation to the thermo-mechanical one, with a special focus on the problem of tracking the displacement of the deformable part within the electromagnetic module. We also introduce aspects more related to the physics of the process, such as the phenomena of elastic spring-back elimination and surface bonding (welding). In the last chapter we present the experimental facilities available at the laboratory, and introduce a methodology for identifying the electric parameters that define the machinery and are needed to perform the simulation.
Ledru, Yohann. "Etude de la porosité dans les matériaux composites stratifiés aéronautiques". Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT048G/document.
Pełny tekst źródła
The manufacturing process of long-fiber-reinforced epoxy matrix composite laminates is divided into several stages, the most critical of which is the polymerization stage. If this stage is not optimized, defects such as voids can occur in the bulk material. The aim of this work is to investigate the void formation and evolution processes in order to improve the quality of thermoset laminates by minimizing the void ratio. Two phenomena causing void formation have been identified. The first is the mechanical entrapment of gas bubbles between prepreg plies during lay-up. The second is thermodynamic: solvents and humidity absorbed by the prepreg during its manufacturing can be evaporated by increasing the temperature. It has then been shown that the permeability of the vacuum bag lay-up, in combination with the vacuum pressure, can favour the washing out of the gas. In parallel, thermo-mechanical and diffusion models are coupled to obtain an accurate prediction of void size as a function of the temperature and pressure applied during polymerization; indeed, these two parameters induce variations of the gas bubble radius inside the resin. The first experimental results seem to validate qualitatively the calculated void size behaviour: the hydrostatic pressure imposed during polymerization plays a very important role in gas bubble shrinkage. Finally, a new experimental setup using image analysis has been developed to measure the volume void ratio as accurately as possible. Under specific conditions, stereology makes it possible to extrapolate 2D results to 3D. Void ratios obtained with this method are in good agreement with acid digestion results. Complementary morphometric studies on void shapes have given new information about the heterogeneous void distribution in the specimens and about the statistical void size distribution as a function of polymerization conditions.
Jahel, Camille. "Analyse des dynamiques des agroécosystèmes par modélisation spatialisée et utilisation d’images satellitaires, Cas d’étude de l’ouest du Burkina Faso". Electronic Thesis or Diss., Paris, AgroParisTech, 2016. http://www.theses.fr/2016AGPT0059.
Pełny tekst źródła
Rural areas of West Africa have seen notable transformations over the last two decades, mainly due to high population growth, development policies in favor of export crops, and the introduction of new cropping practices. The results of these developments are pressure on forestry resources, an evolution of farming systems, a depletion of soils and a saturation of cultivated areas. The number of conflicts over resource access increases, reviving buried ethnic tensions, and the question of food security is raised. In that context, early warning systems have been developed in order to foresee and curb food insecurity by means of hazard analyses. The present work deals with agrarian changes and their mechanisms, in the context of early warning system development. New methodological approaches based on modeling and remote sensing are explored in order to create a retrospective and prospective analysis of the agrarian dynamics of the Tuy province, located in western Burkina Faso. We first focus on the issue of cross-scaling in agro-ecosystem dynamics models, by building a multi-scalar model of past developments. The model uses interaction graphs to simulate processes occurring from the plot scale to the regional scale (crop production, crop rotation and crop area expansion). We show that modelling across scales is achievable without resorting to the methods of aggregation or disaggregation usually applied for this type of study. The model is then used to analyze two aspects of the agrarian dynamics of the Tuy province. The first deals with clearance dynamics in the context of the Malthus vs. Boserup debate concerning the impacts of demographic growth on natural resources.
Prospective scenarios are simulated and their consequences on natural vegetation surfaces are assessed: these scenarios simulate the emigration of part of the population towards other areas, the implementation of protected areas, demographic regulation and an ecological intensification of farming systems. The second aspect concerns the decisional processes of farmers in constituting their crop rotations. The study consists in understanding the important variations in cultivated species observed during the studied period, by analyzing the simulated evolution of the weights of the different determining factors involved in the decisional processes. Finally, we show that the footprints of anthropic processes are explicitly detectable in remote sensing images, by using multi-scalar simulations from the model developed. We then assimilate satellite data into the model in order to re-calibrate it and reinforce its ability to reproduce past dynamics. This last part opens important perspectives concerning the joint use of remote sensing data and agro-ecosystem dynamics models.
Garbez, Morgan. "Construction de l'architecture et des composantes visuelles d'un buisson ligneux d'ornement : le rosier". Thesis, Rennes, Agrocampus Ouest, 2016. http://www.theses.fr/2016NSARB287/document.
Pełny tekst źródła
Shrubs form a key plant model to meet social and environmental concerns. Usually transposed onto the tree model, their architectural development is still ill-known and understudied with respect to visual quality. To identify and anticipate such expectations, the visual quality management of ornamental plants through a multidisciplinary methodology is proposed. It includes the architecture of the plants, with its phenotypic plasticity, and the perception of their visual appearance. On a rose bush, Rosa hybrida L. ‘Radrazz’, this work shows how architectural analysis with its modeling tools, sensory evaluation and image analysis can form a coherent scientific framework to address such a purpose, and be transposed to other taxa. On virtual rose bushes, and real ones exposed to a light gradient, the visual appearance can be characterized objectively by means of sensory tests using rotating plant videos at different stages. The video stand enables a better mental representation of the plant in 3D by the subjects, leading to a more complete and reliable description of the plant's visual appearance, and then to predicting this description through statistically integrated image analysis of multiple plant facets. Some relevant architectural variables, with numerous equivalents, potentially interesting for studying the architectural development of bushes during their life cycle, made it possible to predict and even explain how the visual components were built for a cultivar. For better market responsiveness, this work lays the foundation for drafting interactive decision and innovation support tools.
Cottet‐Rousselle, Cécile. "Mesure par microscopie confocale du métabolisme mitochondrial et du niveau énergétique cellulaire au cours d’épisodes de carences en substrats et/ou en oxygène". Thesis, Paris, EPHE, 2016. http://www.theses.fr/2016EPHE3096/document.
Pełny tekst źródła
Mitochondria form an information hub at the center of cellular metabolism because of their physiological role, which consists in the production of ATP from the degradation of products stemming from our food through the OXPHOS process. However, changes in the functioning of the mitochondria can be responsible for numerous diseases. Among the different forms of metabolic stress leading to mitochondrial dysfunction, ischemia-reperfusion is found in numerous pathological situations. This work aims at developing a methodological approach based on confocal microscopy and image analysis to dissect, at the cell level, the consequences of metabolic stress induced by episodes of substrate deprivation associated or not with hypoxia or anoxia. Having developed an image analysis program based on the « top-hat » method, two approaches were designed to visualize and quantify mitochondrial function. The first, combining TMRM labelling with NADH fluorescence, made it possible to highlight differences in the response to the stress caused by ischemia-reperfusion at the level of the respiratory chain or concerning PTP opening in the four cell types tested: HMEC-1, INS1, RT112 and primary hepatocytes. The second approach consisted in testing the use of biosensors designed to follow the variations of ATP concentration (ATeam) or the activation of AMPK (AMPKAR). The experimental conditions established in this work did not allow us to validate their use.
Walbron, Amaury. "Analyse rapide d’images 3D de matériaux hétérogènes : identification de la structure des milieux et application à leur caractérisation multi-échelle". Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2015/document.
Pełny tekst źródła
Digital simulation is an increasingly widespread tool for composite material design and selection. Indeed, it allows various structures to be generated and tested digitally, more easily and quickly than with real manufacturing and testing processes. Following the choice and fabrication of a virtual material, feedback is needed in order to simultaneously validate the simulation and the fabrication process. With this aim, models similar to the generated virtual structures are obtained by digitizing the manufactured materials. The same simulation algorithms can then be applied, allowing the forecasts to be verified. This thesis therefore deals with the modelling of composite materials from 3D images, in order to rediscover in them the original virtual material. Image processing methods are applied to the images to extract material structure data, i.e. the localization of each constituent, and its orientation if applicable. This knowledge theoretically allows the thermal and mechanical behavior of structures made of the studied material to be simulated. However, accurately representing composites requires, in practice, a very small discretization step. Simulating the behavior of a macroscopic structure therefore needs too many discretization points, and hence too much time and memory. Hence a part of this thesis also focuses on the determination of an equivalent homogeneous material, which, once determined, reduces the computation time of the simulation algorithms.
Harroue, Benjamin. "Approche bayésienne pour la sélection de modèles : application à la restauration d’image". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0127.
Pełny tekst źródła
The main goal of inversion is to reconstruct objects from data. Here, we focus on the special case of image restoration in deconvolution problems. The data are acquired through an altering observation system and additionally distorted by errors. The problem becomes ill-posed due to the loss of information. One way to tackle it is to exploit a Bayesian approach in order to regularize the problem: introducing prior information about the unknown quantities, through stochastic models, offsets this loss. We have to test all the candidate models in order to select the best one. But some questions remain: how do we choose the best model? Which features or quantities should we rely on? In this work, we propose a method to automatically compare and choose the model, based on Bayesian decision theory: objectively compare the models based on their posterior probabilities. These probabilities directly depend on the marginal likelihood or “evidence” of the models. The evidence comes from the marginalization of the joint law with respect to the unknown image and the unknown hyperparameters. This is a difficult integral calculation because of the complex dependencies between the quantities and the high dimension of the image. We therefore have to work with computational methods and approximations. Several methods are on the test stand: the harmonic mean, the Laplace method, discrete integration, Chib's method based on the Gibbs sampler, and power posteriors. Comparing those methods is a significant step towards determining which ones are the most competent for image restoration. As a first lead of research, we focus on the family of Gaussian models with circulant covariance matrices to alleviate some difficulties.
Rohé, Marc-Michel. "Représentation réduite de la segmentation et du suivi des images cardiaques pour l’analyse longitudinale de groupe". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4051/document.
Pełny tekst źródła
This thesis presents image-based methods for the analysis of cardiac motion to enable group-wise statistics, automatic diagnosis and longitudinal studies. This is achieved by combining advanced medical image processing with machine learning methods and statistical modelling. The first axis of this work is to define an automatic method for the segmentation of the myocardium. We develop a very fast registration method based on convolutional neural networks that is trained to learn inter-subject heart registration. Then, we embed this registration method into a multi-atlas segmentation pipeline. The second axis of this work focuses on improving cardiac motion tracking methods in order to define relevant low-dimensional representations. Two different methods are developed: one relying on Barycentric Subspaces built on reference frames of the sequence, and another based on a reduced-order representation of the motion from polyaffine transformations. Finally, in the last axis, we apply the previously defined representations to the problems of diagnosis and longitudinal analysis. We show that these representations encode relevant features allowing the diagnosis of infarcted patients and of Tetralogy of Fallot versus controls, as well as the analysis of the evolution through time of the cardiac motion of patients with either cardiomyopathies or obesity. These three axes form an end-to-end framework for the study of cardiac motion, starting from the acquisition of the medical images to their automatic analysis. Such a framework could be used for diagnosis and therapy planning, in order to improve clinical decision-making with a more personalised computer-aided medicine.