Dissertations / Theses on the topic 'Analyse d'images de document'
Consult the top 50 dissertations / theses for your research on the topic 'Analyse d'images de document.'
Tochon, Guillaume. "Analyse hiérarchique d'images multimodales." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT100/document.
There is a growing interest in the development of adapted processing tools for multimodal images (several images acquired over the same scene with different characteristics). Allowing a more complete description of the scene, multimodal images are of interest in various image processing fields, but their optimal handling and exploitation raise several issues. This thesis extends hierarchical representations, a powerful tool for classical image analysis and processing, to multimodal images in order to better exploit the additional information brought by the multimodality and improve classical image processing techniques. This thesis focuses on three different multimodalities frequently encountered in the remote sensing field. We first investigate the spectral-spatial information of hyperspectral images. Based on an adapted construction and processing of the hierarchical representation, we derive a segmentation which is optimal with respect to the spectral unmixing operation. We then focus on the temporal multimodality and sequences of hyperspectral images. Using the hierarchical representation of the frames in the sequence, we propose a new method to achieve object tracking and apply it to chemical gas plume tracking in thermal infrared hyperspectral video sequences. Finally, we study the sensorial multimodality, i.e. images acquired with different sensors. Relying on the concept of braids of partitions, we propose a novel methodology of image segmentation based on an energetic minimization framework.
Chenoune, Yasmina. "Estimation des déformations myocardiques par analyse d'images." Thesis, Paris Est, 2008. http://www.theses.fr/2008PEST0014/document.
The work presented in this thesis is related to cardiac image processing and the study of the cardiac contractile function, for a better understanding of cardiac physiopathology and diagnosis. We implemented a method for the segmentation of the endocardial walls on standard MRI without tags. We used an approach based on the level set method, with a region-based formulation which gives satisfactory results on healthy and pathological cases. We proposed a practical method for the quantification of segmental deformations in order to characterize myocardial contractility. The method was clinically validated by the assessment of doctors and by comparison with the HARP method on tagged MRI. To improve measurement precision, we proposed an iconic MRI/CT multimodal registration algorithm using the maximization of mutual information. We applied it to the localization of short-axis slices in CT volumes with good results. A prospect of this work is its application to CT sequences with high spatial and temporal resolution.
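As an illustration of the registration criterion mentioned above, the sketch below computes the mutual information between two already-resampled images from their joint histogram. This is a standard formulation, not the author's implementation; the bin count and the surrounding optimization loop that would maximize this value are assumptions.

```python
import numpy as np

def mutual_information(fixed, moving, bins=64):
    """Mutual information between two images, from their joint histogram."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)      # marginal of the moving image
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A registration loop would search the transform parameters (e.g. rigid or
# affine) that maximize this value after resampling the moving image.
```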
Alsheh, Ali Maya. "Analyse statistique de populations pour l'interprétation d'images histologiques." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015PA05S001/document.
During the last decade, digital pathology has improved thanks to advances in image analysis algorithms and computing power. However, diagnosis from histopathology images by an expert remains the gold standard in a considerable number of diseases, especially cancer. This type of image preserves the tissue structures as close as possible to their living state. Thus, it allows the biological objects to be quantified and their spatial organization to be described in order to provide a more specific characterization of diseased tissues. The automated analysis of histopathological images can have three objectives: computer-aided diagnosis, disease grading, and the study and interpretation of the underlying disease mechanisms and their impact on biological objects. The main goal of this dissertation is first to understand and address the challenges associated with the automated analysis of histology images. It then aims at describing the populations of biological objects present in histology images and their relationships using spatial statistics, and at assessing the significance of their differences according to the disease through statistical tests. After a color-based separation of the biological object populations, an automated extraction of their locations is performed according to their types, which can be point or areal data. Distance-based spatial statistics for point data are reviewed and an original function to measure the interactions between point and areal data is proposed. Since it has been shown that the tissue texture is altered by the presence of a disease, local binary pattern methods are discussed and an approach based on a modification of the image resolution to enhance their description is introduced. Finally, descriptive and inferential statistics are applied in order to interpret the extracted features and to study their discriminative power in the application context of animal models of colorectal cancer. This work advocates the measurement of associations between different types of biological objects to better understand and compare the underlying mechanisms of diseases and their impact on the tissue structure. Besides, our experiments confirm that texture information plays an important part in the differentiation of two implemented models of the same disease.
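A minimal sketch of the local binary pattern descriptor discussed above, using scikit-image; the radius, the number of sampling points and the uniform variant are illustrative choices, not those of the thesis.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, radius=2, n_points=16):
    """Histogram of uniform LBP codes: a simple texture signature for a 2D grayscale image."""
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```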
Bertrand, Sarah. "Analyse d'images pour l'identification multi-organes d'espèces végétales." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2127/document.
This thesis is part of the ANR ReVeRIES project, which aims to use mobile technologies to help people better understand their environment and in particular the plants that surround them. More precisely, the ReVeRIES project is based on a mobile application called Folia, developed as part of the ANR ReVeS project and capable of recognising tree and shrub species based on photos of their leaves. This prototype differs from other tools in that it is able to simulate the behaviour of the botanist. In the context of the ReVeRIES project, we propose to go much further by developing new aspects: multimodal species recognition, learning through play and citizen science. The purpose of this thesis is to focus on the first of these three aspects, namely the analysis of images of plant organs for identification. More precisely, we consider the main trees and shrubs, endemic or exotic, found in metropolitan France. The objective of this thesis is to extend the recognition algorithm by taking into account other organs in addition to the leaf. This multimodality is indeed essential if we want the user to learn and practise the different methods of recognition, for which botanists use the variety of organs (i.e. leaves, flowers, fruits and bark). The method used by Folia for leaf recognition is dedicated to that organ, since it simulates the work of a botanist on the leaf, and thus cannot be applied directly to other organs. New challenges therefore emerge, both in terms of image processing and data fusion. The first part of the thesis was devoted to the implementation of image processing methods for the identification of plant species. The identification of tree species from bark images was the first to be studied. The descriptors developed take into account the structure of the bark, inspired by the criteria used by botanists. Fruits and flowers required a segmentation step before their description. A new segmentation method usable on smartphones was developed to cope with the high variability of flowers and fruits. Finally, descriptors were extracted from fruits and flowers after the segmentation step. We decided not to separate flowers and fruits because we showed that a user new to botany does not always know the difference between these two organs on so-called "ornamental" trees (non-fruit trees). For fruits and flowers, prediction is made not only on their species but also on their genus and family, botanical groups reflecting a similarity between these organs. The second part of the thesis deals with the combination of descriptors of the different organs: leaves, bark, fruits and flowers. In addition to basic combination methods, we propose to take into account confusion between species, as well as predictions of membership in botanical taxa higher than the species. Finally, an opening chapter is devoted to the processing of these images by convolutional neural networks. Indeed, deep learning is increasingly used in image processing, particularly for plant organs. In this context, we propose to visualize the learned convolution filters that extract information, in order to make the link between the information extracted by these networks and botanical elements.
Foare, Marion. "Analyse d'images par des méthodes variationnelles et géométriques." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM043/document.
In this work, we study both theoretical and numerical aspects of an anisotropic Mumford-Shah problem for image restoration and segmentation. The Mumford-Shah functional allows both reconstruction of a degraded image and extraction of the contours of the region of interest. Numerically, we use the Ambrosio-Tortorelli approximation to approach a minimizer of the Mumford-Shah functional. It Gamma-converges to the Mumford-Shah functional and also allows contour extraction. However, the minimization of the Ambrosio-Tortorelli functional using standard discretization schemes such as finite differences or finite elements leads to difficulties. We thus present two new discrete formulations of the Ambrosio-Tortorelli functional using the framework of discrete calculus. We use these approaches for image restoration and for the reconstruction of normal vector fields and feature extraction on digital data. We finally study another similar shape optimization problem with Robin boundary conditions. We first prove existence and partial regularity of solutions, and then construct two approximations and demonstrate their Gamma-convergence. Numerical analysis shows once again the difficulties of dealing with Gamma-convergent approximations.
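For reference, the standard (isotropic) Ambrosio-Tortorelli approximation has the form below, where g is the observed image, u the restored image, v an edge-indicator field vanishing near contours, and α, β, ε positive parameters; the anisotropic and discrete-calculus formulations studied in the thesis modify these terms.

```latex
AT_{\varepsilon}(u,v) = \int_{\Omega} (u-g)^2 \, dx
  + \beta \int_{\Omega} v^2 \, |\nabla u|^2 \, dx
  + \alpha \int_{\Omega} \Big( \varepsilon \, |\nabla v|^2 + \frac{(1-v)^2}{4\varepsilon} \Big) \, dx
```

As ε tends to 0, this functional Gamma-converges to the Mumford-Shah functional, and the region where v is close to 0 approximates the contour set.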
Nguyen, Thanh Phuong. "Etude des courbes discrètes : applications en analyse d'images." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10095/document.
In this thesis, we are interested in the study of discrete curves and their applications in image analysis. We have proposed an improvement of curvature estimation based on the circumcircle. This method relies on the notion of blurred segment of width ν and on the decomposition of a curve into a sequence of maximal blurred segments of width ν. Afterwards, we applied this idea in 3D to estimate the discrete curvature and torsion at each point of a 3D curve. Concerning applications, we developed a fast and reliable method to detect dominant points of a 2D curve. A dominant point is a point whose curvature is locally maximal. Dominant points play an important role in pattern recognition. Our method uses one parameter: the width of the maximal blurred segments. Based on this novel method of dominant point detection, we proposed parameter-free methods for polygonal representation, based on a multi-width approach. We are also interested in discrete arcs and circles. A linear method has been proposed for the recognition of arcs and circles. We then developed a new method for segmenting noisy curves into arcs based on a noise detection method. We also proposed a linear method to measure the circularity of closed curves. In addition, we proposed a robust method to decompose a curve into arcs and line segments.
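A minimal sketch of curvature estimation from the circumscribed circle of three curve points (curvature = 1/circumradius); the selection of those points through maximal blurred segments, which is the core of the method above, is not reproduced here.

```python
import numpy as np

def circumcircle_curvature(p1, p2, p3):
    """Curvature 1/R of the circle through three 2D points (0 if collinear)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    # Triangle area from the 2D determinant (avoids degenerate configurations).
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    if area == 0.0:
        return 0.0
    return 4.0 * area / (a * b * c)  # 1/R = 4 * Area / (abc)
```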
Harouna, Seybou Aboubacar. "Analyse d'images couleurs pour le contrôle qualité non destructif." Thesis, Poitiers, 2016. http://www.theses.fr/2016POIT2282/document.
Color is a major criterion for many sectors to identify, compare or simply control the quality of products. This task is generally performed by a human operator through visual inspection. Unfortunately, this method is unreliable and not repeatable due to the subjectivity of the operator. To avoid these limitations, an RGB camera can be used to capture and extract the photometric properties. This method is simple to deploy and permits high-speed control. However, it is very sensitive to metamerism effects. Therefore, reflectance measurement is the more reliable solution to ensure conformity between samples and a reference. Thus, in the printing industry, spectrophotometers are used to measure uniform color patches printed on a lateral band. For a control of the entire printed surface, multispectral cameras are used to estimate the reflectance of each pixel. However, they are very expensive compared to conventional cameras. In this thesis, we study the use of an RGB camera for spectral reflectance estimation in the context of printing. We propose a complete spectral description of the reproduction chain to reduce the number of measurements in the training stages and to compensate for the acquisition limitations. Our first main contribution concerns the consideration of colorimetric limitations in the spectral characterization of a camera. The second main contribution is the exploitation of the spectral printer model in the reflectance estimation methods.
Séropian, Audrey. "Analyse de document et identification de scripteurs." Toulon, 2003. http://www.theses.fr/2003TOUL0010.
The scope of our study is limited to forms in which some areas are filled with cursive handwriting. After elaborating a form model used in a context with a finite number of writers, our aim is to create a writer identification process based on a fractal analysis of the writer's handwriting style. The identification of each writer depends on the extraction of a set of features that must be intrinsic to the author of the document. The self-similarity properties of handwriting are used. Invariant patterns are extracted by a fractal compression process to characterize the handwriting of the writer. These patterns are organized in a reference base to allow the analysis of unknown handwriting through a pattern-matching process. The results of this analysis are evaluated by the signal-to-noise ratio. We can thus identify the author of a text by matching it against the set of reference bases.
Amur, Khua Bux. "Contrôle adaptatif, techniques de régularisation et applications en analyse d'images." Thesis, Metz, 2011. http://www.theses.fr/2011METZ011S/document.
Adaptive control and regularization techniques are studied for some linear and nonlinear ill-posed problems in image processing and computer vision. These methods are based on a variational approach consisting in the minimization of a suitable energy functional made up of two parts: a data term, obtained from the grey-value constancy assumption, and a regularization term, which copes with the ill-posedness through its filling-in effect. We have provided a novel adaptive approach for optic flow and stereo vision problems. It is based on an adaptive finite element method using an unstructured grid as the computational domain, which allows the automatic choice of locally optimal regularization parameters. Various linear and nonlinear variational models have been considered in this thesis for scientific computation of optic flow and stereo vision problems, and an efficient adaptive control is obtained. This novel regularization strategy is controlled by the regularization parameter α, which depends on space, and an a posteriori error estimator called the residual error indicator. This local adaptive behavior encouraged us to experiment with other problems in image analysis such as denoising; we add a preliminary chapter on the Perona-Malik model. This work falls into the category of novel and advanced numerical strategies for scientific computation, specifically for image motion problems.
Alvarez Padilla, Francisco Javier. "AIMM - Analyse d'Images nucléaires dans un contexte Multimodal et Multitemporel." Thesis, Reims, 2019. http://www.theses.fr/2019REIMS017/document.
This work focuses on the proposition of cancerous tumor segmentation strategies in a multimodal and multitemporal context. The multimodal scope refers to coupling PET/CT data in order to jointly exploit both information sources with the purpose of improving segmentation performance. The multitemporal scope refers to the use of images acquired at different dates, which limits a possible spatial correspondence between them. In a first method, a tree is used to process and extract information dedicated to feed a random walker segmentation. A set of region-based attributes is used to characterize tree nodes, filter the tree and then project data into the image space to build a vectorial image. A random walker guided by vectorial tree data on the image lattice is used to label voxels for segmentation. The second method is geared toward the multitemporality problem by switching from a voxel-to-voxel to a node-to-node paradigm. A tree structure is thus applied to model two hierarchical graphs, from PET and contrast-enhanced CT respectively, and attribute distances between their nodes are compared to match those assumed similar while discarding the others. In a third method, an extension of the first one, the tree is directly involved as the data structure on which the algorithm is applied. A tree structure is built on the PET image, and CT data are then projected onto the tree as contextual information. A node stability algorithm is applied to detect and prune unstable attribute nodes. PET-based seeds are projected into the tree to assign node seed labels (tumor and background) and propagate them through the hierarchy. The uncertain nodes, with region-based attributes as descriptors, are involved in a vectorial random walker method to complete tree labeling and build the segmentation.
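A minimal sketch of the standard random-walker segmentation on the image lattice with scikit-image, seeded with tumor and background markers; the thesis drives the walker with tree-derived vectorial attributes rather than raw intensities, and the beta value here is illustrative.

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_with_seeds(volume, tumor_mask, background_mask, beta=90):
    """Label every voxel from sparse seeds using random-walker probabilities."""
    markers = np.zeros(volume.shape, dtype=np.uint8)
    markers[tumor_mask] = 1        # seeds assumed given (e.g. high PET uptake)
    markers[background_mask] = 2
    return random_walker(volume, markers, beta=beta, mode="bf")
```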
Faivre, Adrien. "Analyse d'image hyperspectrale." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD075/document.
This dissertation addresses hyperspectral image analysis, a set of techniques enabling the exploitation of micro-spectroscopy images. Images produced by these sensors constitute cubic arrays, meaning that every pixel in the image is actually a spectrum. The size of these images, which is often quite large, calls for an upgrade of classical image analysis algorithms. We start our investigation with clustering techniques. The main idea is to regroup every spectrum contained in a hyperspectral image into homogeneous clusters. Spectra taken across the image can indeed be generated by similar materials, and hence display spectral signatures resembling each other. Clustering is a commonly used method in data analysis. It belongs nonetheless to a class of particularly hard problems to solve, named NP-hard problems. The efficiency of a few heuristics used in practice was poorly understood until recently. We give theoretical arguments guaranteeing success when the groups studied display some statistical property. We then study unmixing techniques. The objective is no longer to decide to which class a pixel belongs, but to understand each pixel as a mix of basic signatures supposed to arise from pure materials. The underlying mathematical problem is again NP-hard. After studying its complexity and suggesting two lengthy relaxations, we describe a more practical way to constrain the problem so as to obtain regularized solutions. We finally give an overview of other hyperspectral image analysis methods encountered during this thesis, among which are independent component analysis, non-linear dimensionality reduction, and regression against a spectral library.
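A minimal sketch of the clustering step described above: each pixel spectrum of a hyperspectral cube is grouped with k-means (scikit-learn). The number of clusters is an assumption, and the theoretical guarantees discussed in the thesis are not reflected in this baseline.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_spectra(cube, n_clusters=8, seed=0):
    """Cluster the spectra of a (rows, cols, bands) hyperspectral cube."""
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands)        # one spectrum per pixel
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(spectra)
    return labels.reshape(rows, cols)        # cluster map in image space
```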
Nicodeme, Claire. "Evaluation de l'adhérence au contact roue-rail par analyse d'images spectrales." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM024/document.
The advantage of the train since its creation lies in its low resistance to motion, due to the iron-on-iron contact of the wheel on the rail, which leads to low adhesion. However, this low adhesion is also a major drawback: being dependent on environmental conditions, it is easily degraded when the rail is polluted (vegetation, grease, water, etc.). Nowadays, strategies to cope with degraded adhesion impact the performance of the system and lead to a loss of transport capacity. The objective of the project is to use a new spectral imaging technology to identify areas of the rail with reduced adhesion and their cause, in order to quickly alert and adapt the train's behaviour. The study's strategy took into account the three following points: the detection system, installed on board commercial trains, must be independent of the train; the detection and identification process should not interact with the pollution in order to keep the measurements unbiased, so we chose a non-destructive testing method; spectral imaging technology makes it possible to work with both spatial information (distance measurement, target detection) and spectral information (material detection and recognition by analysis of spectral signatures). In the assigned time, we focused on the validation of the concept through laboratory studies and analyses, workable in the office at SNCF Ingénierie & Projets. The key steps were the creation of the concept's evaluation bench and the choice of a vision system, the creation of a library containing reference spectral signatures, and the development of supervised and unsupervised pixel classification. A patent describing the method and process has been filed and published.
Corvo, Joris. "Caractérisation de paramètres cosmétologiques à partir d'images multispectrales de peau." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM100/document.
Thanks to its precision in the spatial and spectral domains, multispectral imaging has become an essential tool in dermatology. This thesis focuses on the interest of this technology for cosmetological parameter assessment through three different studies: the detection of a foundation make-up, age assessment and roughness measurement. A database of multispectral skin images is built using a multiple-optical-filter system. A preprocessing step standardizes those texture images before their exploitation. Covariance matrices of multispectral acquisitions can be displayed in a multidimensional scaling space, which is a novel way to represent multivariate data sets. Likewise, a new dimensionality reduction algorithm based on PCA is proposed in this thesis. A complete study of the image texture is performed: texture features from mathematical morphology, and more generally from image analysis, are extended to the case of multivariate images. In this process, several spectral distances are tested, among which a new distance associating the LIP model with the Asplund metric. Statistical predictions are generated from the texture data. Those predictions lead to a conclusion about the efficiency of the data processing and the relevance of multispectral imaging for the three cosmetological studies.
Wang, Jinnian. "Caractérisation des sols par l'analyse d'images hyperspectrales en télédétection." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4051/document.
Hyperspectral remote sensing has been used successfully to identify and map abundances and compositional differences of mineral groups and single mineral phases. This research works toward developing a 3D mineral mapping system that integrates surface (airborne and satellite) and subsurface (drill core) hyperspectral remote sensing data and carries it into quantitative mineral system analysis. The main content and results are as follows. For surface mineralogy mapping, we have developed and optimized processing methods for accurate, seamless mineral measurements using airborne and satellite hyperspectral images. This requires solutions for unmixing background effects from target minerals to leave residual scaled mineral abundances equivalent to vegetation-free pixels. Another science challenge is to improve the atmospheric correction. The Hapke BRDF model is also used in the study of linear and nonlinear mineral spectral mixing models. For subsurface mineralogy mapping, we have developed a Field Imaging Spectrometer System (FISS) and drill-core logging; the key science challenge is establishing the accuracy of derived mineral products through associated laboratory analysis, including investigations from the SWIR into the thermal infrared for measuring minerals. The 3D mineral maps derived from hyperspectral methods can distinctly improve our understanding of mineral systems. We use a GIS system integrating surface and subsurface mineralogy mapping, with 3D mineral models, to demonstrate the exploitation of economic mineral deposits in a test site.
Delalandre, Mathieu. "Analyse des documents graphiques : une approche par reconstruction d'objets." Rouen, 2005. http://www.theses.fr/2005ROUES060.
Full textKhalid, Musaab. "Analyse de vidéos de cours d'eau pour l'estimation de la vitesse surfacique." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S019/document.
This thesis is an application of computer vision findings to river velocimetry research. Hydraulic research scientists already use various image processing techniques to process image sequences of rivers. The ultimate goal is to estimate the free-surface velocity of rivers remotely. As such, many risks related to intrusive river gauging techniques could be avoided. Towards this goal, two major issues need to be addressed. Firstly, the motion of the river in image space needs to be estimated. The second issue is how to transform this image velocity into real-world velocity. Until recently, image-based velocimetry methods imposed many requirements on images and still needed a considerable amount of field work to estimate river velocity with good accuracy. We extend the perimeter of this field by including amateur videos of rivers, and we provide better solutions for the aforementioned issues. We propose a motion estimation model based on optical flow, a well-developed method for rigid motion estimation in image sequences. Contrary to the conventional techniques used before, the optical flow formulation is flexible enough to incorporate the physical equations that govern river motion. Our optical flow is based on the scalar transport equation and is augmented with a weighted diffusion term to compensate for small-scale (non-captured) contributions. Additionally, since there is no ground truth data for this type of image sequence, we present a new evaluation method to assess the results. It is based on the trajectory reconstruction of a few Lagrangian particles of interest and a direct comparison against their manually reconstructed trajectories. The new motion estimation technique outperformed traditional methods in image space. Finally, we propose a specialized geometric modeling of river sites that allows a complete and accurate passage from 2D velocity to world velocity under mild assumptions. This modeling considerably reduces the field work previously needed to deploy Ground Reference Points (GRPs). We proceed to show the results of two case studies in which world velocity is estimated from raw videos.
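A generic variational form consistent with this description is sketched below, where I denotes the image sequence, w = (w1, w2) the surface velocity field, D a diffusion weight and α a regularization weight; the exact weighting and regularizer used in the thesis may differ.

```latex
E(\mathbf{w}) = \int_{\Omega} \Big( \partial_t I + \nabla \cdot (I\,\mathbf{w}) - \nabla \cdot (D\,\nabla I) \Big)^2 dx
  + \alpha \int_{\Omega} \big( |\nabla w_1|^2 + |\nabla w_2|^2 \big) \, dx
```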
Faucheux, Cyrille. "Segmentation supervisée d'images texturées par régularisation de graphes." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4050/document.
In this thesis, we improve a recent image segmentation algorithm based on a graph regularization process. The goal of this method is to compute an indicator function that satisfies regularity and fidelity criteria. Its particularity is to represent images with similarity graphs. This data structure allows relations to be established between similar pixels, leading to non-local processing of the data. In order to improve this approach, we combine it with another non-local one: texture features. Two solutions are developed, both based on Haralick features. In the first one, we propose a new fidelity term which is based on the work of Chan and Vese and is able to evaluate the homogeneity of texture features. In the second method, we propose to replace the fidelity criterion by the output of a supervised classifier. Trained to recognize several textures, the classifier is able to produce a better model of the problem by identifying the most relevant texture features. This method is also extended to multiclass segmentation problems. Both are applied to 2D and 3D textured images.
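A minimal sketch of Haralick-type texture features computed from a gray-level co-occurrence matrix with scikit-image; the offsets, angles and selected properties are illustrative, not those retained in the thesis.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(patch, distances=(1, 2), angles=(0, np.pi / 2)):
    """Small vector of GLCM (Haralick-type) features for a uint8 grayscale patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```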
Chen, Yong. "Analyse et interprétation d'images à l'usage des personnes non-voyantes : application à la génération automatique d'images en relief à partir d'équipements banalisés." Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080046/document.
Visual information is a very rich source of information to which blind and visually impaired (BVI) people do not always have access. The presence of images is a real handicap for the BVI. Transcription into an embossed image may increase the accessibility of an image to BVI people. Our work takes into account the aspects of tactile cognition and the rules and recommendations for the design of an embossed image. We focused our work on the analysis and comparison of digital image processing techniques in order to find suitable methods for creating an automatic procedure for embossing images. At the end of this research, we tested the embossed images created by our system with blind users. In the tests, two important points were evaluated: the degree of understanding of an embossed image, and the time required for exploration. The results suggest that the images made by this system are accessible to blind users who know braille. The implemented system can be regarded as an effective tool for the creation of an embossed image. The system offers an opportunity to generalize and formalize the procedure for creating an embossed image, and gives a very quick and easy solution. The system can process pedagogical images with simplified semantic content. It can be used as a practical tool for making digital images accessible. It also offers the possibility of cooperation with other modalities of presentation of the image to blind people, for example a traditional interactive map.
Diaz, Mauricio. "Analyse de l'illumination et des propriétés de réflectance en utilisant des collections d'images." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM051/document.
The main objective of this thesis is to exploit the photometric information available in large photo collections of outdoor scenes to infer characteristics of the illumination, the objects and the cameras. To achieve this goal, two problems are addressed. In a preliminary work, we explore optimal representations for the sky and compare images based on its appearance. Much of the information perceived in outdoor scenes is due to the illumination coming from the sky. The solar beams are reflected and refracted in the atmosphere, creating a global illumination ambiance. In turn, this environment determines the way we perceive objects in the real world. Given the importance of the sky as an illumination source, we formulate a generic three-step process in order to compare images based on its appearance. These three stages are: segmentation, modeling and comparison of the sky pixels. Different approaches are adopted for the modeling and comparison phases. The performance of the algorithms is validated by finding similar images in large photo collections. A second part of the thesis aims to exploit additional geometric information in order to deduce the photometric characteristics of the scene. From a 3D structure recovered using available multi-view stereo methods, we trace back the image formation process and estimate the models for the components involved in it. Since photo collections are usually acquired with different cameras, our formulation emphasizes the estimation of the radiometric calibration for all the cameras at the same time, using a strong prior on the possible space of camera response functions. Then, in a joint estimation framework, we also propose a robust computation of the global illumination for each image, the surface albedo for the 3D structure and the radiometric calibration for all the cameras.
Handika, Nuraziz. "Multi-fissuration des structures en béton armé : analyse par corrélation d'images et modélisation." Thesis, Toulouse, INSA, 2017. http://www.theses.fr/2017ISAT0001/document.
The modelling of cracking in reinforced concrete using the finite element method requires taking into account, in addition to concrete damage, three phenomena: the specificity of the steel-concrete bond, the self-stress due to shrinkage, and the probabilistic scale effect due to the heterogeneity of concrete. This research is based on an experimental campaign to obtain the behaviour of the bond and the characteristics of the cracks on structural elements. The technique of digital image correlation was used to observe the spacing and opening of the cracks. The steel-concrete bond is considered in the modelling using elastoplastic interface elements based on the experimental results of the pull-out tests. The effects of shrinkage are taken into account via a poro-mechanical framework. Finally, the probabilistic scale effect is integrated into the modelling via a random field method and then a weakest-link approach. The modelling is applied to the reinforced concrete structural element studied in the laboratory, which makes it possible to quantify the relative importance on cracking of the steel-concrete bond, the stresses induced by shrinkage, and the tensile strength heterogeneity of the material.
Benjelil, Mohamed. "Analyse d'images de documents complexes et identification de scripts : cas des documents administratifs." La Rochelle, 2010. http://www.theses.fr/2010LAROS299.
This thesis describes our work in the field of multilingual, multi-script complex document image segmentation, in the case of official documents. We proposed a texture-based approach. Two different subjects are presented: (1) document image segmentation; (2) Arabic and Latin script identification in printed and/or handwritten types. The developed approaches concern flows of documents that do not obey a specific model. Chapter 1 presents the problem and the state of the art of complex document image segmentation and script identification. The work described in Chapter 2 aimed at finding new models for complex multilingual, multi-script document image segmentation. Algorithms have been developed to segment document images into homogeneous regions, to identify the script of textual blocks contained in a document image, and to segment out a particular object in an image. The approach is based on the classification of text and non-text regions by means of steerable pyramid features. Chapter 3 describes our work on official document image segmentation based on steerable pyramid features. Chapter 4 describes our work on Arabic and Latin script identification in printed and/or handwritten types. Experimental results show that the proposed approaches perform consistently well on large sets of complex document images. Examples of application, performance tests and comparative studies are also presented.
Nguyen, Thi Bich Thuy. "La programmation DC et DCA en analyse d'image : acquisition comprimée, segmentation et restauration." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0350/document.
Images are one of the most important sources of information in our lives. Along with the rapid development of digital image acquisition devices such as digital cameras, phone cameras, medical imaging devices or satellite imaging devices, the need to process and analyze images is more and more demanding. It concerns the problems of acquiring, storing and enhancing images, or extracting information from an image. In this thesis, we consider image processing and analysis problems including compressed sensing, dictionary learning and image denoising, and image segmentation. Our method is based on a deterministic optimization approach, namely DC (Difference of Convex functions) programming and DCA (DC Algorithms), for solving the classes of image analysis problems addressed above. 1. Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal, which breaks the traditional limits of Nyquist-Shannon sampling theory by finding solutions to underdetermined linear systems. It takes advantage of the signal's sparseness or compressibility when it is represented in a suitable basis or dictionary, which allows the entire signal to be determined from relatively few measurements. In this problem, we are interested in two phases. The first one is finding the sparse representation of a signal. The other one is recovering the signal from its compressed measurements on an incoherent basis or dictionary. These problems lead to solving an NP-hard nonconvex optimization problem. We investigated three models with four approximations for each model. Appropriate algorithms based on DC programming and DCA are presented. 2. Dictionary learning: we have seen the power and the advantages of the sparse representation of signals in compressed sensing. Finding the sparsest representation of a set of signals depends not only on the sparse representation algorithms but also on the basis or dictionary used to represent them. This naturally leads to the dictionary learning problem and related applications. Instead of using a fixed basis such as wavelets or Fourier, one can learn the dictionary, a matrix D, to optimize the sparsity of the representation for a large class of given signals (data). The matrix D is called the learned dictionary. For this problem, we proposed an efficient DCA-based algorithm including two stages: sparse coding and dictionary updating. An application of this problem, image denoising, is also considered. 3. Image segmentation: partitioning a digital image into multiple segments (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into a form that is more meaningful and easier to analyze. We have developed an efficient method for image segmentation via a feature-weighted fuzzy clustering model. We also study an application of image segmentation to the cell counting problem in medicine. We propose a combination of a segmentation phase and morphological operations to automatically count the number of cells. Our approach gives promising results in comparison with traditional manual analysis, in spite of the very high cell density.
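As a point of comparison only, the sketch below shows a standard l1 recovery of a sparse signal from compressed measurements by iterative soft thresholding (ISTA); the thesis replaces this convex surrogate with nonconvex approximations solved by DC programming and DCA, which are not reproduced here.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Basic ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (an l1 baseline)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```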
Etxegarai, Aldami Etxebarria Maddi. "Etude du couplage hydromécanique dans les roches par analyse d'images obtenues par tomographie neutronique." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAI010/document.
The behaviour of subsurface-reservoir porous rocks is a central topic in the resource engineering industry and has relevant applications for hydrocarbon and water production or CO2 sequestration. One of the key open issues is the effect of deformations on the hydraulic properties of the host rock, specifically in saturated environments. Deformation in geomaterials is rarely homogeneous, because of the complex boundary conditions they undergo as well as their intrinsic tendency to localise. This non-uniformity of the deformation yields a non-uniform permeability field, meaning that traditional macroscopic analysis methods are outside their domain of validity. These methods are in fact based on measurements taken at the boundaries of a tested sample, under the assumption of internal homogeneity. At this stage, our understanding is in need of direct measurements of the local fluid permeability and its relationship with localised deformation. This doctoral dissertation focuses on the acquisition of such local, full-field data about the hydro-mechanical properties of porous geomaterials, adopting neutron and x-ray tomography, as well as on the development of novel analysis methods. While x-ray imaging has been increasingly used in geosciences in the last few decades, the direct detection of fluid has been very limited because of the low air/water contrast within geomaterials. Unlike x-rays, neutrons are very sensitive to the hydrogen in water because of their interaction with matter (neutrons interact with the atoms' nuclei rather than with the external electron shell as x-rays do). This greater sensitivity to hydrogen provides high contrast with the rock matrix in neutron tomography images, which facilitates the detection of hydrogen-rich fluids. Furthermore, neutrons are isotope-sensitive, meaning that water (H2O) and heavy water (D2O), while chemically and hydraulically almost identical, can be easily distinguished in neutron imaging. The use of neutron imaging to investigate the hydromechanical properties of rocks is a substantially under-explored experimental area, mostly limited to 2D studies of dry, intact or pre-deformed samples, with little control of the boundary conditions. In this work we developed a new servo-controlled triaxial cell to perform multi-fluid flow experiments in saturated porous media, while performing in-situ loading and acquiring 4-dimensional neutron data. Another peculiarity of the project is the use of high-performance neutron imaging facilities (CONRAD-2, at Helmholtz Zentrum Berlin, and NeXT-Grenoble, at Institut Laue-Langevin), taking advantage of the world's highest flux and cutting-edge technology to acquire data at an optimal frequency for the study of these processes. The results of multiple experimental campaigns covering a series of initial and boundary conditions of increasing complexity are presented in this work. To quantify the local hydro-mechanical coupling, we applied a number of standard post-processing procedures (reconstruction, denoising, Digital Volume Correlation) but also developed an array of bespoke methods, for example to track the water front and calculate 3D speed maps. The experimental campaigns performed show that the speed of the water front driven by imbibition in a dry sample is increased within a compactant shear band, while the pressure-driven flow speed is decreased in saturated samples, regardless of the volumetric response of the shear band (compactant/dilatant). The 3D nature of the data and analyses has proved essential in the characterization of the complex mechanical behaviour of the samples and the resultant flow speed. The experimental results obtained contribute to the understanding of flow in porous materials, confirm the suitability of the analysis, and set an experimental method for further in-situ hydromechanical campaigns.
Pierazzo, Nicola. "Quelque progrès en débruitage d'images." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN036/document.
This thesis explores the latest developments in image denoising and attempts to set a new and more coherent background regarding the different techniques involved. As a consequence, it also presents a new image denoising algorithm with minimal artifacts and the best PSNR performance known so far. A first result presented is DA3D, a frequency-based guided denoising algorithm inspired by DDID [Knaus-Zwicker 2013]. It demonstrates that, contrary to what was thought, frequency-based denoising can beat state-of-the-art algorithms without presenting artifacts. This algorithm achieves good results not only in terms of PSNR, but also (and especially) with respect to visual quality. DA3D works particularly well at enhancing image textures and removing staircasing effects. DA3D works on top of another denoising algorithm, which is used as a guide, and almost always improves its results. In this way, frequency-based denoising can be applied on top of patch-based denoising algorithms, resulting in a hybrid method that keeps the strengths of both. The second result presented is Multi-Scale Denoising, a framework that allows any denoising algorithm to be applied in a multi-scale fashion. A qualitative analysis shows that current denoising algorithms behave better on high-frequency noise. This is due to the relatively small size of the patches and search windows currently used. Instead of enlarging those patches, which can cause other sorts of problems, this work proposes to decompose the image into a pyramid with the aid of the Discrete Cosine Transform. A quantitative study is performed to recompose this pyramid in order to avoid the appearance of ringing artifacts. This method removes most of the low-frequency noise and improves both PSNR and visual results for smooth and textured areas. A third main issue addressed in this thesis is the evaluation of denoising algorithms. Experience indicates that PSNR is not always a good indicator of visual quality for denoising algorithms, since, for example, an artifact in a smooth area can be more noticeable than a subtle change in a texture. A new metric is proposed to improve on this matter. Instead of a single value, a "Smooth PSNR" and a "Texture PSNR" are presented, to measure the result of an algorithm for those two types of image regions. We claim that a denoising algorithm, in order to be considered acceptable, must at least perform well with respect to both metrics. Following this claim, an analysis of current algorithms is performed and compared with the combined results of the Multi-Scale Framework and DA3D. We found that the optimal solution for image denoising is the application of a frequency shrinkage, applied to regular regions only, while a multi-scale patch-based method serves as a guide. This seems to resolve a long-standing question for which DDID gave the first clue: what are the respective roles of frequency shrinkage and self-similarity-based methods in image denoising? We describe an image denoising algorithm that seems to perform better in quality and PSNR than any other, based on the right combination of both denoising principles. In addition, a study on the feasibility of external denoising is carried out, where images are denoised by means of a large database of external noiseless patches. This follows a 2011 work of Levin and Nadler, which claims that state-of-the-art results are achieved with this method if a large enough database is used. In the thesis it is shown that, with some observations, the space of all patches can be factorized, thereby reducing the number of patches needed in order to achieve this result. Finally, secondary results are presented. A brief study of how to apply denoising algorithms to real RAW images is performed. An improved, better performing version of the Non-Local Bayes algorithm is presented, together with a two-step version of DCT Denoising. The latter is interesting for its extreme simplicity and speed.
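A minimal sketch of the DCT shrinkage principle mentioned above, applied globally with a hard threshold proportional to the noise level; the sliding-window, two-step, multi-scale and guided variants discussed in the thesis are not reproduced, and the threshold factor is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(image, sigma, k=3.0):
    """Global DCT hard-thresholding: zero out coefficients below k * sigma."""
    coeffs = dctn(image.astype(float), norm="ortho")
    coeffs[np.abs(coeffs) < k * sigma] = 0.0   # keep only significant frequencies
    return idctn(coeffs, norm="ortho")
```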
Laruelle, Élise Raphaëlle. "Vers une modélisation des grands plans d’organisation de l’embryon d'Arabidopsis thaliana." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS074/document.
During embryonic development, the major body plans of the plant are implemented. This process gives rise to a mature embryo which possesses all the characteristics of a young seedling and is associated with a morphological change. These two processes are stereotyped in Arabidopsis thaliana embryo development. Over the developmental process, the embryo shape switches from a globular form, with a radial symmetry, to a heart form with a bilateral symmetry. These changes rely on differential cellular growth and on particular cell division plane orientations in the embryo, mechanisms that are tightly regulated and under the control of molecular factors. While a number of cellular and molecular steps are known, the evolution of the symmetry and the acquisition of the specific embryo shape have not yet been explored. To understand the origin of the heart shape, we carried out a detailed multi-scale description and quantification of embryo shape changes across embryo stages. We completed a collection of fixed embryo images with 35 embryos distributed along embryo development over eight cell generations. The embryos were digitized in 3D and cell-segmented. From these images, embryo cell lineages were reconstructed and their cell organizations characterized. The evolution of the parameter measurements showed a progressive change of the shape. The change begins at an early embryo stage when the morphology still looks globular. To correlate the morphological change with cellular events, division and cell growth were inferred through measurements. The cell growth behavior changed in the globular embryo. Changes in the division behavior were also observed: the division plane orientations ceased to be stereotyped. Despite the variability, similar behaviors were observed over cell generations and also among precursors of tissues and organs of the embryo. The cell division behavior was further analyzed by searching for division rules that explain the observations with a stochastic model of volume partitioning. A division rule based on a stochastic 3D surface area minimization reproduced all observed division plane orientations depending on the volume distribution among daughter cells. The hypothesis of a stochastic division rule based on cell geometry, minimizing the area of the surface passing through the mother cell centroid, thus emerged. However, divisions in older cell generations suggested a progressive action of another factor on the division plane. The overall phenotyping of early embryo development should provide a framework for the analysis of the molecular factors involved in the heart shape.
Menguy, Pierre-Yves. "Suivi longitudinal des endoprothèses coronaires par analyse de séquences d'images de tomographie par cohérence optique." Thesis, Clermont-Ferrand 1, 2016. http://www.theses.fr/2016CLF1MM30/document.
This thesis deals with the segmentation and characterization of coronary arteries and stents in Optical Coherence Tomography (OCT) imaging. OCT is a very high resolution imaging modality that can resolve fine structures such as the intimal layer of the vascular wall and stent struts. The objective of this thesis is to propose software tools allowing the automatic analysis of an examination with a runtime compatible with intraoperative use. This work follows Dubuisson's thesis on OCT, which proposed a first formalism for lumen segmentation and strut detection for metallic stents. We revisited the processing chain for these two problems and proposed a preliminary method for detecting bioabsorbable polymer stents. Surface modeling of the stent made it possible to estimate a series of clinical indices from the diameters, areas and volumes measured on each cross-section or on the entire examination. Stent apposition to the wall is also measured and visualized in 3D with an intuitive color scale. The arterial lumen is delineated using a Fast Marching shortest-path algorithm. Its originality is to exploit the image in the native helical form of the acquisition. For the detection of the metallic stent, local maxima of the image followed by a shadow zone are detected and characterized by a vector of attributes computed in their neighborhood (relative value of the maximum, gray-level slope, symmetry, etc.). Peaks corresponding to struts are discriminated from the surrounding speckle by a logistic regression step, with learning from a ground truth constructed by an expert. A probability that a peak belongs to a strut is constructed from the combination of the obtained attributes. The originality of the method lies in the fusion of the probabilities of nearby elements before applying a decision criterion, also determined from the ground truth. The method was evaluated on a database of 14 complete examinations, both at the pixel level and at the level of detected struts. We also extensively validated a non-rigid registration method for OCT images using landmarks matched by an expert on the source and target examinations. The objective of this registration is to be able to compare examinations slice by slice, and indices computed at the same positions at different acquisition times. The reliability of the deformation model was evaluated on a corpus of forty-four pairs of OCT examinations using a leave-one-out cross-validation technique.
He, Rui. "Évaluation d'une analyse voxel à voxel dans l'accident vasculaire cérébral à partir d'images IRM multiparamétriques." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS026/document.
Stroke is the leading cause of disability in adults. Beyond the narrow time window and possible risks of thrombolysis and mechanical thrombectomy, cell therapies have strong potential. Reports have shown that transplanted stem cells can enhance functional recovery after ischemic stroke in rodent models. To assess the mechanisms underlying the cell-therapy benefit after stroke, imaging is necessary. Multiparametric magnetic resonance imaging (MRI), including diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI), has become the gold standard to evaluate stroke characteristics. MRI also plays an important role in the monitoring of cerebral tissue following stroke, from the acute to the chronic phase. However, the spatial heterogeneity of each stroke lesion and its dynamic reorganization over time, which may be related to the effect of a therapy, remain a challenge for traditional image analysis techniques. To evaluate the effect of new therapeutic strategies, spatial and temporal lesion heterogeneities need to be more accurately characterized and quantified. Current image analysis techniques, based on mean values obtained from regions of interest (ROIs), hide the intralesional heterogeneity. Histogram-based techniques provide an evaluation of lesion heterogeneity but fail to yield spatial information. The parametric response map (PRM) is an alternative, voxel-based analysis technique which has been established in oncology as a promising tool to better investigate parametric changes over time at the voxel level, related to therapeutic response or disease prognosis. The PhD project was divided into two parts: a preclinical and a clinical study. The goal of the first study was to evaluate the PRM analysis using MRI data collected after the intravenous injection of human mesenchymal stem cells (hMSCs) in an experimental stroke model. The apparent diffusion coefficient (ADC), cerebral blood volume (CBV) and vessel size index (VSI) were mapped using 7T MRI. Two analytic procedures, the standard whole-lesion approach and the PRM, were performed on data collected at 4 time points in transient middle cerebral artery occlusion (MCAo) models treated with either hMSCs or vehicle, and in sham animals. In the second study, 6 MR parametric maps (diffusion and perfusion maps) were collected in 30 stroke patients (the ISIS/HERMES clinical trial). MRI data, analyzed with both a classic mean-value approach and a PRM approach, were correlated with the evaluation of functional recovery after stroke, measured with the National Institutes of Health Stroke Scale (NIHSS) and the modified Rankin Scale (mRS) at 4 time points. In both studies, PRM analysis of MR parametric maps reveals fine lesion changes induced by cell therapy (preclinical study) and correlates with long-term prognosis (clinical study). In conclusion, PRM analysis could be used as an imaging biomarker of therapeutic efficacy and as a prognostic biomarker for stroke patients.
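The voxel-wise principle behind a parametric response map can be sketched as follows: each voxel of two co-registered parametric maps is classified according to whether its change between the two time points exceeds a predefined threshold. The threshold is parameter-specific in the studies above; the value passed here is a placeholder.

```python
import numpy as np

def parametric_response_map(map_t0, map_t1, threshold):
    """Per-voxel classification of change: +1 increased, -1 decreased, 0 stable."""
    delta = map_t1 - map_t0                   # maps assumed co-registered
    prm = np.zeros(delta.shape, dtype=np.int8)
    prm[delta > threshold] = 1
    prm[delta < -threshold] = -1
    return prm

# Summary statistics (e.g. the fraction of increased voxels inside the lesion)
# can then be compared across groups or correlated with clinical scores.
```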
Journet, Nicholas. "Analyse d’images de documents anciens : une approche texture." La Rochelle, 2006. http://www.theses.fr/2006LAROS178.
Full textMy PhD thesis is related to the indexing of old document images. Corpora of old documents have specific characteristics: the content (text and image) as well as the layout information are strongly variable. Thus, it is not possible to work on such corpora as is usually done with contemporary documents. Indeed, the first tests which we carried out on the corpus of the "Centre d'Etude de la Renaissance", with which we work, confirmed that the traditional (model-driven) approaches are not very efficient because it is impossible to make assumptions on the physical or logical structure of old documents. We also noted the lack of tools allowing the indexing of large databases of old document images. In this PhD work, we propose a new generic method which permits the characterization of the content of old document images. This characterization is carried out using a multiresolution study of the textures contained in the document images. By constructing signatures related to the frequencies and the orientations of the various parts of a page, it is possible to extract, compare or identify different kinds of semantic elements (drop caps, illustrations, text, layout...) without making any assumption about the physical or logical structure of the analyzed documents. This texture information is the basis for the creation of indexing tools for large databases of old document images
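As an illustration of a frequency/orientation texture signature, the sketch below computes the mean response energy of a small Gabor filter bank on a sample page image. The filter bank, frequencies and orientations are a generic stand-in chosen for the example; the thesis' actual multiresolution descriptors may differ.

```python
# Illustrative sketch (not the thesis' exact descriptors): build a small
# orientation/frequency texture signature for a document image block
# with a Gabor filter bank.
import numpy as np
from skimage import data
from skimage.filters import gabor

def texture_signature(block,
                      frequencies=(0.1, 0.2, 0.4),
                      thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean response energy per (frequency, orientation) pair."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(block, frequency=f, theta=t)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(feats)

page = data.page()                         # sample grayscale scanned page
signature = texture_signature(page[:128, :128])
print(signature.shape, signature.round(3))
```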
Belarte, Bruno. "Extraction, analyse et utilisation de relations spatiales entre objets d'intérêt pour une analyse d'images de télédétection guidée par des connaissances du domaine." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD011/document.
Full textThe new remote sensors allow the acquisition of very high spatial resolution images at high speeds, thus producing a large volume of data. Manual processing of these data has become impossible, and new tools are needed to process them automatically. Effective segmentation algorithms are required to extract objects of interest from these images. However, the produced segments do not match the objects of interest, making it difficult to use expert knowledge. In this thesis we propose to change the level of interpretation of an image in order to see the objects of interest of the expert as objects composed of segments. For this purpose, we have implemented a multi-level learning process in order to learn composition rules. Such a composition rule can then be used to extract the corresponding objects of interest. In a second step, we propose to use the composition rule learning algorithm as the first step of a bottom-up top-down approach. This processing chain aims at improving the classification using contextual knowledge and expert information. Composed objects of higher semantic level are extracted from learned rules or rules provided by the expert, and this new information is used to update the classification of objects at lower levels. The proposed method has been tested and validated on Pléiades images representing the city of Strasbourg. The results show the effectiveness of the composition rule learning algorithm in making the link between expert knowledge and segmentation, as well as the interest of using contextual information in the analysis of very high spatial resolution remotely sensed images
Pham, Chi-Hieu. "Apprentisage profond pour la super-résolution et la segmentation d'images médicales." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0124/document.
Full textIn this thesis, we study the behavior of different image representations and develop methods for super-resolution, cross-modal synthesis and segmentation in medical imaging. Super-resolution aims to enhance the image resolution using single or multiple data acquisitions. In this work, we focus on single-image super-resolution (SR), which estimates the high-resolution (HR) image from one corresponding low-resolution (LR) image. Increasing image resolution through SR is key to a more accurate understanding of the anatomy, and applying super-resolution techniques has been shown to lead to more accurate segmentation maps. Sometimes, certain tissue contrasts may not be acquired during the imaging session because of time constraints, expensive costs or lack of devices. One possible solution is to use medical image cross-modal synthesis methods to generate the missing subject-specific scans in the desired target domain from the given source image domain. The objective of synthetic images is to improve other automatic medical image processing steps such as segmentation, super-resolution or registration. In this thesis, convolutional neural networks are applied to super-resolution and cross-modal synthesis in the context of supervised learning. In addition, an attempt to apply generative adversarial networks to unpaired cross-modal synthesis of brain MRI is described. Results demonstrate the potential of deep learning methods with respect to practical medical applications
Troya-Galvis, Andrès. "Approche collaborative et qualité des données et des connaissances en analyse multi-paradigme d'images de télédétection." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD040/document.
Full textAutomatic interpretation of very high spatial resolution remotely sensed images is a complex but necessary task. Object-based image analysis approaches are commonly used to deal with this kind of images. They consist in applying an image segmentation algorithm in order to construct the objects of interest, and then classifying them using data-mining methods. Most of the existing work in this domain considers the segmentation and the classification independently. However, these two crucial steps are closely related. In this thesis, we propose two different approaches which are based on data and knowledge quality in order to initialize, guide, and evaluate a collaborative segmentation and classification process. 1. The first approach is based on a mono-class extraction strategy allowing us to focus on the particular properties of a given thematic class in order to accurately label the objects of this class. 2. The second approach deals with multi-class extraction and offers two strategies to aggregate several mono-class extractors to get a final and completely labelled image
Gimonet, Nicolas. "Identification de situations de conduite dégradées par analyse d'image de la chaussée." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLE007/document.
Full textAdverse lighting and meteorological conditions are two critical issues for road safety as they are likely to jeopardize the driver's perception of the environment. Thus, their detection and characterization with a vehicle-embedded camera would be a key element to assist the driver. However, visibility is not the only risk for the driver: the state of the road also plays a major role in the driver's safety. Indeed, a wet road may on the one hand cause losses of grip, thus harming the user's safety in the road scene, and on the other hand increase the risk of glare under adverse lighting conditions. This is why wet road detection is very useful to improve driving conditions. A wet road reflects more light than a dry one because its optical properties are modified by the presence of a water film on its surface. In order to identify the actual state of the road (dry or wet), we study the quantity of light observed by a vehicle-embedded camera. This study is based on a bidirectional reflectance distribution function (BRDF) model, combined with a sky model to give the road's luminance value for a given direction. The wet/dry road differentiation is based on the analysis of luminance values. The first step consists in implementing each model in order to generate synthetic images of dry and wet roads. In the second step these models are applied to real road scene images acquired by an embedded camera in order to identify the actual state of the road. The results can be used by other driver assistance systems, on the one hand to suggest an appropriate speed limit and trajectory for the vehicle, thus improving the driver's safety, and on the other hand to improve the reliability of camera-based ADAS
Llucia, Ludovic. "Suivi d'objets à partir d'images issues de caméras mobiles non calibrées." Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22009/document.
Full textThis work concerns a 3D simulator intended to help football coaches interpret tactical sequences based on real situations. This simulator has to be able to interpret video footage in order to reconstruct a situation, and the camera calibration step has to be as simple as possible. The first part of this document presents the solution developed to meet this constraint, whereas the second one is more oriented towards the industrialisation process. These processes imply focusing on computer vision and ergonomics problems and answering questions such as: how to characterize a homographic transformation matching the image and the model? How to retrieve the position of the camera? Which area is part of the image? From an ergonomic point of view, the simulator has to reproduce the reality of the game and to improve the abstraction and communication capabilities of the coaches
Decombas, Marc. "Compression vidéo très bas débit par analyse du contenu." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.
Full textThe objective of this thesis is to find new methods for semantic video compression compatible with a traditional encoder like H.264/AVC. The main objective is to maintain the semantic content rather than the global quality. A target bitrate of 300 kb/s has been set for defense and security applications. To do so, a complete compression chain has been proposed. A study and new contributions on a spatio-temporal saliency model have been made to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. Also, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used for testing the saliency model but also for video compression has been proposed, containing sequences with their manually extracted binary masks. All the different approaches have been thoroughly validated by different tests. An extension of this work to video summarization applications has also been proposed
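The seam carving idea can be illustrated with a short dynamic-programming sketch that finds the lowest-energy vertical seam of a saliency-weighted energy map. The energy definition, the saliency weighting and the toy data are assumptions made for this example, not the thesis' actual implementation or parameters.

```python
# Hedged sketch of the resizing idea (saliency-weighted seam carving),
# not the thesis' actual implementation.
import numpy as np

def find_vertical_seam(energy):
    """Dynamic programming: lowest-cumulative-energy vertical seam."""
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1);   left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam

rng = np.random.default_rng(0)
gradient_energy = rng.random((120, 160))
saliency = rng.random((120, 160))                   # would come from the saliency model
energy = gradient_energy * (1.0 + 4.0 * saliency)   # protect salient areas (assumed weighting)
print(find_vertical_seam(energy)[:10])
```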
Rigaud, Christophe. "Segmentation and indexation of complex objects in comic book images." Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS035/document.
Full textIn this thesis, we review, highlight and illustrate the challenges related to comic book image analysis in order to give the reader a good overview of the latest research progress in this field and of the current issues. We propose three different approaches for comic book image analysis, each composed of several processing steps. The first approach is called "sequential" because the image content is described in an intuitive way, from simple to complex elements, using previously extracted elements to guide further processing. Simple elements such as panels, text and balloons are extracted first, followed by the balloon tails and then the position of the comic characters in the panels. The second approach addresses independent information extraction to overcome the main drawback of the first approach: error propagation. This second method is called "independent" because it is composed of several specific extractors, one for each element of the image, without any dependence between them. Extra processing such as balloon type classification and text recognition are also covered. The third approach introduces a knowledge-driven and scalable system for comics image understanding. This system, called "expert system", is composed of an inference engine and two models, one for the comics domain and another one for image processing, stored in an ontology. This expert system combines the benefits of the first two approaches and enables high-level semantic descriptions such as the reading order of panels and text, the relations between the speech balloons and their speakers, and comic character identification
Pham, Ha Thai. "Analyse de "Time Lapse" optiques stéréo et d'images radar satellitaires : application à la mesure du déplacement de glaciers." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAA004/document.
Full textEarth observation by image acquisition systems allows the survey of the temporal evolution of natural phenomena such as earthquakes, volcanoes or gravitational movements. Various techniques exist, including satellite imagery, terrestrial photogrammetry and in-situ measurements. Image time series from automatic cameras (Time Lapse) are a growing source of information since they offer an interesting compromise in terms of spatial coverage and observation frequency in order to measure surface motion in specific areas. This PhD thesis is devoted to the analysis of image time series from terrestrial photography and satellite radar imagery to measure the displacement of Alpine glaciers. We are particularly interested in Time Lapse stereo processing problems for monitoring geophysical objects in conditions that are unfavorable for photogrammetry. We propose a single-camera processing chain that includes the steps of automatic photograph selection, coregistration and calculation of the two-dimensional (2D) displacement field. The information provided by the stereo pairs is then processed using the MICMAC software to reconstruct the relief and obtain the three-dimensional (3D) displacement. Several pairs of synthetic aperture radar (SAR) images were also processed with the EFIDIR tools to obtain 2D displacement fields in the radar geometry on ascending or descending orbits. The combination of measurements obtained almost simultaneously on these two types of orbits allows the reconstruction of the 3D displacement. These methods have been applied to time series of stereo pairs acquired by two automatic cameras installed on the right bank of the Argentière glacier and to TerraSAR-X satellite images covering the Mont-Blanc massif. The results are presented on data acquired during a multi-instrument experiment conducted in collaboration with the French National Geographic Institute (IGN) during the fall of 2013, with a network of Géocubes which provided GPS measurements. They are used to evaluate the accuracy of the results obtained by proximal and remote sensing on this type of glacier
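A minimal illustration of the 2D displacement measurement between two dates is a patch-wise shift estimate by phase correlation. Phase correlation is used here as a generic stand-in for the correlation-based matching of the processing chain; the simulated motion and patch size are assumptions for the example.

```python
# Minimal sketch of the 2D displacement idea: estimate the shift of a small
# patch between two dates by phase correlation (illustrative data only).
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
image_t0 = rng.random((256, 256))
image_t1 = nd_shift(image_t0, shift=(3.2, -1.7), mode="nearest")  # simulated surface motion

patch_t0 = image_t0[100:164, 100:164]
patch_t1 = image_t1[100:164, 100:164]

displacement, error, _ = phase_cross_correlation(patch_t0, patch_t1, upsample_factor=10)
print(displacement)   # sub-pixel (row, col) shift, approximately (-3.2, 1.7) here
```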
Ozeré, Solène. "Modélisation mathématique de problèmes relatifs au recalage d'images." Thesis, Rouen, INSA, 2015. http://www.theses.fr/2015ISAM0010/document.
Full textThis work focuses on the modelling of problems related to image registration. Image registration consists in finding an optimal deformation such that a deformed image is aligned with a reference image. It is an important task encountered in a large range of applications such as medical imaging, comparison of data or shape tracking. The first chapter concerns the problem of topology preservation. This condition of topology preservation is important when the sought deformation reflects physical properties of the objects to be deformed. The following chapters propose several methods of image registration based on nonlinear elasticity theory: the objects to be matched are modelled as hyperelastic materials. Different fidelity terms have been investigated, as well as two joint segmentation/registration models
Peschoud, Cécile. "Etude de la complémentarité et de la fusion des images qui seront fournies par les futurs capteurs satellitaires OLCI/Sentinel 3 et FCI/Meteosat Troisième Génération." Thesis, Toulon, 2016. http://www.theses.fr/2016TOUL0012/document.
Full textThe objective of this thesis was to propose, validate and compare methods for fusing images provided by a Low Earth Orbit multispectral sensor and a geostationary multispectral sensor in order to obtain water composition maps with spatial details and high temporal resolution. Our methodology was applied to the OLCI Low Earth Orbit sensor on Sentinel-3 and the FCI Geostationary Earth Orbit (GEO) sensor on Meteosat Third Generation. Firstly, the sensitivity of each sensor regarding water color was analyzed. As the images from both sensors were not yet available, they were simulated over the Gulf of Lion, thanks to hydrosol maps (chl, SPM and CDOM) and radiative transfer models (Hydrolight and Modtran). Two fusion methods were then adapted and tested with the simulated images: the SSTF (Spatial, Spectral, Temporal Fusion) method, inspired by the method developed by Vanhellemont et al. (2014), and the STARFM (Spatial Temporal Adaptive Reflectance Fusion Model) method from Gao et al. (2006). The fusion results were then validated against the simulated reference images and by estimating the hydrosol maps from the fused images and comparing them with the input maps of the simulation process. To improve the FCI SNR, a temporal filtering was proposed. Finally, as the aim is to obtain a water quality indicator, the fusion methods were adapted and tested on the hydrosol maps estimated from the FCI and OLCI simulated images
Hadmi, Azhar. "Protection des données visuelles : analyse des fonctions de hachage perceptuel." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20159/document.
Full textThe widespread use of multimedia technology has made it relatively easy to manipulate and tamper with visual data. In particular, digital image processing and image manipulation tools make it possible to intentionally alter image content without leaving perceptual traces. This presents a serious problem, particularly when the authenticity of the digital image is required. Image authentication should be based on the visual content and not on the binary content. Therefore, to authenticate an image, some acceptable manipulations that an image may undergo, such as JPEG compression and Gaussian noise addition, must be tolerated; indeed, these manipulations preserve the visual appearance of the image. At the same time, a perceptual hashing system should be sufficiently sensitive to detect malicious manipulations that modify the interpretation of the semantic content of the image, such as adding new objects, deleting or significantly modifying existing objects. In this thesis, we focus on perceptual hash functions for authentication and integrity verification of digital images. For this purpose, we present all aspects of perceptual hash functions. Then, we discuss the constraints that a perceptual hashing system must satisfy to meet the desired level of robustness of perceptual signatures. Finally, we present a method to improve the robustness and security of a perceptual hashing system
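The general principle of a perceptual hash can be illustrated with a generic DCT-based (pHash-style) sketch: a compact binary signature is derived from low-frequency content and two images are compared by Hamming distance. This is a textbook illustration, not the scheme analysed in the thesis.

```python
# Illustrative DCT-based perceptual hash (generic pHash-style sketch,
# not the scheme studied in the thesis) with Hamming-distance comparison.
import numpy as np
from scipy.fft import dctn
from skimage.transform import resize

def perceptual_hash(image, hash_size=8):
    """64-bit hash from the low-frequency DCT coefficients of the image."""
    small = resize(image, (32, 32), anti_aliasing=True)
    coeffs = dctn(small, norm="ortho")[:hash_size, :hash_size]
    median = np.median(coeffs[1:, 1:])        # ignore the DC term for the threshold
    return (coeffs > median).flatten()

def hamming_distance(h1, h2):
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(0)
img = rng.random((256, 256))
noisy = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)   # acceptable manipulation
tampered = img.copy(); tampered[64:128, 64:128] = 1.0          # malicious modification

print(hamming_distance(perceptual_hash(img), perceptual_hash(noisy)))     # typically small
print(hamming_distance(perceptual_hash(img), perceptual_hash(tampered)))  # typically larger
```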
Mehri, Maroua. "Historical document image analysis : a structural approach based on texture." Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS005/document.
Full textOver the last few years, there has been tremendous growth in the digitization of cultural heritage document collections. Thus, many challenges and open issues have been raised, such as information retrieval in digital libraries or analyzing the page content of historical books. Recently, an important need has emerged which consists in designing a computer-aided characterization and categorization tool, able to index or group historical digitized book pages according to several criteria, mainly the layout structure and/or the typographic/graphical characteristics of the historical document image content. Thus, the work conducted in this thesis presents an automatic approach for the characterization and categorization of historical book pages. The proposed approach is applicable to a large variety of ancient books. In addition, it does not assume a priori knowledge regarding the document image layout and content. It is based on the use of texture and graph algorithms to provide a rich and holistic description of the layout and content of the analyzed book pages. The categorization is based on the characterization of the digitized page content by texture, shape, geometric and topological descriptors. This characterization is represented by a structural signature. More precisely, the signature-based characterization approach consists of two main stages. The first stage extracts homogeneous regions. The second builds a graph-based page signature from the extracted homogeneous regions, reflecting the page layout and content. Afterwards, by comparing the obtained graph-based signatures using a graph-matching paradigm, the similarities of digitized historical book pages in terms of layout and/or content can be deduced. Subsequently, book pages with similar layout and/or content can be categorized and grouped, and a table of contents/summary of the analyzed digitized historical book can be provided automatically. As a consequence, numerous signature-based applications (e.g. information retrieval in digital libraries according to several criteria, page categorization) can be implemented for effectively managing a corpus or collections of books. To illustrate the effectiveness of the proposed page signature, a detailed experimental evaluation has been conducted in this work, assessing two possible categorization applications: unsupervised page classification and page stream segmentation. In addition, the different steps of the proposed approach have been evaluated on a large variety of historical document images
Larburu, Natacha. "Etude structurale de la biogenèse de la petite sous-unité ribosomique humaine par cryo-microscopie électronique et analyse d'images." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30336/document.
Full textRibosome biogenesis is a complex process that requires the production and the correct assembly of the 4 rRNAs with 80 ribosomal proteins. In humans, the production of the two subunits, 40S and 60S, is initiated by the transcription, by RNA polymerase I, of a pre-ribosomal RNA precursor of the mature 18S, 5.8S and 28S rRNAs, which is chemically modified and trimmed by endo- and exoribonucleases in order to form the mature rRNAs. The nascent pre-rRNA associates with ribosomal proteins, small nucleolar ribonucleoprotein particles (snoRNPs) and so-called co-factors, leading to the assembly of an initial 90S particle. This particle is then split into pre-40S and pre-60S pre-ribosomal particles that follow independent maturation pathways to form the mature subunits in the cytoplasm. The production of eukaryotic ribosomes implies the transient intervention of more than 200 associated proteins and ribonucleoprotein particles that are absent from the mature subunits. Ribosome synthesis, globally conserved in eukaryotes, has been principally studied in yeast. However, recent studies reveal that this process is more complex in humans than in yeast. An important bottleneck in this domain is the lack of structural data concerning the intermediate ribosomal particles, which is needed to understand the function of assembly factors. Determination of the structural remodeling of pre-ribosomal particles is crucial to understand the molecular mechanisms of this complex process. I have therefore undertaken a structural study of the assembly of the small ribosomal subunit using cryo-electron microscopy and image analysis. The goal of my thesis is to determine the 3D structures of human pre-40S particles at different maturation stages in order to reveal the structural remodeling that occurs during the biogenesis of the small ribosomal subunit. We collaborate with the group of Pr Ulrike Kutay at ETH Zurich, who purifies human pre-40S particles. The 3D structures of human pre-40S particles purified at intermediate and late maturation stages have been determined at resolutions of 19 and 15 Å, respectively. Supplementary densities, compared to the mature subunit, indicate the presence of assembly factors and show the unexpected presence of the RACK1 protein in the cytoplasmic precursors of the human small ribosomal subunit. The comparison of the 3D structures of human pre-40S particles reveals the structural remodeling that occurs during the maturation of the small ribosomal subunit. This work provides the first 3D structures of human pre-40S particles and lays the methodological foundations for future exploration of the structural dynamics of pre-ribosomal particles
Elbergui, Ayda. "Amélioration des techniques de reconnaissance automatique de mines marines par analyse de l'écho à partir d'images sonar haute résolution." Thesis, Brest, 2013. http://www.theses.fr/2013BRES0042/document.
Full textUnderwater target classification is mainly based on the analysis of acoustic shadows. The new generation of imaging sonars provides a more accurate description of the acoustic wave scattered by the targets. Therefore, combining the analysis of shadows and echoes is a promising way to improve automated target classification. Some reliable schemes for automated target classification rely on model-based learning instead of only using experimental samples of the target acoustic response to train the classifier. With this approach, a good performance level in classification can be obtained if the modeling of the target acoustic response is accurate enough. The implementation of the classification method first consists in precisely modeling the acoustic response of the targets. The result of the modeling process is a simulator called SIS (Sonar Image Simulator). As imaging sonars operate at high or very high frequency, the core of the model is based on acoustical ray-tracing. Several phenomena have been considered to increase the realism of the acoustic response (multi-path propagation, interaction with the surrounding seabed, edge diffraction, etc.). The first step of the classifier consists of a model-based approach. The classification method uses the highlight information of the acoustic signature of the target, called the "A-scan". It consists in comparing the A-scan of the detected target with a set of simulated A-scans generated by SIS in the same operational conditions. To train the classifier, a template base of A-scans is created by modeling manmade objects of simple and complex shapes (Mine-Like Objects or not). The comparison is based on matched filtering, which allows a more flexible result by introducing a degree of match related to the maximum correlation coefficient. With this approach the training set can be extended incrementally to improve classification when classes are strongly correlated. If the difference between the correlation coefficients of the most likely classes is not sufficient, the result is considered ambiguous. A second stage is proposed in order to discriminate these classes by adding new features and/or extending the initial training data set with more A-scans in new configurations derived from the ambiguous ones. This classification process is mainly assessed on simulated side-scan sonar data but also on a limited set of real data. The use of A-scans achieves good classification performance in a mono-view configuration and can improve the classification result for some confusions remaining with methods based only on shadow analysis
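The matched-filtering comparison of a measured A-scan with a base of simulated A-scans can be sketched as follows. The template base, the 1D signals and the ambiguity threshold are toy assumptions for illustration only.

```python
# Hedged sketch of the template-matching idea: compare a measured A-scan with
# simulated A-scans using the maximum normalized cross-correlation as the
# degree of match (illustrative data; not the thesis' SIS templates).
import numpy as np

def max_normalized_correlation(signal, template):
    """Maximum of the normalized cross-correlation over all lags."""
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    corr = np.correlate(s, t, mode="full") / len(t)
    return float(corr.max())

rng = np.random.default_rng(0)
template_base = {"cylinder": rng.random(128), "sphere": rng.random(128)}  # toy templates
measured = template_base["cylinder"] + rng.normal(0, 0.1, 128)            # toy measured A-scan

scores = {name: max_normalized_correlation(measured, tmpl)
          for name, tmpl in template_base.items()}
best, runner_up = sorted(scores.values(), reverse=True)[:2]
ambiguous = (best - runner_up) < 0.1          # threshold is an assumed value
print(scores, "ambiguous" if ambiguous else "decided")
```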
Bouyrie, Mathieu. "Restauration d'images de noyaux cellulaires en microscopie 3D par l'introduction de connaissance a priori." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLA032/document.
Full textIn this document, we present a method to denoise 3D images acquired by two-photon microscopy and displaying cell nuclei of animal embryos. The specimens are observed in toto and in vivo during their early development. Image deterioration can be explained by the optical flaws of the microscope, the limitations of the acquisition system, and light absorption and diffusion through the tissue depth. The proposed method is a 3D adaptation of a 2D method so far applied to astronomical images, and it also differs from state-of-the-art methods by the introduction of priors on the biological data. Our hypotheses include assuming that the noise statistics are mixed Poisson-Gaussian (MPG) and that cell nuclei are quasi-spherical. To implement our method in 3D, we had to take into account the sampling grid dimensions, which are different in the x, y and z directions: a spherical object imaged on this grid loses this property. To deal with such a grid, we had to interpret the filtering process, which is a core element of the original theory, as a diffusion process
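For context on the mixed Poisson-Gaussian assumption, the sketch below applies the generalized Anscombe transform, a classical variance-stabilizing step often used with MPG noise. It is shown purely as background, not as the algorithm developed in the thesis; the gain and noise parameters are illustrative.

```python
# Illustration of the mixed Poisson-Gaussian (MPG) noise assumption: the
# generalized Anscombe transform, a classical variance-stabilizing step
# for such noise (shown for context, not as the thesis' algorithm).
import numpy as np

def generalized_anscombe(x, gain=1.0, sigma=0.0, mu=0.0):
    """Variance-stabilizing transform for y = gain*Poisson + N(mu, sigma^2)."""
    arg = gain * x + 3.0 / 8.0 * gain ** 2 + sigma ** 2 - gain * mu
    return 2.0 / gain * np.sqrt(np.maximum(arg, 0.0))

rng = np.random.default_rng(0)
clean = 50.0 * np.ones((64, 64))                                 # toy fluorescence signal
noisy = rng.poisson(clean) + rng.normal(0, 2.0, clean.shape)     # MPG-corrupted image

stabilized = generalized_anscombe(noisy, gain=1.0, sigma=2.0)
print(stabilized.std())   # roughly constant (~1) whatever the signal level
```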
Sellami, Akrem. "Interprétation sémantique d'images hyperspectrales basée sur la réduction adaptative de dimensionnalité." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0037/document.
Full textHyperspectral imagery allows the acquisition of rich spectral information of a scene in several hundred or even thousands of narrow and contiguous spectral bands. However, with the high number of spectral bands, the strong inter-band spectral correlation and the redundancy of spectro-spatial information, the interpretation of these massive hyperspectral data is one of the major challenges for the remote sensing scientific community. In this context, the major challenge is to reduce the number of unnecessary spectral bands, that is, to reduce the redundancy and high correlation of spectral bands while preserving the relevant information. Therefore, projection approaches aim to transform the hyperspectral data into a reduced subspace by combining all original spectral bands, while band selection approaches attempt to find a subset of relevant spectral bands. In this thesis, we first focus on hyperspectral image classification, attempting to integrate the spectro-spatial information into dimension reduction in order to improve the classification performance and to overcome the loss of spatial information in projection approaches. Therefore, we propose a hybrid model to preserve the spectro-spatial information, exploiting the tensor model in the locality preserving projection approach (TLPP) and using constraint band selection (CBS) as an unsupervised approach to select the discriminant spectral bands. To model the uncertainty and imperfection of these reduction approaches and classifiers, we propose an evidential approach based on the Dempster-Shafer Theory (DST). In a second step, we extend the hybrid model by exploiting the semantic knowledge extracted from the features obtained by the previously proposed TLPP approach to enrich the CBS technique. Indeed, the proposed approach makes it possible to select relevant spectral bands which are at the same time informative, discriminant, distinctive and not very redundant. In fact, this approach selects the discriminant and distinctive spectral bands using the CBS technique, injecting rules obtained with knowledge extraction techniques to automatically and adaptively select the optimal subset of relevant spectral bands. The performance of our approach is evaluated on several real hyperspectral datasets
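To give a flavour of band selection, the sketch below greedily keeps bands whose correlation with already-selected bands stays below a threshold. This is a deliberately generic strategy shown for illustration; it is not the CBS/TLPP pipeline of the thesis, and the cube and threshold are toy assumptions.

```python
# Hedged illustration of the band-selection idea: greedily keep spectral
# bands weakly correlated with those already selected (generic strategy,
# not the thesis' CBS/TLPP approach).
import numpy as np

def select_bands(cube, max_corr=0.95):
    """cube: (rows, cols, bands). Returns indices of retained bands."""
    bands = cube.reshape(-1, cube.shape[-1])
    selected = [0]
    for b in range(1, bands.shape[1]):
        corrs = [abs(np.corrcoef(bands[:, b], bands[:, s])[0, 1]) for s in selected]
        if max(corrs) < max_corr:
            selected.append(b)
    return selected

rng = np.random.default_rng(0)
base = rng.random((20, 20, 5))
# Toy 15-band cube in which bands b, b+5 and b+10 are nearly identical.
cube = np.concatenate([base + 0.01 * rng.random((20, 20, 5)) for _ in range(3)], axis=2)
print(select_bands(cube))   # keeps roughly one band per correlated group
```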
Marciano, Abraham. "Méthodes d'Analyse et de Recalage d'images radiographiques de fret et de Véhicules." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED040/document.
Full textOur societies, faced with an unprecedented level of security threat since WWII, must provide fast and adaptable solutions to cope with a new kind of menace. Illicit trade, often correlated with criminal actions, is also viewed as a major stake by governments and agencies. Enforcement authorities are thus very demanding in terms of technological features, as they explicitly aim at automating inspection processes. The main objective of our research is to develop tools assisting law enforcement officers in the detection of weapons and narcotics. In the present work, we employ and customize both advanced classification and image registration techniques for irregularity detection in X-ray cargo screening scans. Rather than employing machine-learning recognition techniques, our methods prove to be very efficient while targeting a very diverse range of threats from which no specific features can be extracted. Moreover, the proposed techniques significantly enhance the detection capabilities of law enforcement officers, particularly in dense regions where both humans and trained learning models would probably fail. Our work reviews state-of-the-art methods in terms of classification and image registration. Various numerical solutions are also explored. The proposed algorithms are tested on a very large number of images, showing their necessity and performance both visually and numerically
Li, Yuet Hee Mary Lynn. "Caractérisation texturale et analyse par stéréocorrélation d'images de la déformation des fromages à pâte molle et de leurs simulants formulés." Thesis, Vandoeuvre-les-Nancy, INPL, 2007. http://www.theses.fr/2007INPL062N/document.
Full textVarious gels formulated from mixtures of gelatin and polysaccharides - guar, karaya gum, xanthan gum, maltodextrin and starch - were elaborated to simulate the texture of soft cheeses (Camembert and Coulommiers). Comparisons between gels and cheeses were based on firmness, elasticity modulus and relaxation time constants, obtained from penetrometry and stress relaxation tests. Gels made up of gelatin, maltodextrin and starch were found to best imitate the textural properties of the soft cheeses. A three-component mixture design approach was used to determine the optimum component concentrations of the simulants. The mathematical models developed showed a linear dependence of the rheological parameters on the composition of the simulants. The enzyme Subtilisin Carlsberg (Alcalase®) successfully induced gradual modifications in the rheological parameters of the simulants. The rate of change of textural properties occurring in Coulommiers cheese during maturation was however different from that of the simulants. Two optical three-dimensional techniques were also investigated as new tools for food texture assessment. The digital image correlation and Breuckmann scanning systems were successful in distinguishing between gels and cheeses varying in firmness and viscoelastic properties. New parameters obtained from the digital image correlation and Breuckmann scanning systems were related to the textural properties of the cheeses and their simulants. These parameters may be used to develop models accurately predicting the sensory texture of food from instrumental measurements
Oriot, Jean-Claude. "Analyse d'images de documents à structures variées : application à la localisation du bloc adresse sur les objets postaux." Nantes, 1992. http://www.theses.fr/1992NANT2061.
Full text
Garnier, Mickaël. "Modèles descriptifs de relations spatiales pour l'aide au diagnostic d'images biomédicales." Thesis, Paris 5, 2014. http://www.theses.fr/2014PA05S015/document.
Full textDuring the last decade, digital pathology has improved thanks to advances in image analysis algorithms and computing power. In particular, it is more and more based on histology images. This image modality presents the advantage of showing only the biological objects targeted by the pathologists, thanks to specific stains, while preserving the tissue structure as unharmed as possible. Numerous computer-aided diagnosis methods using these images have been developed over the past few years in order to assist the medical experts with quantitative measurements. The studies presented in this thesis aim at addressing the challenges related to histology image analysis, as well as at developing an assisted diagnosis model mainly based on spatial relations, an information that currently used methods rarely exploit. A multiscale texture analysis is first proposed and applied to detect the presence of diseased tissue. A descriptor named Force Histogram Decomposition (FHD) is then introduced in order to extract the shapes and spatial organisation of regions within an object. Finally, histology images are described by the FHD measured on their different types of tissue and also on the stained biological objects inside every type of tissue. Preliminary studies showed that the FHD are able to accurately recognise objects on uniform backgrounds, including when spatial relations are supposed to hold no relevant information. Besides, the texture analysis method proved satisfactory in two different medical applications, namely histology images and fundus photographs. The performance of these methods is highlighted by a comparison with the usual approaches in their respective fields. Finally, the complete method has been applied to assess the severity of cancers on two sets of histology images. The first one was provided as part of the ANR project SPIRIT and presents metastatic mouse livers. The other one comes from the ICPR 2014 challenge "Nuclear Atypia" and contains human breast tissues. The analysis of spatial relations and shapes at two different scales achieves a correct recognition of metastatic cancer grades of 87.0% and gives insight into the nuclear atypia grade. This proves the efficiency of the method as well as the relevance of measuring the spatial organisation in this particular type of images
Kennel, Pol. "Caractérisation de texture par analyse en ondelettes complexes pour la segmentation d’image : applications en télédétection et en écologie forestière." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20215/document.
Full textThe analysis of digital images, albeit widely researched, continues to present a real challenge today. In the case of several applications which aim to produce an appropriate description and semantic recognition of image content, particular attention is required to be given to image analysis. In response to such requirements, image content analysis is carried out automatically with the help of computational methods that draw on the domains of mathematics, statistics and physics. The use of image segmentation methods is a relevant and recognized way to represent objects observed in images. Coupled with classification, segmentation allows a semantic segregation of these objects. However, existing methods cannot be considered generic, and despite having been inspired by various domains (military, medical, satellite, etc.), they are continuously subject to reevaluation, adaptation or improvement. For example, satellite images stand out in the image domain in terms of the specificity of their mode of acquisition, their format, or the object of observation (the Earth, in this case). The aim of the present thesis is to explore, by exploiting the notion of texture, methods of digital image characterization and supervised segmentation. Land, observed from space at different scales and resolutions, can be perceived as textured. Land-use maps can be obtained through the segmentation of satellite images, in particular through the use of textural information. We propose to develop competitive segmentation algorithms to characterize texture, using multi-scale representations of images obtained by wavelet decomposition and supervised classifiers such as Support Vector Machines. Given this context, the present thesis is principally articulated around various research projects which require the study of images at different scales and resolutions and which vary in nature (e.g. multi-spectral, optical, LiDAR). Certain aspects of the methodology developed are applied to the different case studies undertaken
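The combination of wavelet-based texture features with an SVM classifier can be illustrated with a minimal sketch. It uses a real discrete wavelet transform and toy textures, whereas the thesis relies on complex wavelets and a more complete feature set; the sub-band energy features and classes below are assumptions for the example.

```python
# Minimal sketch of the texture-characterization idea: wavelet sub-band
# energies as features and an SVM classifier (toy data; the thesis uses
# complex wavelets and richer descriptors).
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(patch, wavelet="db2", level=2):
    """Mean absolute energy of the approximation and of each detail sub-band."""
    coeffs = pywt.wavedec2(patch, wavelet=wavelet, level=level)
    feats = [np.abs(coeffs[0]).mean()]
    for cH, cV, cD in coeffs[1:]:
        feats += [np.abs(cH).mean(), np.abs(cV).mean(), np.abs(cD).mean()]
    return np.array(feats)

rng = np.random.default_rng(0)
# Two toy texture classes: smoother patches vs. high-frequency patches.
smooth = [rng.random((32, 32)).cumsum(axis=0) / 32 for _ in range(30)]
rough = [rng.random((32, 32)) for _ in range(30)]
X = np.array([wavelet_energy_features(p) for p in smooth + rough])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```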
Jiang, Zhifan. "Évaluation des mobilités et modélisation géométrique du système pelvien féminin par analyse d’images médicales." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10003/document.
Full textThe better treatment of female pelvic mobility disorders has a social impact, particularly affecting older women. It is in this context that this thesis focuses on the development of methods in medical image analysis for the evaluation of pelvic mobility and the geometric modeling of the pelvic organs. For this purpose, we provide solutions based on the registration of deformable models on Magnetic Resonance Images (MRI). The resulting tools are able to detect the shape and quantify the movement of a subset of the organs and to reconstruct their surfaces from patient-specific MRI. This work facilitates the simulation of the behavior of the pelvic organs using the finite element method. The objective of these tools is to help better understand the mechanisms of the pathologies. They will finally allow better prediction of the presence of certain diseases, as well as making surgical procedures more accurate and personalized