Doctoral dissertations on the topic "Segmentation non supervisée d'images"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations for your research on the topic "Segmentation non supervisée d'images".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the work's metadata.
Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.
Fernandes, Clément. "Chaînes de Markov triplets et segmentation non supervisée d'images". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS019.
Hidden Markov chains (HMC) are widely used in unsupervised Bayesian restoration of hidden discrete data. They are very robust and, in spite of their simplicity, sufficiently efficient in many situations. In particular, for image segmentation, despite their mono-dimensional nature, they can give satisfying results by transforming the bi-dimensional image into a mono-dimensional sequence with the Peano scan (PS). However, more complex models such as hidden Markov fields (HMF) are sometimes preferred, in spite of their increased time complexity, for their better results. Moreover, hidden Markov models (chains as well as fields) have been extended to pairwise and triplet Markov models, which can be of interest in more complex situations. For example, when the sojourn time in hidden states is not geometric, hidden semi-Markov chains (HSMC) tend to perform better than HMC, and the same holds for hidden evidential Markov chains (HEMC) when data are non-stationary. In this thesis, we first propose a new triplet Markov chain (TMC), which simultaneously extends HSMC and HEMC. Based on hidden triplet Markov chains (HTMC), the new hidden evidential semi-Markov chain (HESMC) model can be used in an unsupervised framework, its parameters being estimated with the Expectation-Maximization (EM) algorithm. We validate its interest through experiments on synthetic data. Then we address the mono-dimensionality problem of the HMC-PS model in image segmentation by introducing the "contextual" Peano scan (CPS). It consists in associating with each index of the HMC obtained from the PS two observations on pixels which are neighbors of the considered pixel in the image, but not its neighbors in the HMC. This gives three observations at each point of the Peano scan, which leads to a new conditional Markov chain (CMC) with a more complex structure, but whose posterior law is still Markovian.
Therefore, we can apply the usual parameter estimation method, Stochastic Expectation-Maximization (SEM), and study the unsupervised Marginal Posterior Mode (MPM) segmentation so obtained. The supervised and unsupervised MPM based on the CMC with CPS are compared to the classic scan-based HMC-PS and to the HMF through experiments on artificial images. They notably improve on the former, and can even compete with the latter. Finally, we extend the CMC-CPS to pairwise conditional Markov chains (CPMC) and to two particular triplet conditional Markov chains: evidential conditional Markov chains (CEMC) and conditional semi-Markov chains (CSMC). For each of these extensions, we show through experiments on artificial images that these models can notably improve on their non-conditional counterparts, as well as on the CMC with CPS, and can even compete with the HMF. Besides, they allow the generality of Markovian triplets to better play its part in image segmentation, while avoiding the substantial time complexity of triplet Markov fields.
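The MPM restoration used throughout this line of work can be sketched with the textbook forward-backward recursions on a hidden Markov chain with Gaussian noise. This is a generic two-step illustration (the observation sequence standing in for an image linearized by a Peano scan), not the CMC-CPS model of the thesis:

```python
import numpy as np

def mpm_segment(y, pi0, A, means, std):
    """MPM restoration of a hidden Markov chain from noisy observations.

    y: 1-D observation sequence (e.g. an image linearized by a Peano scan),
    pi0: initial state distribution, A: transition matrix,
    means/std: Gaussian noise parameters per hidden state.
    Returns, at each index, the mode of the posterior marginal (MPM).
    """
    T, K = len(y), len(pi0)
    # Gaussian likelihoods b[t, k] = p(y_t | x_t = k)
    b = np.exp(-0.5 * ((y[:, None] - means[None, :]) / std) ** 2) / (std * np.sqrt(2 * np.pi))
    # Forward pass (normalized at each step to avoid underflow)
    alpha = np.zeros((T, K))
    alpha[0] = pi0 * b[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = b[t] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()
    # Backward pass
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (b[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    # Posterior marginals and their mode at each index
    post = alpha * beta
    post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)
```

On a well-separated two-class sequence, the posterior mode recovers the hidden states; in an unsupervised setting the parameters `pi0`, `A`, `means`, and `std` would themselves be estimated, e.g. by EM or SEM as in the abstract above.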
Benboudjema, Dalila. "Champs de Markov triplets et segmentation bayésienne non supervisée d'images". Evry, Institut national des télécommunications, 2005. http://www.theses.fr/2005TELE0009.
Image segmentation is a fundamental and yet difficult task in machine vision. Several models and approaches have been proposed, and those which have probably received the most attention are hidden Markov field (HMF) models. In such a model, the hidden field X, which is assumed Markovian, must be estimated from the observed (or noisy) field Y. Such processing is possible because the distribution of X conditional on the observed process Y remains Markovian. This model has been generalized to the pairwise Markov field (PMF), which offers similar processing and superior modelling capabilities. In this model, we directly assume the Markovianity of the couple (X, Y). Afterwards, triplet Markov fields (TMF), which generalize the PMF, have been proposed. In such a model, the distribution of the couple (X, Y) is the marginal distribution of a Markov field T = (X, U, Y), where U is a latent process. The aim of this thesis is to study TMF models. Two original models are presented: the evidential Markov field (EMF), which models the evidential aspects of the prior information, and the adapted triplet Markov field (ATMF), which models the simultaneous presence of different stationarities in the class image. For unsupervised processing, two original approaches for estimating the model's parameters have been proposed. The first one is based on the stochastic gradient, and the second on iterative conditional estimation (ICE) combined with the least squares method. The latter has then been generalized to non-stationary images with non-Gaussian correlated noise, using the Pearson system to find the nature of the noise margins, which can vary with the class. Experiments indicate that the new models and related processing algorithms can improve the results obtained with the classical ones.
Fontaine, Michaël. "Segmentation non supervisée d'images couleur par analyse de la connexité des pixels". Lille 1, 2001. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2001/50376-2001-305-306.pdf.
Peng, Anrong. "Segmentation statistique non supervisée d'images et de détection de contours par filtrage". Compiègne, 1992. http://www.theses.fr/1992COMPD512.
El Asmar, Saadallah. "Contributions à la segmentation non supervisée d'images hyperspectrales : trois approches algébriques et géométriques". Thesis, La Rochelle, 2016. http://www.theses.fr/2016LAROS023/document.
Hyperspectral images provided by modern spectrometers are composed of reflectance values at hundreds of narrow spectral bands covering a wide range of the electromagnetic spectrum. Since spectral reflectance differs for most of the materials or objects present in a given scene, hyperspectral image processing and analysis find many real-life applications. We address in this work the problem of unsupervised hyperspectral image segmentation following three distinct approaches. The first one is of the graph embedding type and requires two steps: first, pixels of the original image patches are compared using a spectral similarity measure, and then objects obtained by local segmentations are fused by means of a similarity measure between objects. The second one is of the spectral hashing or semantic hashing type. We first define a binary encoding of spectral variations and then propose a clustering segmentation relying on a k-modes classification algorithm adapted to the categorical nature of the data, the chosen distance being a generalized version of the classical Hamming distance. In the third one, we take advantage of the geometric information given by the manifolds associated with the images. Using the metric properties of the space of Riemannian metrics, that is, the space of symmetric positive definite matrices endowed with the so-called Fisher-Rao metric, we propose a k-means algorithm to obtain a cluster partitioning of the image.
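The clustering step of the second approach can be illustrated with a minimal k-modes loop under the plain Hamming distance. This is a standard-variant sketch (the thesis uses a generalized Hamming distance on its own binary spectral codes), with a simple deterministic initialization assumed for reproducibility:

```python
import numpy as np

def kmodes_hamming(X, k, n_iter=20):
    """Cluster binary codes (e.g. hashed spectral signatures) with k-modes.

    X: (n, d) array of 0/1 codes. The dissimilarity is the Hamming
    distance, and each cluster is summarized by its mode, i.e. the
    bitwise majority vote of its members.
    """
    # Deterministic initialization: k distinct codes (assumes >= k unique rows)
    modes = np.unique(X, axis=0)[:k]
    for _ in range(n_iter):
        # Hamming distance from every code to every mode
        d = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        new_modes = modes.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # Majority vote per bit gives the new mode
                new_modes[j] = (members.mean(axis=0) >= 0.5).astype(X.dtype)
        if np.array_equal(new_modes, modes):
            break
        modes = new_modes
    return labels, modes
```

The mode update plays the role of the centroid update in k-means, but stays inside the categorical code space, which is what makes the scheme suitable for hashed spectral data.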
Saint, Michel Thierry. "Filtrage non linéaire en vue d'une segmentation semi supervisée appliquée à l'imagerie médicale". Lille 1, 1997. http://www.theses.fr/1997LIL10110.
Mignotte, Max. "Segmentation d'images sonar par approche markovienne hiérarchique non supervisée et classification d'ombres portées par modèles statistiques". Brest, 1998. http://www.theses.fr/1998BRES2017.
Quelle, Hans-Christoph. "Segmentation bayesienne non supervisee en imagerie radar". Rennes 1, 1993. http://www.theses.fr/1993REN10012.
Martel-Brisson, Nicolas. "Approche non supervisée de segmentation de bas niveau dans un cadre de surveillance vidéo d'environnements non contrôlés". Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29093/29093.pdf.
Giordana, Nathalie. "Segmentation non supervisee d'images multi-spectrales par chaines de markov cachees". Compiègne, 1996. http://www.theses.fr/1996COMP981S.
Peng, Anrong. "Segmentation statistique non supervisee d'images et detection de contours par filtrage". Compiègne, 1992. http://www.theses.fr/1992COMP0512.
Benmiloud, Btissam. "Chaines de markov cachees et segmentation non supervisee de sequences d'images". Paris 7, 1994. http://www.theses.fr/1994PA077120.
Kurtz, Camille. "Une approche collaborative segmentation - classification pour l'analyse descendante d'images multirésolutions". Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00735217.
Yahiaoui, Meriem. "Modèles statistiques avancés pour la segmentation non supervisée des images dégradées de l'iris". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL006.
The iris is considered one of the most robust and efficient modalities in biometrics because of its low error rates. These performances were observed in controlled situations, which impose constraints during acquisition in order to obtain good quality images. Relaxing these constraints, at least partially, degrades the quality of the acquired images and therefore the performance of these systems. One of the main solutions proposed in the literature to take these limits into account is a robust approach to iris segmentation. The main objective of this thesis is to propose original methods for the segmentation of degraded iris images. Markov chains have been widely used to solve image segmentation problems. In this context, a feasibility study of unsupervised region-based segmentation of degraded iris images by Markov chains was performed. Different image transformations and different segmentation methods for parameter initialization have been studied and compared. The optimal modeling has been inserted into an iris recognition system (with grayscale images) for comparison with existing methods. Finally, an extension of the modeling based on hidden Markov chains has been developed in order to achieve an unsupervised segmentation of iris images acquired in visible light.
Fontaine, Michaël Macaire Ludovic Postaire Jack-Gérard. "Segmentation non supervisée d'images couleur par analyse de la connexité des pixels". [S.l.] : [s.n.], 2001. http://www.univ-lille1.fr/bustl-grisemine/pdf/extheses/50376-2001-305-306.pdf.
Hijazi, Hala. "Proposition d'une méthode spectrale combinée LDA et LLE pour la réduction non-linéaire de dimension : Application à la segmentation d'images couleurs". Thesis, Littoral, 2013. http://www.theses.fr/2013DUNK0516.
Data analysis and learning methods have developed enormously in recent years. Indeed, after neural networks and kernel methods in the 1990s, spectral methods appeared in the 2000s. Spectral methods provide a unified mathematical framework for developing new, original classification methods. Among these techniques, two methods can be highlighted: LLE for non-linear dimension reduction and LDA as a discriminating classification method. In this thesis, a new classification technique is proposed that combines the LLE and LDA methods. This new method provides efficient non-linear dimension reduction together with discrimination. An extension of the method to semi-supervised learning is then proposed. The good dimension-reduction and discrimination properties, combined with the sparsity property of the LLE technique, make it possible to apply our method successfully to color image segmentation. The semi-supervised version of our method leads to efficient segmentation of noisy color images. These results have to be extended and compared with other state-of-the-art methods. Nevertheless, interesting perspectives of this work are proposed in the conclusion for future developments.
Yahiaoui, Meriem. "Modèles statistiques avancés pour la segmentation non supervisée des images dégradées de l'iris". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL006/document.
Ould Ahmedou, Mohamed Lemine. "Ameliorations de methodes de classification automatique non supervisee pour la segmentation d'images multi-composantes". Reims, 1998. http://www.theses.fr/1998REIMS017.
Constantinides, Constantin. "Segmentation automatisée du ventricule gauche en IRM cardiaque : Evaluation supervisée et non supervisée de cette approche et application à l'étude de la viabilité myocardique". Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0034.
The aim of this work is to perform an automated segmentation of the left ventricle on short-axis cardiac MR images with as little user interaction as possible. Based on a recently developed semi-automated segmentation method, a fully automated segmentation method is proposed that includes three main steps: heart localization, definition of a region of interest around the left ventricle, and finally its segmentation. The algorithm developed here takes into account anatomical and functional a priori information, such as the temporal features of the heartbeat, the pseudo-circular shape of the LV, and 3D continuity, combined with image intensity features. The segmentation process uses deformable models combined with morphological filters, which improve the model's performance when dealing with heterogeneous gray levels within the cavity. The work achieved within the MedIEval group (Medical Imaging Evaluation) made it possible to compare both proposed methods with 6 other methods, including 3 manual delineations by experts. In particular, an approach for ranking segmentation methods without using a gold standard was applied to the ejection fractions estimated by the 8 methods. Finally, the proposed segmentation method was used in clinical research on regional contraction and the quantification of myocardial infarction extent. Future work includes the automated segmentation of the right ventricle as well as the estimation of a robust mutual shape from several segmentation methods.
Liu, Siwei. "Apport d'un algorithme de segmentation ultra-rapide et non supervisé pour la conception de techniques de segmentation d'images bruitées". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4371.
Image segmentation is an important step in many image processing systems, and many problems remain unsolved. It has recently been shown that when the image is composed of two homogeneous regions, polygonal active contour techniques based on the minimization of a criterion derived from information theory lead to an ultra-fast algorithm which requires neither a parameter to tune in the optimized criterion nor a priori knowledge of the gray-level fluctuations. This algorithm can then be used as a fast and unsupervised processing module. The objective of this thesis is therefore to show how this ultra-fast and unsupervised algorithm can be used as a module in the design of more complex segmentation techniques, allowing several limits to be overcome, in particular: to be robust to the presence of strong inhomogeneity in the image, which is often inherent in the acquisition process (non-uniform illumination, attenuation, etc.); to be able to segment disconnected objects by polygonal active contour without complicating the optimization strategy; and to segment multi-region images while estimating, in an unsupervised way, the number of homogeneous regions in the image. For each of these three problems, unsupervised segmentation techniques based on the optimization of Minimum Description Length criteria have been obtained, which require neither parameter tuning by the user nor a priori information on the kind of noise in the image. Moreover, it has been shown that fast segmentation techniques can be achieved using this segmentation module, while keeping the implementation complexity low.
Constantinides, Constantin. "Segmentation automatisée du ventricule gauche en IRM cardiaque : Evaluation supervisée et non supervisée de cette approche et application à l'étude de la viabilité myocardique". Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00982333.
Derras, Mustapha. "Segmentation non supervisee d'images texturees par champs de markov : application a l'automatisation de l'entretien des espaces naturels". Clermont-Ferrand 2, 1993. http://www.theses.fr/1993CLF21562.
Chung, François. "Modélisation de l'apparence de régions pour la segmentation d'images basée modèle". Phd thesis, École Nationale Supérieure des Mines de Paris, 2011. http://pastel.archives-ouvertes.fr/pastel-00575796.
Chahine, Chaza. "Fusion d'informations par la théorie de l'évidence pour la segmentation d'images". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1030/document.
Information fusion has been widely studied in the field of artificial intelligence. Information is generally considered imperfect; therefore, the combination of several (possibly heterogeneous) sources of information can lead to more comprehensive and complete information. In the field of fusion, probabilistic approaches are generally distinguished from non-probabilistic ones, which include the theory of evidence developed in the 1970s. This method represents both the uncertainty and the imprecision of the information by assigning masses not only to single hypotheses (the most common case for probabilistic methods) but to sets of hypotheses. The work presented in this thesis concerns the fusion of information for image segmentation. To develop this method, we start with the watershed algorithm, one of the most widely used methods for edge detection. Intuitively, the principle of the watershed is to consider the image as a landscape relief where the heights of the different points are associated with gray levels. Assuming that the local minima are pierced with holes and the landscape is immersed in a lake, the water filling up from these minima generates the catchment basins, whereas the watershed lines are the dams built to prevent the mixing of waters coming from different basins. In practice, the watershed is applied to the gradient magnitude, and a region is associated with each minimum. Therefore, the fluctuations in the gradient image and the great number of local minima generate a large set of small regions, yielding an over-segmented result which can hardly be useful. Meyer and Beucher proposed the seeded, or marker-controlled, watershed to overcome this over-segmentation problem. The essential idea of the method is to specify a set of markers (or seeds) to be considered as the only minima flooded by water. The number of detected objects is therefore equal to the number of seeds, and the result is marker-dependent.
The automatic extraction of markers from the images does not lead to a satisfying result, especially in the case of complex images, and several methods have been proposed for determining these markers automatically. We are particularly interested in the stochastic approach of Angulo and Jeulin, who calculate a probability density function (pdf) of contours after M segmentation simulations using the conventional watershed with N markers randomly selected for each simulation. A high pdf value is thus assigned to strong contour points that are detected more often through the process. But the decision that a point belongs to the "contour class" remains dependent on a threshold value, and a single result cannot be obtained. To increase the robustness of this method and the uniqueness of its response, we propose to combine information with the theory of evidence. The watershed is generally calculated on the gradient image, the first-order derivative, which gives comprehensive information on the contours in the image, while the Hessian matrix, the matrix of second-order derivatives, gives more local information on the contours. Our goal is to combine these two complementary pieces of information using the theory of evidence. The method is tested on real images from the Berkeley database. The results are compared with the five manual segmentations provided as ground truth with this database. The quality of the segmentation obtained by our methods is assessed with different measures: uniformity, precision, recall, specificity, sensitivity, and the Hausdorff distance.
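The evidence combination step can be illustrated with Dempster's orthogonal sum on the two-element frame {contour, non-contour}, where masses are assigned to 'c' (contour), 'n' (non-contour), and 'cn' (ignorance, the whole frame). The mass values used below are illustrative, not those actually produced by the gradient and Hessian sources of the thesis:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions on the frame {c, n} (contour / non-contour).

    Each mass function is a dict over the focal sets 'c', 'n' and the
    ignorance set 'cn'. Returns the orthogonal sum m1 (+) m2.
    """
    # Conflict: mass jointly assigned to incompatible hypotheses
    K = m1['c'] * m2['n'] + m1['n'] * m2['c']
    if K >= 1.0:
        raise ValueError("totally conflicting sources")
    norm = 1.0 - K
    m = {
        # A set receives the products of all pairs of focal sets intersecting to it
        'c': (m1['c'] * m2['c'] + m1['c'] * m2['cn'] + m1['cn'] * m2['c']) / norm,
        'n': (m1['n'] * m2['n'] + m1['n'] * m2['cn'] + m1['cn'] * m2['n']) / norm,
    }
    m['cn'] = 1.0 - m['c'] - m['n']
    return m
```

When both sources lean toward "contour", the combined mass on 'c' exceeds either individual mass, which is the reinforcement effect that removes the dependence on a single threshold.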
Cutrona, Jérôme. "Analyse de forme des objets biologiques : représentation, classification et suivi temporel". Reims, 2003. http://www.theses.fr/2003REIMS018.
In biology, the relationship between shape, a major element in computer vision, and function has long been emphasized. This thesis proposes a processing pipeline leading to unsupervised shape classification, deformation tracking, and supervised classification of whole populations of objects. We first propose a contribution to unsupervised segmentation based on a fuzzy classification method and two semi-automatic methods founded on fuzzy connectedness and watersheds. Next, we perform a study on several shape descriptors, including primitives and anti-primitives, contour, silhouette, and multi-scale curvature. After shape matching, the descriptors are submitted to statistical analysis to highlight the modes of variation within the samples. The resulting statistical model is the basis of the proposed applications.
Carel, Elodie. "Segmentation de documents administratifs en couches couleur". Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS014/document.
Industrial companies receive huge volumes of documents every day. Through automation, traceability, the feeding of information systems, and the reduction of costs and processing times, dematerialization has a clear economic impact. In order to respect industrial constraints, the traditional digitization process simplifies the images by performing a background/foreground separation. However, this binarization can lead to segmentation and recognition errors. With the improvements in technology, the document analysis community has shown a growing interest in integrating color information into the process to enhance its performance. In order to work within the scope provided by our industrial partner in the digitization flow, an unsupervised segmentation approach was chosen. Our goal is to be able to cope with document images even when they are encountered for the first time, regardless of their content, their structure, and their color properties. To this end, the first issue of this project was to identify a reasonable number of main colors observable in an image. We then aim to group pixels having both close color properties and a logical or semantic unity into consistent color layers. Provided as a set of binary images, these layers may be reinjected into the digitization chain as an alternative to conventional binarization. Moreover, they also provide extra information about colors which could be exploited for segmentation purposes, element spotting, or as a descriptor. We have therefore proposed a spatio-colorimetric approach which produces a set of local regions, known as superpixels, that are perceptually meaningful and whose size is adapted to the content of the document images. These regions are then merged into global color layers by means of a multiresolution analysis.
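The first step, identifying a small palette of main colors, can be sketched with a plain k-means quantization in RGB space. This is a simplified stand-in for the spatio-colorimetric superpixel analysis actually proposed, with a simple deterministic initialization assumed:

```python
import numpy as np

def main_colors(pixels, k, n_iter=30):
    """Estimate k dominant colors of a document image by k-means in RGB space.

    pixels: (n, 3) float array of RGB values. Returns the palette (k cluster
    centers) and the label of each pixel, i.e. the color layer it belongs to.
    """
    # Simple deterministic initialization: k evenly spaced pixels
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)
    for _ in range(n_iter):
        # Assign each pixel to its nearest palette color
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update each palette color as the mean of its layer
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels
```

Each label map, binarized per palette entry, gives one binary "color layer" of the kind described above, which could then be fed back into the digitization chain.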
Le, Thuc Trinh. "Video inpainting and semi-supervised object removal". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT026.
Nowadays, the rapid growth of video creates a massive demand for video-based editing applications. In this dissertation, we solve several problems relating to video post-processing and focus on the application of object removal in video. We divide this task into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill in the damaged regions. For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of convolutional neural networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt segmentation networks designed for still images and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. Besides, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn the dilation layer in the mask, which helps us create suitable masks for the video object removal application. For the video inpainting problem, we divide our work into two categories based on the type of background. In particular, we present a simple motion-guided pixel propagation method to deal with static background cases, showing that the problem of object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method based on the optimization of a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm.
To improve accuracy, we systematically incorporate the optical flow into the overall process. We end up with a video inpainting method which is able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time. Finally, we combine the video object segmentation and video inpainting methods into a unified system that removes undesired objects in videos. To the best of our knowledge, this is the first system of this kind. In our system, the user only needs to approximately delimit, in the first frame, the objects to be edited; this annotation process is facilitated by the use of superpixels. These annotations are then refined and propagated through the video by the video object segmentation method. One or several objects can then be removed automatically using our video inpainting methods. This results in a flexible computational video editing tool, with numerous potential applications ranging from crowd suppression to the correction of unphysical scenes.
Debeir, Olivier. "Segmentation supervisée d'images". Doctoral thesis, Universite Libre de Bruxelles, 2001. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211474.
Pascal, Barbara. "Estimation régularisée d'attributs fractals par minimisation convexe pour la segmentation de textures : formulations variationnelles conjointes, algorithmes proximaux rapides et sélection non supervisée des paramètres de régularisation; Applications à l'étude du frottement solide et de la microfluidique des écoulements multiphasiques". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN042.
In this doctoral thesis, several scale-free texture segmentation procedures based on two fractal attributes, the Hölder exponent, measuring the local regularity of a texture, and the local variance, are proposed. A piecewise homogeneous fractal texture model is built, along with a synthesis procedure providing images composed of aggregated fractal texture patches with known attributes and segmentation. This synthesis procedure is used to evaluate the performance of the proposed methods. A first method, based on the total variation regularization of a noisy estimate of the local regularity, is illustrated and refined thanks to a post-processing step consisting in an iterative thresholding and resulting in a segmentation. After evidencing the limitations of this first approach, two segmentation methods, with either "free" or "co-located" contours, are built, taking into account jointly the local regularity and the local variance. These two procedures are formulated as convex nonsmooth functional minimization problems. We show that the two functionals, with "free" and "co-located" penalizations, are both strongly convex, and we compute their respective strong convexity moduli. Several minimization schemes are derived, and their convergence speeds are compared. The segmentation performance of the different methods is evaluated over a large amount of synthetic data in configurations of increasing difficulty, as well as on real-world images, and compared to state-of-the-art procedures, including convolutional neural networks. An application to the segmentation of images from an experiment on multiphasic flow through a porous medium is presented. Finally, a strategy for the automated selection of the hyperparameters of the "free" and "co-located" functionals is built, inspired by the SURE estimator of the quadratic risk.
Thivin, Solenne. "Détection automatique de cibles dans des fonds complexes. Pour des images ou séquences d'images". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS235/document.
During this PhD, we developed a detection algorithm whose principal objective is to detect small targets in a complex background, such as clouds for example. For this, we use the spatial covariance structure of real images. First, we developed a collection of models for this covariance structure; we then selected a specific model from this collection. Once the model is selected, we apply the likelihood ratio test to detect potential targets. We finally studied the performance of our algorithm by testing it on simulated and real images.
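The detection step can be illustrated by a generic generalized likelihood ratio test for a known target signature in Gaussian clutter with a given spatial covariance. This is an illustration of the principle only, not the thesis's model collection or its selection procedure:

```python
import numpy as np

def glrt_detect(y, s, Sigma, threshold):
    """GLRT for a known target signature s in zero-mean Gaussian background.

    H0: y ~ N(0, Sigma); H1: y ~ N(a*s, Sigma) with unknown amplitude a.
    The statistic is the matched filter output whitened by the background
    covariance; a target is declared when it exceeds the threshold.
    """
    w = np.linalg.solve(Sigma, s)     # Sigma^{-1} s
    t = (w @ y) ** 2 / (s @ w)        # (s' Sigma^{-1} y)^2 / (s' Sigma^{-1} s)
    return t > threshold, t
```

The whitening by `Sigma` is where a model of the spatial covariance of the background, such as the one developed in the thesis, enters the detector.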
Sublime, Jérémie. "Contributions au clustering collaboratif et à ses potentielles applications en imagerie à très haute résolution". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLA005/document.
This thesis presents several algorithms developed in the context of the ANR COCLICO project and contains two main axes. The first axis is concerned with introducing Markov Random Field (MRF) based models to provide a semantically rich algorithm suited to images that are already segmented. This method is based on the Iterated Conditional Modes (ICM) algorithm and can be applied to the segments of very high resolution (VHR) satellite pictures. Our proposed method can cope with highly irregular neighborhood dependencies and provides some low-level semantic information on the clusters and their relationships within the image. The second axis deals with collaborative clustering methods developed with the goal of being applicable to as many clustering algorithms as possible, including the algorithms used in the first axis of this work. A key feature of the methods proposed in this thesis is that they can deal with either of the following two cases: 1) several clustering algorithms working together on the same data represented in different feature spaces; 2) several clustering algorithms looking for similar clusters in different data sets having similar distributions. Clustering algorithms to which these methods are applicable include the ICM algorithm, the K-Means algorithm, density-based algorithms such as DBSCAN, and all Expectation-Maximization (EM) based algorithms such as the Self-Organizing Maps (SOM) and Generative Topographic Mapping (GTM) algorithms. Unlike previously introduced methods, our models have no restrictions in terms of the types of algorithms that can collaborate, do not require all methods to look for the same number of clusters, and are provided with solid mathematical foundations.
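Among the clustering algorithms cited in this abstract, K-Means is the simplest to state. As an illustrative sketch only (not code from the thesis, and with a deliberately naive deterministic initialization), Lloyd's algorithm on 2D points looks like:

```python
def kmeans(points, k, iters=20):
    """Basic Lloyd's algorithm on 2D points (illustrative sketch only)."""
    centers = list(points[:k])  # naive deterministic init: first k points
    clusters = []
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            d2 = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d2.index(min(d2))].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters
```

In the collaborative setting described above, each such algorithm would additionally exchange its partition with the other collaborators between iterations.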
Le, Thuc Trinh. "Video inpainting and semi-supervised object removal". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT026/document.
Nowadays, the rapid increase of video content creates a massive demand for video editing applications. In this dissertation, we solve several problems related to video post-processing and focus on object removal in video. To complete this task, we divide it into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill in the damaged regions. For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of Convolutional Neural Networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt segmentation networks designed for the image case and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. Besides, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn a dilation layer in the mask, which helps create masks suitable for video object removal. For the video inpainting problem, we divide our work into two categories based on the type of background. In particular, we present a simple motion-guided pixel propagation method to deal with static background cases. We show that the problem of object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method based on the optimization of a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm.
To improve accuracy, we systematically incorporate optical flow in the overall process. We end up with a video inpainting method that is able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time. Finally, we combine the video object segmentation and video inpainting methods into a unified system to remove undesired objects from videos. To the best of our knowledge, this is the first system of its kind. In our system, the user only needs to approximately delimit, in the first frame, the objects to be edited. This annotation process is facilitated with the help of superpixels. The annotations are then refined and propagated through the video by the video object segmentation method. One or several objects can then be removed automatically using our video inpainting methods. This results in a flexible computational video editing tool, with numerous potential applications ranging from crowd suppression to the correction of unphysical scenes.
Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation". Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.
Access to 3D images at a reasonable frame rate is widespread now, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is a strong demand to enhance the capability of existing computer vision applications by incorporating 3D information. Indeed, numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for the task of indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from the 3D image and cluster them with methods based on finite statistical mixture models. We adopt the Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel model-based clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application to 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this aim, we first propose a statistical image generation model for RGB-D images. Then, we propose a novel RGB-D segmentation method using a joint color-spatial-axial clustering followed by a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods while requiring less computation time.
Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable to clustering other types of data, such as speech or gene expression data. Moreover, they can be used for complex tasks such as joint image-speech data analysis.
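The clustering of image normals described in this abstract relies on von Mises-Fisher mixtures fitted via Bregman soft clustering. A hard-assignment relative, spherical k-means on unit vectors, conveys the core idea in a few lines; the sketch below is illustrative only and is not the thesis's algorithm:

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def spherical_kmeans(normals, k, iters=15):
    """Hard-assignment clustering of (roughly unit) 3D vectors by cosine
    similarity: mean directions play the role of the von Mises-Fisher
    mean parameters in the soft-clustering formulation."""
    centers = [normalize(normals[i]) for i in range(k)]  # naive deterministic init
    labels = [0] * len(normals)
    for _ in range(iters):
        # Assign each normal to the center with the highest cosine similarity.
        for i, v in enumerate(normals):
            sims = [sum(a * b for a, b in zip(v, c)) for c in centers]
            labels[i] = sims.index(max(sims))
        # Re-estimate each mean direction as the normalized cluster sum.
        for j in range(k):
            members = [normals[i] for i in range(len(normals)) if labels[i] == j]
            if members:
                s = tuple(sum(m[d] for m in members) for d in range(3))
                if any(s):
                    centers[j] = normalize(s)
    return centers, labels
```

Normals sampled from two planar surfaces (e.g. a floor and a wall) would typically separate into two such clusters.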
Nait-Chabane, Ahmed. "Segmentation invariante en rasance des images sonar latéral par une approche neuronale compétitive". Phd thesis, Université de Bretagne occidentale - Brest, 2013. http://tel.archives-ouvertes.fr/tel-00968199.
Faucheux, Cyrille. "Segmentation supervisée d'images texturées par régularisation de graphes". Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4050/document.
In this thesis, we improve a recent image segmentation algorithm based on a graph regularization process. The goal of this method is to compute an indicator function that satisfies regularity and fidelity criteria. Its particularity is to represent images with similarity graphs. This data structure allows relations to be established between similar pixels, leading to non-local processing of the data. In order to improve this approach, we combine it with another non-local one: texture features. Two solutions are developed, both based on Haralick features. In the first one, we propose a new fidelity term, based on the work of Chan and Vese, which is able to evaluate the homogeneity of texture features. In the second one, we propose to replace the fidelity criterion with the output of a supervised classifier. Trained to recognize several textures, the classifier is able to produce a better model of the problem by identifying the most relevant texture features. This method is also extended to multiclass segmentation problems. Both are applied to 2D and 3D textured images.
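Both solutions above rely on Haralick texture features, which are computed from gray-level co-occurrence matrices. As an illustrative sketch (not the thesis's implementation), here is one offset's normalized co-occurrence matrix and the classic "contrast" feature:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for a single pixel offset.

    `img` is a 2D list of integer gray levels in [0, levels)."""
    h, w = len(img), len(img[0])
    M = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                M[img[y][x]][img[ny][nx]] += 1  # count the (level, neighbor level) pair
                total += 1
    return [[v / total for v in row] for row in M]

def contrast(M):
    """Haralick 'contrast' feature of a normalized co-occurrence matrix."""
    return sum(M[i][j] * (i - j) ** 2
               for i in range(len(M)) for j in range(len(M)))
```

A uniform patch yields zero contrast, while a fine checkerboard yields a high one, which is what makes such features discriminative for texture segmentation.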
Le, Hégarat Sylvie. "Classification non supervisée d'images SAR polarimétriques". Paris, ENST, 1996. http://www.theses.fr/1996ENST0024.
Achard, Catherine. "Segmentation en régions non-supervisée par relaxation markovienne". Clermont-Ferrand 2, 1996. http://www.theses.fr/1996CLF21820.
Goubet, Étienne. "Contrôle non destructif par analyse supervisée d'images 3D ultrasonores". Cachan, Ecole normale supérieure, 1999. http://www.theses.fr/1999DENS0011.
Lanchantin, Pierre. "Chaînes de Markov triplets et segmentation non supervisée de signaux". Evry, Institut national des télécommunications, 2006. http://www.theses.fr/2006TELE0012.
The aim of this thesis is to propose original methods for unsupervised signal and image segmentation, based on triplet Markov and partially pairwise Markov models. We first describe different models of increasing generality and develop inference and parameter estimation algorithms in the one-dimensional case (chains). Then we propose and study particular cases of triplet partially Markov chains, starting with a pairwise partially Markov chain model applied to the segmentation of centered Gaussian processes with long-correlation noise. Finally, we propose a triplet Markov chain model adapted to the segmentation of non-stationary hidden processes. We also study the possibilities of extending classical probabilistic models (chains and trees) to an evidential model, in which the posterior hidden process distribution is given by Dempster-Shafer fusion, and to a "fuzzy" model in which the membership function is fuzzy.
Khemiri, Houssemeddine. "Approche générique appliquée à l'indexation audio par modélisation non supervisée". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0055/document.
The amount of available audio data, such as broadcast news archives, radio recordings, music and song collections, podcasts, and various internet media, is constantly increasing. Therefore, many audio indexing techniques have been proposed to help users browse audio documents. Nevertheless, these methods are developed for specific audio content, which makes them unsuitable for simultaneously treating audio streams in which different types of audio documents coexist. In this thesis we report our recent efforts in extending the ALISP approach, developed for speech, as a generic method for audio indexing, retrieval and recognition. The particularity of the ALISP tools is that no textual transcriptions are needed during the learning step: any input speech data is transformed into a sequence of arbitrary symbols, which can then be used for indexing purposes. The main contribution of this thesis is the exploitation of the ALISP approach as a generic method for audio indexing. The proposed system consists of three steps: unsupervised training to acquire the ALISP HMM models, ALISP segmentation of audio data using these models, and a comparison of ALISP symbols using the BLAST algorithm and the Levenshtein distance. The evaluations of the proposed systems are done on the YACAST and other publicly available corpora for several audio indexing tasks.
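The last step above compares ALISP symbol sequences with the Levenshtein distance. For reference, the standard dynamic-programming computation, independent of the ALISP tools themselves, is:

```python
def levenshtein(a, b):
    """Edit distance between two symbol sequences (e.g. ALISP symbol strings)."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion of ca
                           cur[j - 1] + 1,              # insertion of cb
                           prev[j - 1] + (ca != cb)))   # substitution (free if equal)
        prev = cur
    return prev[-1]
```

Two audio segments whose ALISP transcriptions have a small edit distance can then be considered candidate matches.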
Degans, Aude. "Approche non supervisée de l'analyse automatique du sommeil : comparaison d'algorithmes de segmentation". Troyes, 2006. http://www.theses.fr/2006TROY0014.
Our work deals with automatic sleep analysis. For 40 years, the sleep staging rules proposed by Rechtschaffen and Kales remained the international reference in sleep analysis. Despite the compact profile of sleep they provide, they treat sleep as a discrete process and ignore its microstructure. Algorithms for automatic sleep staging emerged, and some researchers proposed new parameters that reflect the continuity and dynamics of the sleep process. But some of these (such as spectral analysis) were developed to be applied to stationary signals; applying them to EEG signals therefore ignores the highly non-stationary character of these signals. That is why we focused our work on adaptive non-parametric segmentation algorithms, which decompose the signal into stationary segments. The unsupervised approach does not allow us to optimize a performance criterion or to identify missed detections and false alarms. Consequently, we developed a methodology for quantifying a correlation measure between the selected algorithms, for which we defined the concept of common detection. The problem is then to maximize the agreement between the algorithms after an optimization of their parameters.
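The segmentation algorithms compared in this thesis decompose an EEG signal into stationary segments. As a toy illustration of the underlying principle (not one of the compared algorithms), a single change point can be located by minimizing the total within-segment squared error:

```python
def best_changepoint(signal):
    """Locate a single change point in a 1D signal by minimizing the total
    within-segment squared error around each candidate split.

    This is the simplest instance of splitting a signal into
    (approximately) stationary segments; real detectors apply such
    criteria recursively or sequentially."""
    n = len(signal)
    best_k, best_cost = 1, float("inf")
    for k in range(1, n):
        left, right = signal[:k], signal[k:]
        ml = sum(left) / len(left)    # mean of the left segment
        mr = sum(right) / len(right)  # mean of the right segment
        cost = (sum((x - ml) ** 2 for x in left)
                + sum((x - mr) ** 2 for x in right))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

Applying such a split recursively to each resulting segment yields a full piecewise-stationary decomposition.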
Jaritz, Maximilian. "2D-3D scene understanding for autonomous driving". Thesis, Université Paris sciences et lettres, 2020. https://pastel.archives-ouvertes.fr/tel-02921424.
In this thesis, we address the challenges of label scarcity and of fusing heterogeneous 3D point clouds and 2D images. We adopt the strategy of end-to-end race driving, where a neural network is trained to directly map sensor input (camera images) to control output, which makes this strategy independent of annotations in the visual domain. We employ deep reinforcement learning, where the algorithm learns from reward through interaction with a realistic simulator. We propose new training strategies and reward functions for better driving and faster convergence. However, training time remains very long, which is why we focus on perception, studying point cloud and image fusion, in the remainder of this thesis. We propose two different methods for 2D-3D fusion. First, we project 3D LiDAR point clouds into the 2D image space, resulting in sparse depth maps. We propose a novel encoder-decoder architecture to fuse dense RGB and sparse depth for the task of depth completion, which enhances point cloud resolution to the image level. Second, we fuse directly in 3D space to prevent the information loss caused by projection. To this end, we compute image features with a 2D CNN over multiple views and then lift them all to a global 3D point cloud for fusion, followed by a point-based network to predict 3D semantic labels. Building on this work, we introduce the more difficult novel task of cross-modal unsupervised domain adaptation, where one is provided with multi-modal data in a labeled source dataset and an unlabeled target dataset. We propose to perform 2D-3D cross-modal learning via mutual mimicking between image and point cloud networks to address the source-target domain shift. We further show that our method is complementary to the existing uni-modal technique of pseudo-labeling.
Nasser, Khalafallah Mahmoud Lamees. "A dictionary-based denoising method toward a robust segmentation of noisy and densely packed nuclei in 3D biological microscopy images". Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS283.pdf.
Cells are the basic building blocks of all living organisms. All living organisms share life processes such as growth and development, movement, nutrition, excretion, reproduction, respiration, and response to the environment. In cell biology research, understanding cell structure and function is essential for developing and testing new drugs. In addition, cell biology research provides a powerful tool to study embryo development and helps the scientific community understand the effects of mutations and various diseases. Time-Lapse Fluorescence Microscopy (TLFM) is one of the most appreciated imaging techniques for live-cell imaging experiments, used to quantify various characteristics of cellular processes, i.e., cell survival, proliferation, migration, and differentiation. In TLFM imaging, not only spatial information is acquired, but also temporal information, obtained by repeated imaging of a labeled sample at specific time points, as well as spectral information, producing up to five-dimensional (X, Y, Z + Time + Channel) images. Typically, the generated datasets consist of several hundreds or thousands of images, each containing hundreds to thousands of objects to be analyzed. To perform high-throughput quantification of cellular processes, nuclei segmentation and tracking should be performed in an automated manner. Nevertheless, nuclei segmentation and tracking are challenging tasks due to embedded noise, intensity inhomogeneity, shape variation, and weak nuclei boundaries. Although several nuclei segmentation approaches have been reported in the literature, dealing with embedded noise remains the most challenging part of any segmentation algorithm. We propose a novel 3D denoising algorithm, based on unsupervised dictionary learning and sparse representation, that can enhance very faint and noisy nuclei while simultaneously detecting nuclei positions accurately.
Furthermore, our method relies on a limited number of parameters, with only one being critical: the approximate size of the objects of interest. The framework of the proposed method comprises image denoising, nuclei detection, and segmentation. In the denoising step, an initial dictionary is constructed by selecting random patches from the raw image, and an iterative technique then updates the dictionary to obtain a final, less noisy one. Next, a detection map, based on the dictionary coefficients used to denoise the image, is used to detect marker points. Afterward, a thresholding-based approach is proposed to obtain the segmentation mask. Finally, a marker-controlled watershed approach produces the final nuclei segmentation result. We generate 3D synthetic images to study the effect of the few parameters of our method on cell nuclei detection and segmentation, and to understand the overall mechanism for selecting and tuning the significant parameters across several datasets. These synthetic images have low contrast and low signal-to-noise ratio; furthermore, they include touching spheres, conditions that simulate the characteristics of the real datasets. The proposed framework shows that integrating our denoising method with a classical segmentation method works properly even in the most challenging cases. To evaluate the performance of the proposed method, two datasets from the Cell Tracking Challenge are extensively tested. Across all datasets, the proposed method achieved very promising results, with 96.96% recall on the C. elegans dataset. Moreover, on the Drosophila dataset, our method achieved a very high recall (99.3%).
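The denoising step above learns a dictionary from random image patches and iteratively updates it. As a loose, illustrative stand-in (k-means over 1D patches rather than the sparse dictionary learning used in the thesis), the patch-based reconstruction idea can be sketched as:

```python
import random

def denoise_1d(signal, patch=5, atoms=4, iters=10, seed=1):
    """Denoise a 1D signal by replacing each patch with its nearest
    'dictionary atom'; atoms are learned by k-means over patches, a crude
    stand-in for sparse dictionary learning."""
    rng = random.Random(seed)
    patches = [signal[i:i + patch] for i in range(len(signal) - patch + 1)]
    dico = [list(p) for p in rng.sample(patches, atoms)]  # random-patch init
    for _ in range(iters):
        # Assign each patch to its nearest atom, then update atoms as means.
        groups = [[] for _ in range(atoms)]
        for p in patches:
            d = [sum((a - b) ** 2 for a, b in zip(p, atom)) for atom in dico]
            groups[d.index(min(d))].append(p)
        for j, g in enumerate(groups):
            if g:
                dico[j] = [sum(col) / len(g) for col in zip(*g)]
    # Reconstruct: average the overlapping nearest atoms at each sample.
    acc, cnt = [0.0] * len(signal), [0] * len(signal)
    for i, p in enumerate(patches):
        d = [sum((a - b) ** 2 for a, b in zip(p, atom)) for atom in dico]
        atom = dico[d.index(min(d))]
        for t, v in enumerate(atom):
            acc[i + t] += v
            cnt[i + t] += 1
    return [a / c for a, c in zip(acc, cnt)]
```

In the actual pipeline, the analogous 3D patch coefficients additionally feed the detection map used to place nuclei markers.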
Huck, Alexis. "Analyse non-supervisée d’images hyperspectrales : démixage linéaire et détection d’anomalies". Aix-Marseille 3, 2009. http://www.theses.fr/2009AIX30036.
This thesis focuses on two research fields regarding the unsupervised analysis of hyperspectral images (HSIs). Under the assumptions of the linear spectral mixing model, the formalism of Non-negative Matrix Factorization (NMF) is investigated for unmixing purposes. We propose judicious spectral and spatial a priori knowledge to regularize the problem. In addition, we propose an estimator of the optimal projected-gradient step size. Thus, suitably regularized NMF is shown to be a relevant approach to unmix HSIs. Then, the problem of anomaly detection is considered. We propose an algorithm for Anomalous Component Pursuit (ACP), simultaneously based on projection pursuit and on a probabilistic model with hypothesis testing. ACP detects anomalies with a constant false alarm rate and discriminates them into spectrally homogeneous classes.
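The NMF unmixing discussed above builds on the factorization V ≈ WH with nonnegativity constraints. As an illustrative baseline only (the thesis studies regularized, projected-gradient variants, not this scheme), the classical Lee-Seung multiplicative updates can be sketched in plain Python:

```python
import random

def matmul(A, B):
    """Dense matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=300, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W H with nonnegative entries.

    The unregularized baseline that regularized NMF variants build on."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(r)]
        # W <- W * (V H^T) / (W H H^T), elementwise
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H
```

In the hyperspectral setting, the columns of W would play the role of endmember spectra and the rows of H their abundance maps; the updates keep every entry nonnegative by construction.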
El, Khoury Elie. "Indexation vidéo non-supervisée basée sur la caractérisation des personnes". Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00515424.
Salzenstein, Fabien. "Modèle markovien flou et segmentation statistique non supervisée d'images". Rennes 1, 1996. http://www.theses.fr/1996REN10194.
Chaari, Anis. "Nouvelle approche d'identification dans les bases de données biométriques basée sur une classification non supervisée". Phd thesis, Université d'Evry-Val d'Essonne, 2009. http://tel.archives-ouvertes.fr/tel-00549395.
Becerra, Elcinto Javier. "Contribution à la segmentation supervisée de données volumiques : modèle perceptuel et développement d'outils interactifs d'aide à l'interprétation d'images sismiques". Bordeaux 1, 2006. http://www.theses.fr/2006BOR13328.
Rousson, Mikaël. "Intégration d'attributs et évolutions de fronts en segmentation d'images". Phd thesis, Université de Nice Sophia-Antipolis, 2004. http://tel.archives-ouvertes.fr/tel-00327560.
Pełny tekst źródłaLa variété des caractéristiques possibles définissant une région d'intérêt est le principal facteur limitant leur généralisation. Ces critères région peuvent être le niveau de gris, la couleur, la texture, la forme des objets, etc...
Dans cette thèse, nous proposons une formulation générale qui permet d'introduire chacune de ces caractéristiques. Plus précisément, nous considérons l'intensité de l'image, la couleur, la texture, le mouvement et enfin, la connaissance a priori sur la forme des objets à extraire. Dans cette optique, nous obtenons un critère probabiliste à partir d'une formulation Bayésienne du problème de la segmentation d'images. Ensuite, une formulation variationnelle équivalente est introduite et la segmentation la plus probable est finalement obtenue par des techniques d'évolutions de fronts. La représentation par ensembles de niveaux est naturellement introduite pour décrire ces évolutions, tandis que les statistiques régions sont estimées en parallèle. Ce cadre de travail permet de traiter naturellement des images scalaires et vectorielles mais des caractéristiques plus complexes sont considérées par la suite. La texture, le mouvement ainsi que l'a priori sur la forme sont traités successivement. Finalement, nous présentons une extention de notre approche aux images de diffusion à résonance magnétique où des champs de densité de probabilité 3D doivent être considérés.