
Dissertations / Theses on the topic 'IMAGE SEGMENTATION TECHNIQUES'


Consult the top 50 dissertations / theses for your research on the topic 'IMAGE SEGMENTATION TECHNIQUES.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Duramaz, Alper. "Image Segmentation Based On Variational Techniques." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607721/index.pdf.

Abstract:
Recently, solutions to the problem of image segmentation and denoising have been developed based on the Mumford-Shah model. The model provides an energy functional, called the Mumford-Shah functional, which should be minimized. Since minimizing the functional directly is difficult, approximate approaches have been proposed. Two such methods are the gradient-flows method and the Chan-Vese active contour method. A performance evaluation in terms of speed shows that the gradient-flows method converges to the boundaries of the smooth parts faster, but for hierarchical four-phase segmentation this method sometimes gives unsatisfactory results. In this work, a fast hierarchical four-phase segmentation method is proposed in which the Chan-Vese active contour method is applied after the gradient-flows method. After the segmentation process, the segmented regions are denoised using diffusion filters. Additionally, for low signal-to-noise-ratio applications, a prefiltering scheme using nonlinear diffusion filters is included in the proposed method. Simulations show that the proposed method provides an effective solution to the image segmentation and denoising problem.
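As a concrete aside, the data term of the piecewise-constant two-phase (Chan-Vese style) model discussed in this abstract can be sketched by alternating mean estimation and pixel reassignment. This is a minimal illustration only: the curve-length regularization and level-set evolution of the actual method are omitted, and the function name and toy image below are our own, not the thesis's.

```python
import numpy as np

def two_phase_means(img, n_iter=20):
    """Alternate mean estimation and pixel reassignment -- the data term
    of the piecewise-constant (Chan-Vese) Mumford-Shah model, with the
    curve-length regularization omitted for brevity."""
    mask = img > img.mean()  # crude initial partition
    c1 = c2 = 0.0
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0      # inside mean
        c2 = img[~mask].mean() if (~mask).any() else 0.0  # outside mean
        new_mask = (img - c1) ** 2 < (img - c2) ** 2      # reassign pixels
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask, c1, c2

# toy image: bright square on a dark, noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.05, (32, 32))
img[8:24, 8:24] += 0.8
mask, c1, c2 = two_phase_means(img)  # mask recovers the square
```

With two well-separated grey levels this alternation converges in a few iterations; the full method additionally evolves a level-set curve so that the region boundary stays regular.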
2

Altinoklu, Metin Burak. "Image Segmentation Based On Variational Techniques." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610415/index.pdf.

Abstract:
In this thesis, image segmentation methods based on the Mumford-Shah variational approach are studied. By obtaining an optimum point of the Mumford-Shah functional, which consists of a piecewise smooth approximate image and a set of edge curves, an image can be decomposed into regions. This piecewise smooth approximate image is smooth inside regions but is allowed to be discontinuous across region boundaries. Unfortunately, because of the irregularity of the Mumford-Shah functional, it cannot be used directly for image segmentation. There are, however, several approaches that approximate the Mumford-Shah functional. In the first approach, suggested by Ambrosio and Tortorelli, the functional is regularized in a special way; the regularized (Ambrosio-Tortorelli) functional is gamma-convergent to the Mumford-Shah functional. In the second approach, the Mumford-Shah functional is minimized in two steps: first, the edge set is held constant and the resulting functional is minimized; second, the edge set is updated using level set methods. This second approximation to the Mumford-Shah functional is known as the Chan-Vese method. In both approaches, the resulting PDEs (Euler-Lagrange equations of the associated functionals) are solved by finite difference methods. In this study, both approaches are implemented in a MATLAB environment, and the overall performance of the algorithms is investigated through computer simulations over a series of images from simple to complicated.
3

Storve, Sigurd. "Kalman Smoothing Techniques in Medical Image Segmentation." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18823.

Abstract:
An existing C++ library for efficient segmentation of ultrasound recordings by means of Kalman filtering, the real-time contour tracking library (RCTL), is used as a building block to implement and assess the performance of different Kalman smoothing techniques: fixed-point, fixed-lag, and fixed-interval smoothing. An experimental smoothing technique based on fusing tracking results with learned mean state estimates at different positions in the heart cycle is also proposed. A set of 29 recordings with ground-truth left-ventricle segmentations provided by a trained medical doctor is used for the performance evaluation. The clinical motivation is to improve the accuracy of automatic left-ventricle tracking, which in turn can improve the automatic measurement of clinically important parameters such as the ejection fraction. The evaluation shows that none of the smoothing techniques offers significant improvement over regular Kalman filtering. For the Kalman smoothing algorithms, this is argued to be a consequence of the way edge-detection measurements are performed internally in the library. The statistical smoother's lack of improvement is explained by too-large interpersonal variations: the mean left-ventricular deformation pattern does not generalize well to individual cases.
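For readers unfamiliar with the smoothers compared here, the fixed-interval case can be illustrated with a scalar Kalman filter followed by a Rauch-Tung-Striebel backward pass. This is a generic textbook sketch, not code from the RCTL library; the function name, model parameters and test signal are our own.

```python
import numpy as np

def kalman_filter_smooth(z, F=1.0, H=1.0, Q=1e-3, R=0.09, x0=0.0, P0=1.0):
    """Scalar Kalman filter plus fixed-interval (Rauch-Tung-Striebel)
    smoother; state and measurement models are 1-D for brevity."""
    n = len(z)
    xf, Pf = np.empty(n), np.empty(n)  # filtered state / variance
    xp, Pp = np.empty(n), np.empty(n)  # one-step predictions
    x, P = x0, P0
    for k in range(n):
        x, P = F * x, F * P * F + Q              # predict
        xp[k], Pp[k] = x, P
        K = P * H / (H * P * H + R)              # Kalman gain
        x = x + K * (z[k] - H * x)               # measurement update
        P = (1.0 - K * H) * P
        xf[k], Pf[k] = x, P
    xs = xf.copy()                               # backward (RTS) pass
    for k in range(n - 2, -1, -1):
        C = Pf[k] * F / Pp[k + 1]
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    return xf, xs

# noisy measurements of a constant level 1.0
rng = np.random.default_rng(1)
z = 1.0 + 0.3 * rng.standard_normal(200)
xf, xs = kalman_filter_smooth(z)
```

Because the fixed-interval smoother uses all measurements, past and future, its error is never larger on average than the forward filter's; the thesis's point is that this theoretical gain did not materialize for the library's contour measurements.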
4

Seemann, Torsten 1973. "Digital image processing using local segmentation." Monash University, School of Computer Science and Software Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/8055.

5

Matalas, Ioannis. "Segmentation techniques suitable for medical images." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339149.

6

Yeo, Si Yong. "Implicit deformable models for biomedical image segmentation." Thesis, Swansea University, 2011. https://cronfa.swan.ac.uk/Record/cronfa42416.

Abstract:
In this thesis, new methods for the efficient segmentation of images are presented. The proposed methods are based on the deformable model approach and can be used efficiently for the segmentation of complex geometries from various imaging modalities. A novel deformable model is presented that is based on a geometrically induced external force field which can be conveniently generalized to arbitrary dimensions. This external force field is based on hypothesized interactions between the relative geometries of the deformable model and the object boundary characterized by the image gradient. The evolution of the deformable model is solved using the level set method so that topological changes are handled automatically. The relative geometrical configurations between the deformable model and the object boundaries contribute to a dynamic vector force field that changes as the deformable model evolves. The geometrically induced dynamic interaction force is shown to greatly improve the deformable model's performance in acquiring complex geometries and highly concave boundaries, and to make the model highly invariant to its initialization. The voxel interactions across the whole image domain provide a global view of the object boundary representation, giving the external force a long attraction range. The bidirectionality of the external force field allows the new deformable model to deal with arbitrary cross-boundary initializations and facilitates the handling of weak edges and broken boundaries. In addition, it is shown that by enhancing the geometrical interaction field with a nonlocal edge-preserving algorithm, the new deformable model can effectively overcome image noise. A comparative study on the segmentation of various geometries with different topologies from both synthetic and real images is provided, and the proposed method is shown to achieve significant improvements over several existing techniques.
A robust framework for the segmentation of vascular geometries is then described. The framework consists of image denoising, optimal object edge representation, and segmentation using an implicit deformable model. The denoising is based on vessel-enhancing diffusion, which smooths out image noise and enhances vessel structures. The image object boundaries are derived using an edge detection technique that produces object edges of single-pixel width. The edge information is then used to derive the geometric interaction field for optimal object edge representation. The vascular geometries are segmented using an implicit deformable model. A region constraint is added to the deformable model, which allows it to get around calcified regions easily and propagate across the vessels to segment the structures efficiently. The presented framework is applied to the accurate segmentation of carotid geometries from medical images. A new segmentation model with a statistical shape prior, formulated in a variational approach, is also presented in this thesis. The proposed model consists of an image attraction force that propagates contours towards image object boundaries, and a global shape force that attracts the model towards similar shapes in the statistical shape distribution. The image attraction force is derived from gradient vector interactions across the whole image domain, which makes the model more robust to image noise, weak edges and initialization. The statistical shape information is incorporated using kernel density estimation, which allows the shape prior model to handle arbitrary shape variations. It is shown that the proposed model with shape prior can be used to segment object shapes from images efficiently.
7

Alazawi, Eman. "Holoscopic 3D image depth estimation and segmentation techniques." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/10517.

Abstract:
Today's 3D imaging techniques offer significant benefits over conventional 2D imaging. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue and headaches after prolonged viewing, as users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space in a different plane (convergence). Holoscopic imaging is a 3D technology, recently developed at Brunel University, that aims to overcome these limitations of current 3D technology. This work is part W4.1 of the 3D VIVANT project, funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques capable of estimating precise 3D depth and applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. favouring algorithms with broad generalization abilities (no constraints are placed on the setting) that are invariant to most appearance-based variations of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise and changes in lighting). Moreover, the techniques should be able to estimate depth information from both types of holoscopic 3D images, i.e. unidirectional and omnidirectional, which give horizontal parallax and full (vertical and horizontal) parallax, respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision, with particular emphasis on automating the thresholding techniques and identifying the cues needed for robust algorithms.
A method for depth-through-disparity feature analysis has been built: the existing correlation between pixels one micro-lens pitch apart is exploited to extract the viewpoint images (VPIs), and the corresponding displacement among the VPIs is exploited to estimate the depth map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting the features used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the performance of the depth estimation in terms of generalization, speed and quality. Due to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration, a novel interpolation technique, is used in this approach to generate super-resolution VPIs: by shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), meaning that the holoscopic 3D image can be converted into a multi-view 3D image pixel format. Both depth accuracy and a fast execution time are achieved, improving the 3D depth map. For a 3D object to be recognized, the related foreground regions and depth map need to be identified; two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were therefore developed.
Both techniques improve on existing methods by being simple to use and fully automatic, producing the interactive 3D depth map without human interaction. The final contribution is a performance evaluation providing an equitable measure of the success of the proposed techniques for foreground object segmentation, interactive 3D depth map creation and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics, and their correlation with the human perception of quality, are used with the help of human participants in a subjective evaluation.
8

Shaffrey, Cian William. "Multiscale techniques for image segmentation, classification and retrieval." Thesis, University of Cambridge, 2003. https://www.repository.cam.ac.uk/handle/1810/272033.

9

Sekkal, Rafiq. "Techniques visuelles pour la détection et le suivi d’objets 2D." Thesis, Rennes, INSA, 2014. http://www.theses.fr/2014ISAR0032/document.

Abstract:
Nowadays, image processing and analysis find application in many fields. For the indoor navigation of a mobile robot (an electric wheelchair), extracting visual landmarks and tracking them is an important step in performing robotic tasks (localization, planning, etc.). In particular, to perform a door-crossing task, it is essential to detect and track automatically all the doors in the environment. Door detection is not an easy task: variations in door state (open or closed), in appearance (the same colour as the walls or a different one) and in position relative to the camera all affect the robustness of the system. On the other hand, tasks such as detecting navigable areas or avoiding obstacles may call for representations enriched with suitable semantics in order to interpret the content of the scene. Segmentation techniques are used to extract pseudo-semantic regions of the image according to several criteria (colour, gradient, texture, etc.); adding the temporal dimension, the regions are then tracked with spatio-temporal segmentation algorithms. This thesis presents contributions addressing these needs. First, a technique for detecting and tracking doors in a corridor environment is proposed: based on dedicated geometric descriptors, the solution gives good results. Next, an original joint multiresolution and hierarchical segmentation technique extracts a pseudo-semantic region representation. Finally, this technique is extended to video sequences so that regions can be tracked through the tracking of their contours. The quality of the results is demonstrated, notably on corridor videos.
10

Celik, Mehmet Kemal. "Digital image segmentation using periodic codings." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/80099.

Abstract:
Digital image segmentation using periodic codings is explored with reference to two applications. First, the application of uniform periodic codings, to the problem of segmenting the in-focus regions in an image from the blurred parts, is discussed. The work presented in this part extends a previous investigation on this subject by considering the leakage effects. The method proposed consists of two stages. In each stage, filtering is done in the spatial frequency domain after uniform grating functions are applied to the images in the spatial domain. Then, algorithms for finding the period and phase of a physical grating are explored for a hybrid optical-digital application of the method. Second, a model for textures as the linear superposition of periodic narrowband components, defined as tones, is proposed. A priori information about the number of the tones, their spatial frequencies, and coefficients is necessary to generate tone and texture indicators. Tone indicators are obtained by filtering the image with complex analytical functions defined by the spatial frequencies of the tones present in the image. A criterion for choosing the dimensions of the filter is also provided. Texture indicators are then generated for each texture in the image by applying the a priori information of the tonal coefficients to the filtered images. Several methods for texture segmentation which employ texture indicators are proposed. Finally, examples which illustrate the characteristics of the method are presented.
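The period-and-phase estimation step mentioned in this abstract can be illustrated, under the simplifying assumption of a single dominant sinusoidal grating in a 1-D signal, by locating the peak of the discrete Fourier spectrum. The function and test signal below are our own illustration, not the thesis's method.

```python
import numpy as np

def grating_period_phase(signal):
    """Estimate the period (in samples) and phase of the dominant
    sinusoidal component of a 1-D signal from its DFT peak."""
    n = len(signal)
    spec = np.fft.rfft(signal - signal.mean())  # remove DC first
    k = int(np.argmax(np.abs(spec[1:])) + 1)    # strongest non-DC bin
    period = n / k
    phase = np.angle(spec[k])
    return period, phase

# synthetic grating: cosine of period 32 samples on a constant offset
x = np.arange(512)
sig = 0.5 + 0.3 * np.cos(2 * np.pi * x / 32)
p, ph = grating_period_phase(sig)   # p == 32.0, ph ~= 0 for a cosine
```

Because 512 is an integer multiple of the period, the grating falls exactly on a DFT bin; for a physical grating whose period is not bin-aligned, peak interpolation between bins would be needed.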
Master of Science
11

López, Mir Fernando. "Advanced techniques in medical image segmentation of the liver." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/59428.

Abstract:
[EN] Image segmentation is, along with multimodal and monomodal registration, the operation with the greatest applicability in medical image processing. Many operations and filters, and many applications and use cases, take the segmentation of an organic tissue as their first step. Liver segmentation in radiological images is, after the brain, the subject on which the highest number of scientific publications can be found. This is due, on the one hand, to the need to keep innovating on existing algorithms and, on the other hand, to its applicability in many situations related to the diagnosis, treatment and monitoring of liver diseases, as well as to clinical planning. In the case of magnetic resonance imaging (MRI), only in recent years have some solutions achieved good results in terms of accuracy and robustness in liver segmentation; however, these algorithms are generally not user-friendly. For computed tomography (CT) scans, more methodologies and solutions have been developed, but it is difficult to find a good trade-off between accuracy and practical clinical use. To improve the state of the art in both cases (MRI and CT), this thesis proposes a common methodology for designing and developing two liver segmentation algorithms for those imaging modalities. The second step has been the validation of both algorithms. For CT images, there are public databases with images segmented manually by experts, which the scientific community uses as a common reference for validating and comparing algorithms. Validation is done by computing similarity coefficients between the manual and the automatic segmentations. This way of validating the accuracy of the algorithms has been followed in this thesis, except for MRI, for which no publicly accessible databases currently exist.
Accordingly, a private database was created in which several expert radiologists manually segmented different patient studies to serve as a reference. This database comprises 17 studies (more than 1,500 images), making the validation of this method on MRI one of the most extensive published to date. In the validation stage, an accuracy above 90% in the Jaccard and Dice coefficients was achieved, in line with the values reported by the vast majority of comparable authors. In general, however, the algorithms proposed in this thesis are better suited to clinical environments: their computational cost is lower, no clinical interaction is required, no initialization is needed for the MRI algorithm, and only a minimal initialization (a single manual seed) is needed for the CT algorithm. This thesis also develops a third contribution that combines the results of liver segmentation in MRI with augmented reality algorithms. Specifically, a real, innocuous study, non-invasive for clinician and patient, was designed and validated, through which it was shown that the use of this technology brings benefits in terms of greater accuracy and less variability, compared with not using it, in a particular case of laparoscopic surgery.
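The Jaccard and Dice coefficients used in this validation have a standard definition for binary segmentation masks; a minimal sketch (with a made-up pair of 8x8 masks standing in for the manual and automatic segmentations) is:

```python
import numpy as np

def dice_jaccard(a, b):
    """Overlap coefficients between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / union
    return dice, jaccard

# hypothetical 'manual' and 'automatic' masks: two 4x4 squares, shifted
manual = np.zeros((8, 8), dtype=bool)
manual[2:6, 2:6] = True            # 16 pixels
auto = np.zeros((8, 8), dtype=bool)
auto[3:7, 2:6] = True              # 16 pixels, shifted one row down
d, j = dice_jaccard(manual, auto)  # intersection 12, union 20 -> 0.75, 0.6
```

The two coefficients are monotonically related (Dice = 2J/(1+J)), so reporting both, as the thesis does, mainly aids comparison with other published results.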
López Mir, F. (2015). Advanced techniques in medical image segmentation of the liver [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59428
THESIS
Award-winning
APA, Harvard, Vancouver, ISO, and other styles
12

Kerwin, Matthew. "Comparison of Traditional Image Segmentation Techniques and Geostatistical Threshold." Thesis, James Cook University, 2006. https://eprints.qut.edu.au/99764/1/kerwin-honours-thesis.pdf.

Full text
Abstract:
A general introduction to image segmentation is provided, including a detailed description of common classic techniques: Otsu’s threshold, k-means and fuzzy c-means clustering; and suggestions of ways in which these techniques have been subsequently modified for special situations. Additionally, a relatively new approach is described, which attempts to address certain exposed failings of the classic techniques listed by incorporating a spatial statistical analysis technique commonly used in geological studies. Results of different segmentation techniques are calculated for various images, and evaluated and compared, with deficiencies explained and suggestions for improvements made.
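Since the abstract takes Otsu's threshold as one of its classic baselines, a minimal NumPy sketch of the method may be useful for reference; the function name and toy data below are illustrative, not taken from the thesis:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the gray level that maximises the between-class variance."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist.astype(float) / hist.sum()      # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(bins))      # cumulative mean up to t
    mu_total = mu[-1]
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # guard against empty classes
    return int(np.argmax(sigma_b))

# bimodal toy "image": dark background (~30) and bright object (~200)
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(30, 5, 5000),
                              rng.normal(200, 5, 5000)]), 0, 255)
t = otsu_threshold(img)
```

For a well-separated bimodal histogram like this one, the maximiser falls between the two modes, splitting the pixels into roughly equal classes.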
APA, Harvard, Vancouver, ISO, and other styles
13

Karmakar, Gour Chandra 1970. "An integrated fuzzy rule-based image segmentation framework." Monash University, Gippsland School of Computing and Information Technology, 2002. http://arrow.monash.edu.au/hdl/1959.1/8752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Dokladal, Petr. "Grey-scale image segmentation : a topological approach." Marne-la-Vallée, 2000. http://www.theses.fr/2000MARN0065.

Full text
Abstract:
This thesis addresses the segmentation of grey-scale images. Two main, independent approaches are explored: the application of graphs to the h-minima operator for the segmentation of multi-class images, and topology-based segmentation applied to computed-tomography angiography of the liver. Existing mathematical models used to describe topology are first presented and discussed from the point of view of their application to image segmentation. The h-minima method is a mathematical morphology technique. Since it was designed to segment two-class images, multi-class images can be segmented by applying the h-minima to the gradient image; extracting the gradient, however, entails an unavoidable loss of information. A modification of the h-minima based on weighted region adjacency graphs is studied: gradient extraction becomes unnecessary, and this approach allows the h-minima to be applied directly to multi-class images. The segmentation of 3-D liver angiography, proposed in the last part of this manuscript, is based on the topology of the liver's vascular system. Since the portal vein has a cycle-free structure, topology-driven segmentation techniques can be designed that guarantee a topologically correct result. Two dual segmentation methods and a skeletonisation method are proposed, and a skeleton-filtering method based on the same approach is studied in order to improve the result.
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Xiaobing. "Automatic image segmentation based on level set approach: application to brain tumor segmentation in MR images." Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001120.pdf.

Full text
Abstract:
The aim of this dissertation is to develop an automatic segmentation of brain tumors from MRI volumes based on the level set technique. The segmentation is "automatic" in that it exploits the fact that the normal brain is symmetrical: locating asymmetrical regions allows the initial contour of the tumor to be estimated. The first step is preprocessing, which corrects the intensity inhomogeneity of the MRI volume and spatially realigns the MRI volumes of the same patient acquired at different times. The inter-hemispheric plane of the brain is then found by maximizing the degree of similarity between one half of the volume and its reflection, and the initial contour of the tumor is extracted from the asymmetry between the two hemispheres. This initial contour is evolved and refined by the level set technique in order to find the real contour of the tumor, with stopping criteria for the evolution proposed from the properties of the tumor. Finally, the contour of the tumor is projected onto the adjacent slices to form new initial contours, and this process is iterated over all slices to obtain the segmentation of the tumor in 3D. The proposed system is used to follow up patients throughout the treatment period, with examinations every four months, allowing the physician to monitor the development of the tumor and to evaluate the effectiveness of the therapy. The method was quantitatively evaluated by comparison with manual tracings by experts, and good results are obtained on real MRI images.
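The symmetry idea described above (find the plane maximising similarity between a half-volume and its reflection) can be sketched in 2-D with NumPy. This is a hedged toy version under a mean-squared-difference score, not the thesis implementation, and the function name is invented:

```python
import numpy as np

def best_symmetry_axis(img):
    """Return the column about which the two image halves are most alike
    (score = negative mean squared difference of mirrored strips)."""
    h, w = img.shape
    best_c, best_score = None, -np.inf
    for c in range(w // 4, 3 * w // 4):        # search the central band only
        half = min(c, w - c)                   # widest strip fitting both sides
        left = img[:, c - half:c].astype(float)
        right = img[:, c:c + half][:, ::-1].astype(float)  # mirrored strip
        score = -np.mean((left - right) ** 2)
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# toy "slice": two bright blobs placed symmetrically about column 32
img = np.zeros((32, 64))
img[10:20, 20:25] = 1.0
img[10:20, 39:44] = 1.0
axis = best_symmetry_axis(img)
```

On a real brain volume the same search would run over candidate plane orientations in 3-D, and a tumor would show up as a residual asymmetry about the detected plane.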
APA, Harvard, Vancouver, ISO, and other styles
16

Abdulhadi, Abdulwanis Abdalla. "Evaluation of spot welding electrodes using digital image processing and image segmentation techniques." Thesis, Liverpool John Moores University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589998.

Full text
Abstract:
Spot welding is used extensively in the automotive industry, and the quality of an individual spot weld is a major concern due to the heavy reliance on spot welds in the manufacture of motor vehicles. The main parameters that control the quality of a spot weld are current, voltage, welding force, welding time, and the quality of the welding electrodes. The condition of the welding electrodes plays a major part in determining the quality of a spot weld: excessive electrode wear during the welding process can weaken the weld nuggets, and as the number of welds increases the electrode tip wears down, so the contact area between electrode tip and workpiece increases. In order to assess the quality of the welding electrodes, a machine vision approach is employed, in which images of the electrode tips are captured in real time and processed using various image-processing algorithms that automatically measure the electrode tip width. Of these steps, the image segmentation algorithm is the most challenging and requires more computer processing power than the boundary filtering and Cullen et al.'s method, which is used to determine the electrode tip width automatically for the automotive industry in real time. The quality of two types of spot welding electrode tips, namely flat-shaped and dome-shaped tips, is assessed here using image processing techniques. For each tip type, a database of 250 images is used to test the performance of the algorithms; the tip width in these 250 images is also determined manually by counting pixels in an image editor such as Microsoft Paint, and excellent agreement is found between the manual and automatic methods. The tip width for an electrode is measured by first grabbing an image showing the electrode.
The electrode in the image is then extracted using an image segmentation algorithm, its boundary is determined and filtered, and Cullen et al.'s method is subsequently applied to the filtered boundary to determine the tip width. A number of image segmentation and boundary filtering algorithms have been used to determine the tip width automatically. For flat-tip electrodes, the combination of region-growing segmentation, the Minimum Perimeter Polygon, and Cullen et al.'s method was capable of automatically determining the tip width for 250 images with a root mean square error of 7.5% of the tip width. For dome-shaped electrodes, the combination of the Snake segmentation algorithm, the Fourier transform, and Cullen et al.'s method achieved a root mean square error of 2.9% of the tip width over 250 images. The author has also proposed and built an active illumination system that captures a backlit image of the electrode's shadow, using a different camera from the one described above. The image is processed using a simple segmentation method, such as the Canny filtering algorithm, to locate the boundary of the electrode tip; the boundary is then processed using the Minimum Perimeter Polygon approach and Cullen et al.'s method to determine the tip width automatically in 200 experimental images. The proposed system determines the tip width with a root mean square error of 3.2% of the total tip width for flat tips and 3% for dome tips.
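As a toy illustration of the final measurement step only (not Cullen et al.'s method itself, which operates on the filtered boundary), the width of a segmented binary silhouette can be read off along one image row; the function name and data below are invented:

```python
import numpy as np

def tip_width(mask, row):
    """Width in pixels of the foreground silhouette along one image row."""
    cols = np.flatnonzero(mask[row])           # columns where the mask is set
    return 0 if cols.size == 0 else int(cols[-1] - cols[0] + 1)

# toy binary silhouette of a flat-tipped electrode, 20 px wide at the tip row
mask = np.zeros((50, 60), dtype=bool)
mask[10:45, 20:40] = True                      # electrode body
w = tip_width(mask, row=44)                    # row nearest the tip
```

In the real pipeline, the accuracy of this measurement depends entirely on how clean the segmented boundary is, which is why the thesis compares several segmentation and boundary-filtering combinations.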
APA, Harvard, Vancouver, ISO, and other styles
17

Pan, Jianjia. "Image segmentation based on the statistical and contour information." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/1004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Gomes, Vicente S. A. "Global optimisation techniques for image segmentation with higher order models." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1334450/.

Full text
Abstract:
Energy minimisation methods are one of the most successful approaches to image segmentation. Typically used energy functions are limited to pairwise interactions due to the increased complexity when working with higher-order functions. However, some important assumptions about objects are not translatable to pairwise interactions. The goal of this thesis is to explore higher order models for segmentation that are applicable to a wide range of objects. We consider: (1) a connectivity constraint, (2) a joint model over the segmentation and the appearance, and (3) a model for segmenting the same object in multiple images. We start by investigating a connectivity prior, which is a natural assumption about objects. We show how this prior can be formulated in the energy minimisation framework and explore the complexity of the underlying optimisation problem, introducing two different algorithms for optimisation. This connectivity prior is useful to overcome the “shrinking bias” of the pairwise model, in particular in interactive segmentation systems. Secondly, we consider an existing model that treats the appearance of the image segments as variables. We show how to globally optimise this model using a Dual Decomposition technique and show that this optimisation method outperforms existing ones. Finally, we explore the current limits of the energy minimisation framework. We consider the cosegmentation task and show that a preference for object-like segmentations is an important addition to cosegmentation. This preference is, however, not easily encoded in the energy minimisation framework. Instead, we use a practical proposal generation approach that allows not only the inclusion of a preference for object-like segmentations, but also to learn the similarity measure needed to define the cosegmentation task. We conclude that higher order models are useful for different object segmentation tasks. 
We show how some of these models can be formulated in the energy minimisation framework. Furthermore, we introduce global optimisation methods for these energies and make extensive use of the Dual Decomposition optimisation approach that proves to be suitable for this type of models.
APA, Harvard, Vancouver, ISO, and other styles
19

Su, Qi, and 蘇琦. "Segmentation and reconstruction of medical images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41897067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Masek, Martin. "Hierarchical segmentation of mammograms based on pixel intensity." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2003.0033.

Full text
Abstract:
Mammography is currently used to screen women in targeted risk classes for breast cancer. Computer assisted diagnosis of mammograms attempts to lower the workload on radiologists by either automating some of their tasks or acting as a second reader. The task of mammogram segmentation based on pixel intensity is addressed in this thesis. The mammographic process leads to images where intensity in the image is related to the composition of tissue in the breast; it is therefore possible to segment a mammogram into several regions using a combination of global thresholds, local thresholds and higher-level information based on the intensity histogram. A hierarchical view is taken of the segmentation process, with a series of steps that feed into each other. Methods are presented for segmentation of: 1. image background regions; 2. skin-air interface; 3. pectoral muscle; and 4. segmentation of the database by classification of mammograms into tissue types and determining a similarity measure between mammograms. All methods are automatic. After a detailed analysis of minimum cross-entropy thresholding, multi-level thresholding is used to segment the main breast tissue from the background. Scanning artefacts and high intensity noise are separated from the breast tissue using binary image operations, rectangular labels are identified from the binary image by their shape, the Radon transform is used to locate the edges of tape artefacts, and a filter is used to locate vertical running roller scratching. Orientation of the image is determined using the shape of the breast and properties of the breast tissue near the breast edge. Unlike most existing orientation algorithms, which only distinguish between left facing or right facing breasts, the algorithm developed determines orientation for images flipped upside down or rotated onto their side and works successfully on all images of the testing database. 
Orientation is an integral part of the segmentation process, as skin-air interface and pectoral muscle extraction rely on it. A novel way to view the skin-line on the mammogram is as two sets of functions, one set with the x-axis along the rows, and the other with the x-axis along the columns. Using this view, a local thresholding algorithm, and a more sophisticated optimisation based algorithm are presented. Using fitted polynomials along the skin-air interface, the error between polynomial and breast boundary extracted by a threshold is minimised by optimising the threshold and the degree of the polynomial. The final fitted line exhibits the inherent smoothness of the polynomial and provides a more accurate estimate of the skin-line when compared to another established technique. The edge of the pectoral muscle is a boundary between two relatively homogenous regions. A new algorithm is developed to obtain a threshold to separate adjacent regions distinguishable by intensity. Taking several local windows containing different proportions of the two regions, the threshold is found by examining the behaviour of either the median intensity or a modified cross-entropy intensity as the proportion changes. Image orientation is used to anchor the window corner in the pectoral muscle corner of the image and straight-line fitting is used to generate a more accurate result from the final threshold. An algorithm is also presented to evaluate the accuracy of different pectoral edge estimates. Identification of the image background and the pectoral muscle allows the breast tissue to be isolated in the mammogram. The density and pattern of the breast tissue is correlated with 1. Breast cancer risk, and 2. Difficulty of reading for the radiologist. Computerised density assessment methods have in the past been feature-based, a number of features extracted from the tissue or its histogram and used as input into a classifier. 
Here, histogram distance measures have been used to classify mammograms into density types, and also to order the image database according to image similarity. The advantage of histogram distance measures is that they are less reliant on the accuracy of segmentation and the quality of extracted features, as the whole histogram is used to determine distance rather than being quantified into a set of features. Existing histogram distance measures have been applied, and a new histogram distance is presented, showing higher accuracy than other such measures and also better performance than an established feature-based technique.
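The thesis's own histogram distance is not reproduced in the abstract; as a generic stand-in for the idea, the chi-squared distance between normalised intensity histograms can rank images by tissue similarity. The data and names below are toy assumptions:

```python
import numpy as np

def hist_distance(h1, h2, eps=1e-12):
    """Chi-squared distance between two normalised histograms, in [0, 1]."""
    p = h1 / (h1.sum() + eps)
    q = h2 / (h2.sum() + eps)
    return 0.5 * float(np.sum((p - q) ** 2 / (p + q + eps)))

def toy_hist(mean, std, seed):
    """Intensity histogram of a synthetic 'tissue' sample."""
    vals = np.random.default_rng(seed).normal(mean, std, 10000)
    return np.histogram(vals, bins=64, range=(0, 256))[0].astype(float)

dense_a = toy_hist(180, 10, seed=1)   # two dense-looking samples...
dense_b = toy_hist(175, 12, seed=2)
fatty = toy_hist(80, 15, seed=3)      # ...and one fatty-looking sample

d_same = hist_distance(dense_a, dense_b)
d_diff = hist_distance(dense_a, fatty)
```

Ordering a database by such a distance needs no feature extraction at all, which is exactly the robustness argument made in the abstract.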
APA, Harvard, Vancouver, ISO, and other styles
21

Rousson, Mikaël. "Cue integration and front evolution in image segmentation." Nice, 2004. http://www.theses.fr/2004NICE4100.

Full text
Abstract:
Automatic detection and extraction of regions of interest within an image is a key step in image understanding. Most segmentation approaches in the literature are restricted to a particular class of images: targeting particular applications, algorithms are centred on the most relevant cue, and the limiting factor in obtaining a general algorithm is the large variety of cues available to characterize a region of interest, be it grey level, colour, texture, shape, and so on. In this thesis, we propose a general formulation able to deal with each of these characteristics; image intensity, colour, texture, motion, and prior shape knowledge are all considered. For this purpose, a probabilistic criterion is obtained from a Bayesian formulation of the segmentation problem. Reformulated as an energy minimization, the most probable image partition is then obtained using front evolution techniques. Level set functions are naturally introduced to represent the evolving fronts, while region statistics are estimated in parallel. This framework naturally handles scalar and vector-valued smooth images, but more complex cues are also integrated: texture and motion features, as well as prior shape knowledge, are introduced successively. Finally, we present an extension of our approach to diffusion magnetic resonance images, where 3D probability density fields must be considered.
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Sam J. "Low bit-rate image and video compression using adaptive segmentation and quantization." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Huang, Guo Heng. "On-line video object segmentation using superpixel approach." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Chen, Zhuo, and 陳卓. "A split-and-merge approach for quadrilateral-based image segmentation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38749440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Tan, Zhigang, and 譚志剛. "A region merging methodology for color and texture image segmentation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43224143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Stein, Andrew Neil. "Adaptive image segmentation and tracking : a Bayesian approach." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/13397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Vergés, Llahí Jaume. "Color Constancy and Image Segmentation Techniques for Applications to Mobile Robotics." Doctoral thesis, Universitat Politècnica de Catalunya, 2005. http://hdl.handle.net/10803/6189.

Full text
Abstract:
This thesis endeavours to provide a set of techniques for facing the problem of colour variation in images taken from a mobile platform, caused by changes in lighting conditions among several views of a scene taken at different instants and positions. It also treats the problem of segmenting colour images in order to use them in tasks associated with the capacities of a mobile robot, such as object identification or image retrieval from a large database.

To carry out these goals, the transformation among colours due to light variations is first established mathematically. A continuous model for the generation of the colour signal is proposed as a natural generalization of former models. In this way, the conditions for the existence, uniqueness, and good behaviour of the solutions can be studied mathematically in great generality, and any mapping between colours can be expressed independently of the discretization scheme applied. The intimate relation between the problem of colour invariance and that of spectral recovery is thus made evident, and is also studied in practice. The developed model is numerically contrasted with least-squares linear regression in terms of prediction errors.

Once the general model is established, a simplified linear version is chosen for the practical calculations, lightening their number. In particular, the proposed method is based on finding the likeliest transformation between two images from the calculation of a set of feasible transformations and the estimation of the frequency and effectiveness degree of each of them. The best candidate is then selected according to its likelihood, and the resulting mapping transforms the image colours as they would be seen under the canonical light.

Once the image colours of a scene are kept constant, they must be segmented to extract the information corresponding to regions of homogeneous colour. This thesis suggests an algorithm based on partitioning the minimum spanning tree of an image through a local measure of the likelihood of the unions between components. The idea is to arrive at a segmentation coherent with the real regions, a trade-off between partitions with many components (oversegmented) and those with very few (subsegmented).

Another goal is an algorithm fast enough to be useful in applications of mobile robotics. This characteristic is attained by a local approach to region growing, even though the result still shows global features (colour). The possible oversegmentation is softened thanks to the probabilistic factor introduced.

The segmentation algorithm should also generate segmentations that are stable over time. The aforementioned algorithm has therefore been extended with an intermediate step between segmentations that relates similar regions in different images and propagates forwards the regroupings of regions made in previous images: if some regions in one image are grouped into a single larger region, the corresponding regions in the following image are grouped together as well. In this way, two consecutive segmentations resemble each other, and the segmentation of a sequence is kept stable.

Finally, the problem of comparing images through their content is studied, focusing on colour information: besides investigating which distance between segmentations best suits our aims, it is also shown how colour constancy affects segmentations. The results obtained for each of the goals proposed in this thesis support the defended points of view and show the utility of the algorithms, as well as of the colour model, both for spectral recovery and for the explicit calculation of the transformations among colours.
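The minimum-spanning-tree partitioning idea can be illustrated, much simplified, on a 1-D chain of pixel values using Kruskal-style union-find merging. The thesis uses a probabilistic local merge criterion; the fixed threshold `k` below is an assumption made for brevity:

```python
class DSU:
    """Union-find over component labels."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def mst_segment(values, k=10):
    """Merge neighbouring pixels whose colour difference is at most k.
    On a 1-D chain, the neighbour edges are exactly the MST edges."""
    n = len(values)
    edges = sorted((abs(values[i + 1] - values[i]), i, i + 1)
                   for i in range(n - 1))
    dsu = DSU(n)
    for w, a, b in edges:                 # lightest (most similar) edges first
        if w <= k:
            dsu.union(a, b)
    labels = [dsu.find(i) for i in range(n)]
    remap = {}                            # relabel components 0..m-1 in order
    return [remap.setdefault(l, len(remap)) for l in labels]

row = [10, 12, 11, 13, 80, 82, 81, 200, 198]
seg = mst_segment(row, k=10)
```

Replacing the `w <= k` test with a probabilistic decision based on the statistics of the two components is what turns this toy into the over/undersegmentation trade-off described above.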
APA, Harvard, Vancouver, ISO, and other styles
28

Awadallah, Mahmoud Sobhy Tawfeek. "Image Analysis Techniques for LiDAR Point Cloud Segmentation and Surface Estimation." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73055.

Full text
Abstract:
Light Detection And Ranging (LiDAR), as well as many other applications and sensors, involves segmenting sparse sets of points (point clouds) for which point density is the only discriminating feature. The segmentation of these point clouds is challenging for several reasons, including the fact that the points are not associated with a regular grid. Moreover, the presence of noise, particularly impulsive noise with varying density, can make it difficult to obtain a good segmentation using traditional techniques, including the algorithms that had been developed to process LiDAR data. This dissertation introduces novel algorithms and frameworks based on statistical techniques and image analysis in order to segment and extract surfaces from sparse noisy point clouds. We introduce an adaptive method for mapping point clouds onto an image grid followed by a contour detection approach that is based on an enhanced version of region-based Active Contours Without Edges (ACWE). We also propose a noise reduction method using a Bayesian approach and incorporate it, along with other noise reduction approaches, into a joint framework that produces robust results. We combine the aforementioned techniques with a statistical surface refinement method to introduce a novel framework to detect ground and canopy surfaces in micropulse photon-counting LiDAR data. The algorithm is fully automatic and uses no prior elevation or geographic information to extract surfaces. Moreover, we propose a novel segmentation framework for noisy point clouds in the plane based on a Markov random field (MRF) optimization that we call Point Cloud Density-based Segmentation (PCDS). We also developed a large synthetic dataset of in-plane point clouds that includes either a set of randomly placed, sized and oriented primitive objects (circle, rectangle and triangle) or an arbitrary shape that forms a simple approximation of the LiDAR point clouds.
The experiments performed on a large number of real LiDAR and synthetic point clouds showed that our proposed frameworks and algorithms outperform the state-of-the-art algorithms in terms of segmentation accuracy and surface RMSE.
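The grid-mapping step described in this abstract can be illustrated with a minimal, non-adaptive sketch (the function name `points_to_density_grid` and the fixed binning are illustrative choices, not the dissertation's adaptive method): points are binned onto a regular grid so that density becomes an ordinary pixel feature that standard segmentation tools can process.

```python
import numpy as np

def points_to_density_grid(points, n_bins=32):
    """Map a sparse 2D point cloud onto a regular image grid.

    Each cell value counts the points falling in it, turning point
    density into an image feature usable by thresholding or
    contour-based segmentation.
    """
    pts = np.asarray(points, dtype=float)
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=n_bins)
    return counts

# Dense cluster plus sparse background noise.
rng = np.random.default_rng(0)
cluster = rng.normal(0.5, 0.05, size=(500, 2))
noise = rng.uniform(0, 1, size=(50, 2))
grid = points_to_density_grid(np.vstack([cluster, noise]), n_bins=16)
mask = grid > grid.mean()  # crude density threshold
```

A real pipeline would follow this with contour detection on `grid`; the threshold here merely shows that the dense region becomes separable once mapped onto the grid.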
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
29

Lin, Xiangbo. "Knowledge-based image segmentation using deformable registration: application to brain MRI images." Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001121.pdf.

Full text
Abstract:
The research goal of this thesis is a contribution to the intra-modality inter-subject non-rigid medical image registration and the segmentation of 3D brain MRI images in normal cases. The well-known Demons non-rigid algorithm is studied, where the image intensities are used as matching features. A new force computation equation is proposed to solve the mismatch problem in some regions. The efficiency is shown through numerous evaluations on simulated and real data. For intensity-based inter-subject registration, normalizing the image intensities is important for satisfying the intensity correspondence requirements. A non-rigid registration method combining both intensity and spatial normalizations is proposed. Topology constraints are introduced in the deformable model so that the registration remains homeomorphic. The solution comes from the correction of displacement points with negative Jacobian determinants. Based on the registration, a segmentation method for the internal brain structures is studied. The basic principle is to build an ontology modelling prior shape knowledge of the target internal structures. The shapes are represented by a unified distance map computed from the atlas and the deformed atlas, and then integrated into the similarity metric of the cost function. A balance parameter is used to adjust the contributions of the intensity and shape measures. The influence of different parameters of the method and comparisons with other registration methods were performed. Very good results are obtained on the segmentation of different internal structures of the brain such as the central nuclei and hippocampus.
APA, Harvard, Vancouver, ISO, and other styles
30

Gundersen, Henrik Mogens, and Bjørn Fossan Rasmussen. "An Application of Image Processing Techniques for Enhancement and Segmentation of Bruises in Hyperspectral Images." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9594.

Full text
Abstract:

Hyperspectral images contain vast amounts of data which can provide crucial information to applications within a variety of scientific fields. Increasingly powerful computer hardware has made it possible to efficiently treat and process hyperspectral images. This thesis is interdisciplinary and focuses on applying known image processing algorithms to a new problem domain, involving bruises on human skin in hyperspectral images. Currently, no research regarding image detection of bruises on human skin has been uncovered. However, several articles have been written on hyperspectral bruise detection on fruits and vegetables. Ratio, difference and principal component analysis (PCA) were commonly applied enhancement algorithms within this field. The three algorithms, in addition to K-means clustering and the watershed segmentation algorithm, have been implemented and tested through a batch application developed in C# and MATLAB. The thesis seeks to determine if the enhancement algorithms can be applied to improve bruise visibility in hyperspectral images for visual inspection. In addition, it also seeks to answer if the enhancements provide a better segmentation basis. Known spectral characteristics form the experimentation basis in addition to identification through visual inspection. To this end, a series of experiments were conducted. The tested algorithms provided a better description of the bruises, the extent of the bruising, and the severity of damage. However, the algorithms tested are not considered robust for consistency of results. It is therefore recommended that the image acquisition setup be standardised for all future hyperspectral images. A larger, more varied data set would increase the statistical power of the results, and improve test conclusion validity. Results indicate that the ratio, difference, and principal component analysis (PCA) algorithms can enhance bruise visibility for visual analysis.
However, images that contained weakly visible bruises did not show significant improvements in bruise visibility. Non-visible bruises were not made visible using the enhancement algorithms. Results from the enhancement algorithms were segmented and compared to segmentations of the original reflectance images. The enhancement algorithms provided results that gave more accurate bruise regions using K-means clustering and the watershed segmentation. Both segmentation algorithms gave the overall best results using principal components as input. Watershed provided less accurate segmentations of the input from the difference and ratio algorithms.
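The PCA-enhancement-then-clustering pipeline described above can be sketched in a few lines of NumPy. This is a generic illustration with invented names (`pca_bands`, `kmeans_labels`) and a toy data cube, not the authors' C#/MATLAB batch application:

```python
import numpy as np

def pca_bands(cube, n_components=3):
    """Project a (rows, cols, bands) hyperspectral cube onto its
    leading principal components to enhance contrast."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    Xc = X - X.mean(axis=0)
    # SVD of the centred data gives the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n_components].T).reshape(h, w, n_components)

def kmeans_labels(features, k=2, n_iter=20):
    """Minimal k-means on per-pixel feature vectors."""
    X = features.reshape(-1, features.shape[-1])
    # Farthest-point initialisation keeps the sketch deterministic.
    centres = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[d.argmax()])
    centres = np.array(centres)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels.reshape(features.shape[:2])

# Toy cube: a "bruised" square differs from background in a few bands.
cube = np.ones((20, 20, 10))
cube[5:15, 5:15, 3:6] = 0.2
cube += np.random.default_rng(1).normal(0, 0.01, cube.shape)
labels = kmeans_labels(pca_bands(cube, 3), k=2)
```

On this toy cube the bruised square separates cleanly; the thesis's point is precisely that real skin images are far less forgiving than such synthetic data.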

APA, Harvard, Vancouver, ISO, and other styles
31

Gopalan, Sowmya. "Estimating Columnar Grain Size in Steel-Weld Images using Image Processing Techniques." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250621610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Martin, Vincent. "Cognitive vision : supervised learning for image and video segmentation." Nice, 2007. http://www.theses.fr/2007NICE4067.

Full text
Abstract:
In this thesis, we address the problem of image and video segmentation with a cognitive vision approach. More precisely, we study two major issues of the segmentation task in vision systems: the selection of an algorithm and the tuning of its free parameters according to the image contents and the application needs. We propose a learning-based methodology to easily set up the algorithms and continuously adapt the segmentation task. Our first contribution is a generic optimization procedure to automatically extract optimal algorithm parameters. The evaluation of the segmentation quality is done w.r.t. a reference segmentation. In this way, the user's task is reduced to providing reference data for training images, such as manual segmentations. A second contribution is a twofold strategy for the algorithm selection issue. This strategy relies on a training image set representative of the problem. The first part uses the results of the optimization stage to perform a global ranking of algorithm performance values. The second part consists in identifying different situations from the training image set and then associating a tuned segmentation algorithm with each situation. A third contribution is a semantic approach to image segmentation. In this approach, we combine the results from the previously (bottom-up) optimized segmentations with a region labelling process. Region labels are given by region classifiers which are trained from annotated samples. A fourth contribution is the implementation of the approach and the development of a graphical tool currently able to carry out the learning of segmentation knowledge (context modelling and learning, automatic parameter optimization, region annotations, region classifier training, and algorithm selection) and to use this knowledge to perform adaptive segmentation. We have tested our approach on two real-world applications: a biological application (pest counting on rose leaves), and video surveillance applications.
For the first one, the proposed adaptive segmentation approach outperforms a non-adaptive segmentation in terms of segmentation quality and thus allows the vision system to count the pests with better precision. For the video application, the main contribution of the proposed approach takes place at the context modelling level. By achieving dynamic background model selection based on spatio-temporal context analysis, our approach enlarges the scope of surveillance applications to highly variable environments (e.g., outdoor sequences of several hours).
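The "optimization procedure to automatically extract optimal algorithm parameters" from the abstract above can be caricatured with a one-parameter example (the names `dice` and `tune_threshold`, the threshold algorithm, and the grid search are illustrative assumptions, not the thesis's procedure): a segmentation parameter is tuned by scoring each candidate against a user-supplied reference mask.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect match)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def tune_threshold(image, reference, candidates):
    """Pick the threshold whose segmentation best matches the
    reference mask: a one-parameter instance of reference-driven
    parameter optimisation."""
    scores = [dice(image > t, reference) for t in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Synthetic image: bright object on a dark background, plus noise.
rng = np.random.default_rng(0)
reference = np.zeros((32, 32), dtype=bool)
reference[8:24, 8:24] = True
image = reference * 0.8 + rng.normal(0, 0.05, reference.shape)
t_best, score = tune_threshold(image, reference, np.linspace(0.1, 0.7, 13))
```

The thesis generalises this idea to multi-parameter algorithms and to selecting among several algorithms, but the loop structure (segment, score against reference, keep the best) is the same.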
APA, Harvard, Vancouver, ISO, and other styles
33

Gover, Tobin. "Low bit rate imaging coding based on segmentation and vector techniques." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Batista, Neto Joao Do Espirito Santo. "Techniques for computer-based anatomical segmentation of the brain using MRI." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kang, Jung Won. "Effective temporal video segmentation and content-based audio-visual video clustering." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/13731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Tran, Minh Tue. "Pixel and patch based texture synthesis using image segmentation." University of Western Australia. School of Computer Science and Software Engineering, 2010. http://theses.library.uwa.edu.au/adt-WU2010.0030.

Full text
Abstract:
[Truncated abstract] Texture exists all around us and serves as an important visual cue for the human visual system. Captured within an image, we identify texture by its recognisable visual pattern. It carries extensive information and plays an important role in our interpretation of a visual scene. The subject of this thesis is texture synthesis, which is defined as the creation of a new texture that shares the fundamental visual characteristics of an existing texture such that the new image and the original are perceptually similar. Textures are used in computer graphics, computer-aided design, image processing and visualisation to produce realistic recreations of what we see in the world. For example, the texture on an object communicates its shape and surface properties in a 3D scene. Humans can discriminate between two textures and decide on their similarity in an instant, yet, achieving this algorithmically is not a simple process. Textures range in complexity and developing an approach that consistently synthesises this immense range is a difficult problem to solve and motivates this research. Typically, texture synthesis methods aim to replicate texture by transferring the recognisable repeated patterns from the sample texture to synthesised output. Feature transferal can be achieved by matching pixels or patches from the sample to the output. As a result, two main approaches, pixel-based and patch-based, have established themselves in the active field of texture synthesis. This thesis contributes to the present knowledge by introducing two novel texture synthesis methods. Both methods use image segmentation to improve synthesis results. ... The sample is segmented and the boundaries of the middle patch are confined to follow segment boundaries. This prevents texture features from being cut off prematurely, a common artifact of patch-based results, and eliminates the need for patch boundary comparisons that most other patch-based synthesis methods employ.
Since no user input is required, this method is simple and straightforward to run. The tiling of pre-computed tile pairs allows outputs that are large relative to the sample size to be generated quickly. Output results show great success for textures with stochastic and semi-stochastic clustered features, but future work is needed to suit more highly structured textures. Lastly, these two texture synthesis methods are applied to the areas of image restoration and image replacement. These two areas of image processing involve replacing parts of an image with synthesised texture and are often referred to as constrained texture synthesis. Images can contain a large amount of complex information, therefore replacing parts of an image while maintaining image fidelity is a difficult problem to solve. The texture synthesis approaches and constrained synthesis implementations proposed in this thesis achieve successful results comparable with present methods.
APA, Harvard, Vancouver, ISO, and other styles
37

Jayasuriya, Surani Anuradha. "Application of Symmetry Information in Magnetic Resonance Brain Image Segmentation." Thesis, Griffith University, 2013. http://hdl.handle.net/10072/366576.

Full text
Abstract:
Advances in neuroimaging techniques have facilitated the study of anatomical and functional changes in the brain. In order to assist precise diagnosis and treatment, automatic image analysis methods that provide quantitative measures are of great research interest. Accurate brain tissue segmentation of images has been one of the most important research areas for several years. It is an important initial step in neuroimage analysis for applications such as diagnosis of various brain diseases, treatment planning, and studies of various neurological disorders such as Alzheimer's disease, Schizophrenia, and Multiple sclerosis (MS). However, all these potential applications are crucially dependent on the high accuracy of brain tissue segmentation. Accurate segmentation of MR brain images is difficult since these images contain various noise artifacts. Despite the extensive research, automated analysis of neuroimages still remains a challenging problem. Recently, attention has been turned towards integration of prior knowledge based on anatomical features to improve the accuracy. Based on the fact that the brain exhibits a high level of bilateral symmetry, in this thesis, I explore and discuss the importance of symmetry in the context of tissue classification in MRI, and develop a symmetry-based paradigm for automatic segmentation of brain tissues. Such a classification is motivated by potential radiological applications in assessing brain tissue volume, diagnosis of various brain diseases and treatment planning. The aim of this work is two-fold: First, identifying the location of the symmetry axis or symmetry plane becomes imperative. Accurate identification is crucial as it is valuable for the correction of possible misalignment of radiological scans and for symmetry evaluation. In the second stage, automatic classification of brain tissues is done.
In other words, the first part of this research focuses on finding the symmetry axis/plane, and the second part develops a segmentation method based on symmetry information.
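The first stage described above, locating the symmetry axis, can be sketched in 2D by sliding the mirrored image over the original and scoring the overlap. This is a minimal illustration on a synthetic slice (the function name `symmetry_axis` and the mean-squared-error score are assumptions, not the thesis's method, which works on 3D MRI):

```python
import numpy as np

def symmetry_axis(image):
    """Locate the vertical symmetry axis of a 2D slice by sliding the
    left-right mirrored image over the original and minimising the
    mean squared difference."""
    h, w = image.shape
    flipped = image[:, ::-1]
    best_shift, best_score = 0, -np.inf
    for s in range(-w // 2, w // 2 + 1):
        shifted = np.roll(flipped, s, axis=1)
        score = -np.mean((image - shifted) ** 2)
        if score > best_score:
            best_shift, best_score = s, score
    # A mirror-plus-shift of s means the axis sits at column (w - 1 + s) / 2.
    return (w - 1 + best_shift) / 2.0

# Symmetric synthetic "head" whose midline is off-centre at column 12.
img = np.zeros((16, 32))
cols = np.arange(32)
img += np.exp(-((cols - 12) ** 2) / 20.0)  # broadcast over rows
axis = symmetry_axis(img)
```

The recovered axis can then be used, as the abstract notes, to correct scan misalignment before the symmetry information feeds into tissue classification.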
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
38

Surma, David Ray 1963. "Design and performance evaluation of parallel architectures for image segmentation processing." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277042.

Full text
Abstract:
The design of parallel architectures to perform image segmentation processing is given. In addition, the various designs are evaluated as to their performance, and a discussion of an optimal design is given. In this thesis, a set of eight segmentation algorithms has been provided as a starting point. Four of these algorithms will be evaluated and partitioned using two techniques. From this study of partitioning and considering the data flow through the total system, architectures utilizing parallel techniques will be derived. Timing analysis using pen and paper techniques will be given on the architectures using three of today's current technologies. Next, NETWORK II.5 simulations will be run to provide performance measures. Finally, evaluations of the various architectures will be made as well as the applicability of using NETWORK II.5 as a simulation language.
APA, Harvard, Vancouver, ISO, and other styles
39

Sandhu, Romeil Singh. "Statistical methods for 2D image segmentation and 3D pose estimation." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37245.

Full text
Abstract:
The field of computer vision focuses on the goal of developing techniques to exploit and extract information from underlying data that may represent images or other multidimensional data. In particular, two well-studied problems in computer vision are the fundamental tasks of 2D image segmentation and 3D pose estimation from a 2D scene. In this thesis, we first introduce two novel methodologies that attempt to independently solve 2D image segmentation and 3D pose estimation separately. Then, by leveraging the advantages of certain techniques from each problem, we couple both tasks in a variational and non-rigid manner through a single energy functional. Thus, the three theoretical components and contributions of this thesis are as follows: Firstly, a new distribution metric for 2D image segmentation is introduced. This is employed within the geometric active contour (GAC) framework. Secondly, a novel particle filtering approach is proposed for the problem of estimating the pose of two point sets that differ by a rigid body transformation. Thirdly, the two techniques of image segmentation and pose estimation are coupled in a single energy functional for a class of 3D rigid objects. After laying the groundwork and presenting these contributions, we then turn to their applicability to real world problems such as visual tracking. In particular, we present an example where we develop a novel tracking scheme for 3-D Laser RADAR imagery. However, we should mention that the proposed contributions are solutions for general imaging problems and therefore can be applied to medical imaging problems such as extracting the prostate from MRI imagery
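The pose problem in the second contribution, aligning two point sets that differ by a rigid body transformation, is solved in the thesis with a particle filter; for known correspondences the classical closed-form baseline is the Kabsch/Procrustes solution. A sketch of that baseline (the function name `rigid_align` is an assumption):

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form least-squares rotation and translation (Kabsch)
    taking point set P onto Q, assuming known correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 3D rotation + translation from noiseless points.
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
```

A particle filter becomes necessary precisely when correspondences are unknown or the data are noisy and multimodal, which is the setting the dissertation addresses.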
APA, Harvard, Vancouver, ISO, and other styles
40

Kouzana, Amira. "Conception d'un cadre d'optimisation de fonctions d'énergies : application au traitement d'images." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1121/document.

Full text
Abstract:
We propose a new formulation of the energy minimisation paradigm for image segmentation. The segmentation problem is modeled as a non-cooperative strategic game, and the optimization process is interpreted as the search for a Nash equilibrium. The problem is expressed as a combinatorial problem, for which an efficient Branch and Bound algorithm is proposed to solve it exactly. To illustrate the performance of the proposed framework, it is applied to a convex regularization model, as well as to non-convex regularized segmentation models.
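The combinatorial problem behind such energy models can be made concrete on a toy 1D signal: exhaustive search over binary labellings is the brute-force baseline that a branch-and-bound scheme like the one above would prune. This sketch (names `energy`, `minimise_exact` and the quadratic-plus-boundary energy are illustrative assumptions, not the thesis's model) shows the energy being minimised exactly:

```python
import itertools
import numpy as np

def energy(labels, data, lam):
    """Data fidelity plus a boundary penalty weighted by lam."""
    fidelity = np.sum((np.asarray(labels) - data) ** 2)
    boundaries = sum(a != b for a, b in zip(labels, labels[1:]))
    return fidelity + lam * boundaries

def minimise_exact(data, lam):
    """Exhaustive search over binary labellings of a short 1D signal:
    the baseline a branch-and-bound scheme prunes."""
    n = len(data)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda lab: energy(lab, data, lam))

signal = np.array([0.1, 0.0, 0.2, 0.9, 1.0, 0.8])
best = minimise_exact(signal, lam=0.5)
```

Exhaustive search is exponential in the number of pixels, which is exactly why an exact method needs bounding rules to discard most of the labelling tree.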
APA, Harvard, Vancouver, ISO, and other styles
41

Gao, Yi. "Geometric statistically based methods for the segmentation and registration of medical imagery." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/39644.

Full text
Abstract:
Medical image analysis aims at developing techniques to extract information from medical images. Among its many sub-fields, image registration and segmentation are two important topics. In this report, we present four pieces of work, addressing different problems as well as coupling them into a unified framework of shape-based image segmentation. Specifically: 1. We link the image registration with the point set registration, and propose a globally optimal diffeomorphic registration technique for point set registration. 2. We propose an image segmentation technique which incorporates the robust statistics of the image and the multiple contour evolution. Therefore, the method is able to simultaneously extract multiple targets from the image. 3. By combining the image registration, statistical learning, and image segmentation, we derive a shape-based method which utilizes not only the image information but also the shape knowledge. 4. A multi-scale shape representation based on the wavelet transformation is proposed. In particular, the shape is represented by wavelet coefficients in a hierarchical way in order to decompose the shape variance in multiple scales. Furthermore, the statistical shape learning and shape-based segmentation are performed under this multi-scale shape representation framework.
APA, Harvard, Vancouver, ISO, and other styles
42

Jamrozik, Michele Lynn. "Spatio-temporal segmentation in the compressed domain." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Deveau, Matthieu. "Utilisation conjointe de données image et laser pour la segmentation et la modélisation 3D." Paris 5, 2006. http://www.theses.fr/2006PA05S001.

Full text
Abstract:
This thesis deals with combining a digital image with laser data to automate the 3D modeling of complex scenes. Image and laser data are acquired from the same point of view, with a greater image resolution than the laser one. This work is structured around three main topics: pose estimation, segmentation, and modeling. Pose estimation is based both on feature point matching, between the digital image and the laser intensity image, and on linear feature extraction. Data segmentation into geometric features is done through a hierarchical segmentation scheme, where image and laser data are combined. 3D modeling automation is studied through this hierarchical scheme. A tool for semi-automated modeling is also derived from the hierarchical segmentation architecture. In the modeling step, we have focused on automatic modeling of cylinders with free-form profiles. The description is then very general, with planes, free-form profile cylinders, revolution objects, and meshes on complex parts.
APA, Harvard, Vancouver, ISO, and other styles
44

Noyel, Guillaume. "Filtrage, réduction de dimension, classification et segmentation morphologique hyperspectrale." Phd thesis, École Nationale Supérieure des Mines de Paris, 2008. http://pastel.archives-ouvertes.fr/pastel-00004473.

Full text
Abstract:
Hyperspectral image processing is the generalization of the analysis of colour images, with three components (red, green and blue), to multivariate images with several tens or several hundreds of components. In a general sense, hyperspectral images are not acquired only in the wavelength domain but correspond to the description of a pixel by a set of values, i.e. a vector. Each component of a hyperspectral image constitutes a spectral channel, and the vector associated with each pixel is called a spectrum. To validate the generality of our processing methods, we applied them to several types of imagery corresponding to the most varied hyperspectral images: photographs with a few tens of components acquired in the wavelength domain, remote-sensing satellite images, temporal series of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), and temporal series of thermal imaging. During this thesis, we developed a complete chain for the automatic segmentation of hyperspectral images by morphological techniques. To this end, we devised an efficient method of spectral denoising, by Factor Correspondence Analysis (FCA), which preserves the spatial contours of objects, which is very useful for morphological segmentation. We then performed dimensionality reduction, by data analysis methods or by spectrum modelling, in order to obtain another representation of the image with a reduced number of channels. From this lower-dimensional image, we performed a classification (supervised or not) to group the pixels into spectrally homogeneous classes. However, since the classes obtained are not spatially homogeneous, i.e. connected, a segmentation step proved necessary.
We demonstrated that the recent Probabilistic Watershed method is particularly well suited to the segmentation of hyperspectral images. It uses different realizations of random markers, conditioned by the spectral classification, to obtain realizations of contours by the Watershed transform (WS). These contour realizations make it possible to estimate a probability density function of contours (pdf) which is very easy to segment by a standard WS. Ultimately, the probabilistic WS is conditioned by the spectral classification and therefore produces spatio-spectral segmentations whose contours are very smooth. This processing chain was applied to dynamic magnetic resonance imaging sequences (DCE-MRI) and led to an automatic computer-aided diagnosis method for the detection of cancerous tumours. In addition, other spatio-spectral segmentation techniques were developed for hyperspectral images: η-bounded regions and µ-geodesic balls. By introducing regional information, they improve segmentations by quasi-flat zones, which use only local information. Finally, we devised a very efficient method for computing all pairs of geodesic distances in an image, since it reduces the number of operations by up to 50% compared with a naive approach and by up to 30% compared with other methods. The efficient computation of this distance array offers very promising prospects for spatio-spectral dimensionality reduction.
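The geodesic distances mentioned in this abstract are distances measured along least-cost paths through the image rather than straight lines. A minimal single-source version (the thesis's contribution is an optimized all-pairs computation; this sketch, with the assumed name `geodesic_distances` and a 4-connected grid where stepping onto a pixel costs its grey value, is just plain Dijkstra):

```python
import heapq
import numpy as np

def geodesic_distances(image, source):
    """Single-source geodesic distances on a 4-connected grid, where
    stepping onto a pixel costs its grey value (Dijkstra's algorithm)."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + image[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# A cheap corridor through an expensive image: geodesics follow it.
img = np.full((5, 5), 10.0)
img[2, :] = 1.0  # low-cost row
D = geodesic_distances(img, (2, 0))
```

Repeating this from every pixel gives the naive all-pairs table whose cost the thesis reports cutting by up to 50%.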
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Xiaofang. "Graph based approaches for image segmentation and object tracking." Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0007/document.

Full text
Abstract:
This thesis is organized in two parts. The first part focuses on image segmentation, a fundamental problem in computer vision. In particular, unsupervised image segmentation is an important component of many high-level algorithms and application systems. In this thesis, we propose three image segmentation methods based on different graph techniques, which prove to be powerful tools for solving these problems. We first propose an original method for graph construction. We also analyze different similar methods as well as the influence of using various feature descriptors. The proposed graph, called a local/global graph, adaptively encodes information on the local and global structure of the image. In addition, we perform global grouping using a sparse representation of the superpixels' features over the dictionary of all features, by solving an l0-minimization problem. Numerous experiments are then conducted on the Berkeley Segmentation Database, and the proposed method is compared with classical segmentation algorithms. The results show that our method not only generates visually meaningful partitions, but also achieves very competitive quantitative results compared with standard algorithms. Second, we propose a method based on a discriminative affinity graph, which plays an essential role in image segmentation. A new descriptor, called the weighted color patch, is developed to compute the weights of the edges of the affinity graph. This new feature is able to integrate color and neighborhood information simultaneously by representing pixels with color patches.
In addition, we adaptively assign both local and global weights to each pixel in order to attenuate the over-smoothing effect caused by the use of patches. Extensive experiments show that our method is competitive with other standard methods under several evaluation metrics. Finally, we propose a method that combines superpixels, sparse representation, and a new mid-level feature to describe the superpixels. The new mid-level feature not only contains the same information as the initial low-level features, but also contains additional contextual information. We validate the proposed mid-level feature on the MSRC dataset, and the segmentation results show both qualitative and quantitative improvements over other standard methods. The second part focuses on multiple object tracking, a very active research area of major importance for a large number of applications, for example video surveillance of pedestrians or vehicles for security purposes, or the identification of animal motion patterns.
Image segmentation is a fundamental problem in computer vision. In particular, unsupervised image segmentation is an important component in many high-level algorithms and practical vision systems. In this dissertation, we propose three methods that approach image segmentation from different angles of graph-based methods and prove powerful in addressing these problems. Our first method develops an original graph construction method. We also analyze different types of graph construction methods as well as the influence of various feature descriptors. The proposed graph, called a local/global graph, encodes adaptively the local and global image structure information. In addition, we realize global grouping using a sparse representation of superpixels' features over the dictionary of all features by solving an l0-minimization problem. Extensive experiments are conducted on the Berkeley Segmentation Database, and the proposed method is compared with classical benchmark algorithms. The results demonstrate not only that our method can generate visually meaningful partitions, but also that very competitive quantitative results are achieved compared with state-of-the-art algorithms. Our second method derives a discriminative affinity graph that plays an essential role in graph-based image segmentation. A new feature descriptor, called the weighted color patch, is developed to compute the weights of edges in an affinity graph. This new feature is able to incorporate both color and neighborhood information by representing pixels with color patches. Furthermore, we assign both local and global weights adaptively to each pixel in a patch in order to alleviate the over-smoothing effect of using patches. Extensive experiments show that our method is competitive with other standard methods under multiple evaluation metrics. The third approach combines superpixels, sparse representation, and a new mid-level feature to describe superpixels.
The new mid-level feature not only carries the same information as the initial low-level features, but also carries additional contextual cues. We validate the proposed mid-level feature framework on the MSRC dataset, and the segmentation results show improvements from both qualitative and quantitative viewpoints compared with other state-of-the-art methods. Multi-target tracking is an intensively studied area of research and is valuable for a large number of applications, e.g. video surveillance of pedestrian or vehicle motion for the sake of security, or identification of the motion patterns of animals or biological/synthetic particles to infer information about the underlying mechanisms. We propose a detect-then-track framework to track the motion paths of large numbers of colloids in an active suspension system. First, a region-based level set method is adopted to segment all colloids from long-term videos subject to intensity inhomogeneity, and the circular Hough transform further refines the segmentation to obtain each colloid individually. Second, we propose to recover all colloids' trajectories simultaneously, a globally optimal problem that can be solved efficiently with algorithms based on min-cost/max-flow. We evaluate the proposed framework on a real benchmark with annotations on 9 different videos. Extensive experiments show that the proposed framework outperforms standard methods by a large margin.
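The l0 sparse-representation step described above (coding each superpixel's feature over the dictionary of all features with few nonzero coefficients) is NP-hard in general; a common greedy stand-in is orthogonal matching pursuit. The dictionary sizes and toy data below are our own illustrative assumptions, not the thesis's solver:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit, a classical surrogate for the
    l0 problem: min ||x - D c||_2  s.t.  ||c||_0 <= n_nonzero."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on all selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(1)
# Dictionary of 30 unit-norm "superpixel features" in R^10 (toy data).
D = rng.standard_normal((10, 30))
D /= np.linalg.norm(D, axis=0)
# A feature built as a sparse combination of two dictionary atoms.
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]
c = omp(D, x, n_nonzero=2)
# Grouping step, in spirit: superpixels whose codes share support atoms
# are candidates to be merged into the same global cluster.
```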
APA, Harvard, Vancouver, ISO, and other styles
46

Besbes, Ahmed. "Image segmentation using MRFs and statistical shape modeling." Phd thesis, Ecole Centrale Paris, 2010. http://tel.archives-ouvertes.fr/tel-00594246.

Full text
Abstract:
In this thesis, we present a new statistical shape model and use it for prior-based image segmentation. The model is represented by a Markov random field: the nodes of the graph correspond to control points located on the contour of the shape, and the edges of the graph represent the dependencies between control points. The structure of the Markov random field is determined from a set of training shapes, using manifold-learning and unsupervised clustering techniques. The constraints between points are enforced by estimating the probability density functions of normalized chord lengths. In a second step, we build a segmentation algorithm that incorporates the statistical shape model and connects it to the image through a region term, using Voronoi diagrams. In this approach, a deformable shape contour evolves towards the object to be segmented. We also formulate a segmentation algorithm based on interest-point detectors, where the regularization term is linked to the shape prior. In this case, the goal is to match the model to the best candidate points extracted from the image by the detector. Optimization for both algorithms is performed using recent and efficient methods. We validate our approach on several 2D and 3D datasets, for computer-vision applications as well as medical image analysis.
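The chord-length constraint mentioned above can be illustrated with a small sketch: given a training set of shapes described by control points, compute pairwise chord lengths normalized by their mean (for scale invariance) and estimate one histogram density per chord. The circle-based toy training set and the function names are our own assumptions, not the thesis's data or code:

```python
import numpy as np

rng = np.random.default_rng(2)

def control_points(n, noise):
    """Toy training shape: n contour control points on a unit circle,
    perturbed by noise (a stand-in for a real shape training set)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = np.stack([np.cos(t), np.sin(t)], axis=1)
    return pts + noise * rng.standard_normal(pts.shape)

def normalized_chords(pts):
    """All pairwise chord lengths divided by their mean, which makes
    the descriptor invariant to the scale of the shape."""
    d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    chords = d[np.triu_indices(len(pts), k=1)]
    return chords / chords.mean()

# One sample of normalized chords per training shape, then one histogram
# density per chord: these densities play the role of the pairwise
# (edge) constraints of the Markov random field.
train = np.array([normalized_chords(control_points(12, 0.02))
                  for _ in range(200)])
hists = [np.histogram(train[:, k], bins=20, range=(0, 3), density=True)[0]
         for k in range(train.shape[1])]
```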
APA, Harvard, Vancouver, ISO, and other styles
47

Kolesov, Ivan A. "Statistical methods for coupling expert knowledge and automatic image segmentation and registration." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47739.

Full text
Abstract:
The objective of the proposed research is to develop methods that couple an expert user's guidance with automatic image segmentation and registration algorithms. Often, complex processes such as fire, anatomical changes/variations in human bodies, or unpredictable human behavior produce the target images; in these cases, creating a model that precisely describes the process is not feasible. A common solution is to make simplifying assumptions when performing detection, segmentation, or registration tasks automatically. However, when these assumptions are not satisfied, the results are unsatisfactory. Hence, removing these, often times stringent, assumptions at the cost of minimal user input is considered an acceptable trade-off. Three milestones towards reaching this goal have been achieved. First, an interactive image segmentation approach was created in which the user is coupled in a closed-loop control system with a level set segmentation algorithm. The user's expert knowledge is combined with the speed of automatic segmentation. Second, a stochastic point set registration algorithm is presented. The point sets can be derived from simple user input (e.g. a thresholding operation), and time consuming correspondence labeling is not required. Furthermore, common smoothness assumptions on the non-rigid deformation field are removed. Third, a stochastic image registration algorithm is designed to capture large misalignments. For future research, several improvements of the registration are proposed, and an iterative, landmark based segmentation approach, which couples the segmentation and registration, is envisioned.
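A minimal sketch of the correspondence-free, stochastic flavor of the registration described above: a nearest-neighbor cost that needs no correspondence labeling, minimized by random perturbations of a translation. This toy stand-in (translation-only, greedy acceptance, synthetic point sets) is our own simplification, not the dissertation's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

def nn_cost(A, B):
    """Correspondence-free cost: mean squared distance from each point
    of A to its nearest neighbor in B (no labeling of points needed)."""
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def register_translation(src, dst, iters=300, step=0.5):
    """Greedy stochastic search over 2D translations: propose random
    perturbations with a shrinking step and keep any that lower the
    nearest-neighbor cost."""
    t = np.zeros(2)
    best = nn_cost(src + t, dst)
    for i in range(iters):
        cand = t + step * (1 - i / iters) * rng.standard_normal(2)
        c = nn_cost(src + cand, dst)
        if c < best:
            t, best = cand, c
    return t, best

# Toy data: a point set (e.g. obtained by a thresholding operation) and
# a translated copy of it; no point-to-point correspondences are given.
src = rng.standard_normal((40, 2))
dst = src + np.array([0.8, -0.5])
t_hat, cost = register_translation(src, dst)
```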
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Yan. "Image Segmentation and Shape Analysis of Blood Vessels with Applications to Coronary Atherosclerosis." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14577.

Full text
Abstract:
Atherosclerosis is a systemic disease of the vessel wall that occurs in the aorta, carotid, coronary and peripheral arteries. Atherosclerotic plaques in coronary arteries may cause the narrowing (stenosis) or complete occlusion of the arteries and lead to serious outcomes such as heart attacks and strokes. Medical imaging techniques such as X-ray angiography and computed tomography angiography (CTA) have greatly assisted the diagnosis of atherosclerosis in living patients. Analyzing and quantifying vessels in these images, however, is an extremely laborious and time-consuming task if done manually. A novel image segmentation approach and a quantitative shape analysis approach are proposed to automatically isolate the coronary arteries and measure important parameters along the vessels. The segmentation method is based on the active contour model using the level set formulation. Regional statistical information is incorporated in the framework through Bayesian pixel classification. A new conformal factor and an adaptive speed term are proposed to counter the problems of contour leakage and narrowed vessels resulting from conventional geometric active contours. The proposed segmentation framework is tested and evaluated on a large number of 2D and 3D datasets, including synthetic and real 2D vessels, 2D non-vessel objects, and eighteen 3D clinical CTA datasets of coronary arteries. The centerlines of the vessels are extracted using a harmonic skeletonization technique based on the level contour sets of the harmonic function, which is the solution of the Laplace equation on the triangulated surface of the segmented vessels. The cross-sectional areas along the vessels can be measured while the centerline is being extracted. Local cross-sectional areas can be used as a direct indicator of stenosis for diagnosis. A comprehensive validation is performed using digital phantoms and real CTA datasets.
This study provides the possibility of fully automatic analysis of coronary atherosclerosis from CTA images, and has the potential to be used in a real clinical setting along with a friendly user interface. Compared to manual segmentation, which takes approximately an hour for a single dataset, the automatic approach on average takes less than five minutes to complete and gives more consistent results across datasets.
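The region-statistics idea behind the segmentation model above (classify each pixel by the region whose statistics explain it best, then re-estimate the statistics) can be sketched without any level-set machinery. The synthetic "vessel" image and the two-phase alternation below are our own illustration of the piecewise-constant data term only; the thesis's conformal factor and adaptive speed term are not modeled:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 2D "vessel": a bright curved tube on a darker background.
h, w = 64, 64
y, x = np.mgrid[:h, :w]
vessel = np.abs(y - (h // 2 + 5 * np.sin(x / 8))) < 3
img = 0.3 + 0.5 * vessel + 0.08 * rng.standard_normal((h, w))

# Two-phase piecewise-constant segmentation: alternately (1) estimate
# the mean intensity of each region and (2) reassign every pixel to the
# region whose mean explains it best. This is only the data term of a
# Chan-Vese-style region model, with no curvature regularization.
mask = img > img.mean()                   # initial guess
for _ in range(20):
    c_in, c_out = img[mask].mean(), img[~mask].mean()
    mask = (img - c_in) ** 2 < (img - c_out) ** 2
```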
APA, Harvard, Vancouver, ISO, and other styles
49

Piovano, Jérôme. "Image segmentation and level set method : application to anatomical head model creation." Nice, 2009. http://www.theses.fr/2009NICE4062.

Full text
Abstract:
Magnetic resonance imaging (MRI) techniques appeared at the end of the 20th century and revolutionized modern medicine by making it possible to visualize the inside of anatomical structures precisely and non-invasively. This imaging technique has contributed greatly to the study of the human brain, making it possible to distinguish with precision the different anatomical structures of the head, in particular the cerebral cortex. The discernment of the anatomical structures of the head is called segmentation, and consists in "extracting" regions from MRIs. Several segmentation methods exist, and this thesis focuses on methods based on the evolution of hypersurfaces: a hypersurface (a surface in 3D) is progressively deformed until it fits the boundaries of the region to be segmented. A head model corresponds to the partitioning of the head into several previously segmented anatomical structures. A classic head model generally comprises five anatomical structures (skin, skull, cerebrospinal fluid, grey matter, white matter), nested inside one another like "Russian dolls". However, because of the complexity of their shapes, segmenting these structures manually proves tedious and extremely difficult. This thesis is devoted to the design of new segmentation models robust to MRI alterations, and to the application of these models to the automatic creation of anatomical head models. After briefly surveying the state of the art of image segmentation methods, two contributions to segmentation by hypersurface evolution are proposed. The first is a new representation and a new numerical scheme for the level-set method, using quadrilateral finite elements. This representation aims to improve the accuracy and robustness of the model.
The second contribution is a new segmentation model based on local statistics, robust to the alterations present in MRIs. This new model aims to unify several state-of-the-art image segmentation models. Finally, a framework for the automatic creation of head models is proposed, relying mainly on the preceding local-statistics segmentation model.
Magnetic Resonance Images (MRI) were introduced at the end of the 20th century and have revolutionized the world of modern medicine, allowing the inside of anatomical structures to be viewed with precision in a non-invasive way. This imaging technique has greatly contributed to the study and comprehension of the human brain, making it possible to discern with precision the different anatomical structures composing the head, especially the cerebral cortex. Discernment between these anatomical structures is called segmentation, and consists in “extracting” structures of interest from MRIs. Several models exist to perform image segmentation, and this thesis focuses on those based on hypersurface evolution: a hypersurface (a surface in 3D) is incrementally adjusted to finally fit the border of the region of interest. A head model corresponds to the partitioning of the head into several segmented anatomical structures. A classic head model generally includes five anatomical structures (skin, skull, cerebrospinal fluid, grey matter, white matter), nested inside each other in the manner of “Russian nested dolls”. Nevertheless, because of the complexity of their shapes, manual segmentation of these structures is tedious and extremely difficult. This thesis is dedicated to the creation of new segmentation models robust to MRI alterations, and to the application of these models to the automatic creation of anatomical head models. After briefly reviewing some classical models in image segmentation, two contributions to segmentation based on hypersurface evolution are proposed. The first one corresponds to a new representation and a new numerical scheme for the level-set method, based on quadrilateral finite elements. This representation aims at improving the accuracy and robustness of the model. The second contribution corresponds to a new segmentation model based on local statistics, and robust to standard MRI alterations.
This model aims at unifying several state-of-the-art models in image segmentation. Finally, a framework for the automatic creation of anatomical head models is proposed, mainly using the previous local-statistics-based segmentation model.
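The advantage of local statistics over global ones under MRI-style intensity alterations can be illustrated with a toy experiment: thresholding against a local window mean (computed with a summed-area table) survives a smooth bias field that defeats a global threshold. The synthetic bias field, window radius and margin below are our own assumptions, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "MRI-like" image: a bright disk corrupted by a smooth
# multiplicative bias field, a standard MRI alteration.
h, w = 80, 80
y, x = np.mgrid[:h, :w]
disk = (y - h / 2) ** 2 + (x - w / 2) ** 2 < 10 ** 2
bias = 0.5 + x / w                        # slowly varying gain
img = bias * (0.4 + 0.4 * disk) + 0.03 * rng.standard_normal((h, w))

def box_mean(a, r):
    """Local mean over a (2r+1)x(2r+1) window via a summed-area table."""
    ap = np.pad(a, r, mode="edge")
    I = np.zeros((ap.shape[0] + 1, ap.shape[1] + 1))
    I[1:, 1:] = ap.cumsum(0).cumsum(1)
    k = 2 * r + 1
    s = I[k:, k:] - I[:-k, k:] - I[k:, :-k] + I[:-k, :-k]
    return s / k ** 2

# A global threshold is defeated by the bias field; comparing each
# pixel to its *local* background mean (plus a margin) is not.
global_seg = img > img.mean()
local_seg = img > box_mean(img, r=20) + 0.1
```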
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Siwei. "Apport d'un algorithme de segmentation ultra-rapide et non supervisé pour la conception de techniques de segmentation d'images bruitées." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4371.

Full text
Abstract:
Image segmentation is an important step in image processing, and many questions remain open. It has recently been shown, in the case of segmentation into two homogeneous regions, that the use of polygonal active contours based on the minimization of a criterion derived from information theory leads to an ultra-fast algorithm that requires neither parameters to tune in the optimization criterion nor a priori knowledge of the gray-level fluctuations. This fast and unsupervised segmentation technique thus becomes an elementary processing building block. The objective of this thesis is to show the contributions of this elementary building block to the design of new, more complex segmentation techniques that go beyond a number of limits, and in particular: to be robust to the presence of strong inhomogeneities in images; to segment non-connected objects by polygonal active contour without complicating the optimization strategies; to segment multi-region images while estimating, in an unsupervised way, the number of homogeneous regions present in the image. We arrived at unsupervised segmentation techniques based on the optimization of criteria with no parameters to tune and requiring no information on the type of noise present in the image. Moreover, we showed that it is possible to design algorithms based on this elementary building block that lead to fast segmentation techniques whose implementation complexity is low once such a building block is available.
Image segmentation is an important step in many image processing systems and many problems remain unsolved. It has recently been shown that when the image is composed of two homogeneous regions, polygonal active contour techniques based on the minimization of a criterion derived from information theory allow achieving an ultra-fast algorithm which requires neither parameters to tune in the optimized criterion, nor a priori knowledge of the gray-level fluctuations. This algorithm can then be used as a fast and unsupervised processing module. The objective of this thesis is therefore to show how this ultra-fast and unsupervised algorithm can be used as a module in the design of more complex segmentation techniques, allowing several limits to be overcome and particularly: to be robust to the presence of strong inhomogeneity in the image, which is often inherent in the acquisition process, such as non-uniform illumination, attenuation, etc.; to be able to segment disconnected objects by polygonal active contour without complicating the optimization strategy; to segment multi-region images while estimating in an unsupervised way the number of homogeneous regions in the image. For each of these three problems, unsupervised segmentation techniques based on the optimization of Minimum Description Length criteria have been obtained, which require neither parameter tuning by the user nor a priori information on the kind of noise in the image. Moreover, it has been shown that fast segmentation techniques can be achieved using this segmentation module, while keeping implementation complexity low.
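The Minimum Description Length idea used here (choose the number of homogeneous regions that minimizes a criterion with no user-tuned parameter) can be sketched on a 1D piecewise-constant signal. The crude two-part code below (roughly log n per segment mean and per breakpoint) is our own illustrative choice, not the thesis's criterion:

```python
import numpy as np

rng = np.random.default_rng(6)

# Piecewise-constant 1D signal: three homogeneous regions plus noise.
x = np.concatenate([m + 0.1 * rng.standard_normal(100)
                    for m in (0.0, 1.0, 0.4)])
n = len(x)

# RSS of the best constant model on x[i:j], in O(1) via cumulative sums.
c1 = np.concatenate([[0.0], np.cumsum(x)])
c2 = np.concatenate([[0.0], np.cumsum(x ** 2)])
def seg_cost(i, j):
    s, m = c1[j] - c1[i], j - i
    return (c2[j] - c2[i]) - s * s / m

# Dynamic programming: best[k, j] = lowest RSS of a k-segment model
# of the prefix x[:j].
K = 5
best = np.full((K + 1, n + 1), np.inf)
best[0, 0] = 0.0
for k in range(1, K + 1):
    for j in range(k, n + 1):
        best[k, j] = min(best[k - 1, i] + seg_cost(i, j)
                         for i in range(k - 1, j))

# Crude two-part description length: code the residuals given the model,
# plus roughly log(n) for each segment mean and each breakpoint. The
# number of regions is then estimated with no parameter to tune.
dl = [n / 2 * np.log(best[k, n] / n) + (2 * k - 1) * np.log(n)
      for k in range(1, K + 1)]
k_best = int(np.argmin(dl)) + 1           # number of regions selected
```

Extra segments always lower the residual, but the description-length penalty rejects them unless the fit improves substantially, which is what makes the region-count estimate unsupervised.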
APA, Harvard, Vancouver, ISO, and other styles
