Dissertations / Theses on the topic 'Image segmentation'

To see the other types of publications on this topic, follow the link: Image segmentation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Image segmentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Zeng, Ziming. "Medical image segmentation on multimodality images." Thesis, Aberystwyth University, 2013. http://hdl.handle.net/2160/17cd13c2-067c-451b-8217-70947f89164e.

Full text
Abstract:
Segmentation is a central problem in medical image analysis and has a wide range of applications in medical research. A great many medical image segmentation algorithms have been proposed, and many good segmentation results have been obtained. However, due to noise, density inhomogeneity, partial volume effects, and the density overlap between normal and abnormal tissues in medical images, the segmentation accuracy and robustness of some state-of-the-art methods still have room for improvement. This thesis aims to deal with these segmentation problems and improve segmentation accuracy. The project investigated medical image segmentation methods across a range of modalities and clinical applications, covering brain tissue segmentation in magnetic resonance imaging (MRI), MRI-based multiple sclerosis (MS) lesion segmentation, histology-based cell nuclei segmentation, and positron emission tomography (PET) based tumour detection. For brain MRI tissue segmentation, a method based on mutual information was developed to estimate the number of brain tissue groups, and an unsupervised segmentation method was then proposed to segment the brain tissues. For MS lesion segmentation, 2D/3D joint histogram modelling was proposed to model the grey-level distribution of MS lesions in multimodality MRI. For PET segmentation of head and neck tumours, two hierarchical methods based on improved active contour/surface modelling were proposed to segment the tumours in PET volumes. For histology-based cell nuclei segmentation, a novel unsupervised method based on adaptive active contour modelling driven by morphology initialisation was proposed to segment the cell nuclei, and the segmentation results were further processed for subtype classification. Within these approaches, a number of techniques (such as modified bias-field fuzzy c-means clustering, multi-image spatially joint histogram representation, and convex optimisation of deformable models) were developed to deal with the key problems in medical image segmentation. Experiments show that the novel methods in this thesis have great potential for various image segmentation scenarios and can obtain more accurate and robust segmentation results than some state-of-the-art methods.
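The bias-field-corrected fuzzy c-means variant developed in the thesis is not reproduced here; as a rough illustration of the unsupervised clustering step, the sketch below is a plain fuzzy c-means on voxel intensities in NumPy (standard FCM updates, fuzziness m = 2). The function name and the choice of k are illustrative only; in the thesis the number of tissue groups comes from the mutual-information estimate.

```python
import numpy as np

def fuzzy_cmeans(values, k, m=2.0, n_iter=50, eps=1e-9):
    """Plain fuzzy c-means on a 1D array of voxel intensities.

    Returns (centres, memberships) where memberships has shape (len(values), k).
    Generic FCM sketch, not the bias-field-corrected variant from the thesis.
    """
    values = np.asarray(values, dtype=float).ravel()
    rng = np.random.default_rng(0)
    u = rng.random((values.size, k))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ values) / (um.sum(axis=0) + eps)
        dist = np.abs(values[:, None] - centres[None, :]) + eps
        # Standard FCM update: u_ij = 1 / sum_l (d_ij / d_il)^(2/(m-1))
        u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return centres, u

if __name__ == "__main__":
    # Synthetic "tissue" intensities drawn from three groups.
    voxels = np.concatenate([np.random.normal(mu, 5, 500) for mu in (30, 90, 150)])
    centres, u = fuzzy_cmeans(voxels, k=3)
    labels = u.argmax(axis=1)                  # hard labels from fuzzy memberships
    print("estimated tissue centres:", np.sort(centres))
```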
APA, Harvard, Vancouver, ISO, and other styles
2

Hillman, Peter. "Segmentation of motion picture images and image sequences." Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/15026.

Full text
Abstract:
For Motion Picture Special Effects, it is often necessary to take a source image of an actor, segment the actor from the unwanted background, and then composite the actor over a new background, so that the resulting image appears as if the actor had been filmed in front of the new background. The standard approach requires the unwanted background to be a blue or green screen. While this technique is capable of handling areas where the foreground (the actor) blends into the background, its physical requirements present many practical problems. This thesis investigates the possibility of segmenting images where the unwanted background is more varied. Standard segmentation techniques tend not to be effective, since motion picture images have extremely high resolution and high accuracy is required to make the result appear convincing. A set of novel algorithms which require minimal human interaction to initialise the processing is presented. These algorithms classify each pixel by comparing its colour to that of known background and foreground areas, and they are shown to be effective where there is sufficient distinction between the colours of the foreground and background. A technique for assessing the quality of an image segmentation, used to compare these algorithms with alternative solutions, is also presented. Results are included which suggest that in most cases the novel algorithms have the best performance and produce results more quickly than the alternative approaches. Techniques for the segmentation of moving image sequences are then presented. Results show that only a few frames of a sequence need to be initialised by hand, as it is often possible to automatically generate the input required to initialise processing for the remaining frames. A novel algorithm is presented which can produce acceptable results on image sequences where more conventional approaches fail or are too slow to be of use.
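As a toy illustration of the colour-comparison idea (not Hillman's actual algorithms, which handle blended boundaries and far richer colour statistics), the sketch below labels each pixel by whichever class mean, foreground or background, its colour is closer to. The names and the synthetic frame are made up for the example.

```python
import numpy as np

def classify_by_colour(image, fg_samples, bg_samples):
    """Label each pixel foreground (1) or background (0) according to which
    class's mean colour it is closer to. A crude stand-in for the
    colour-comparison idea described in the abstract above.

    image: H x W x 3 float array; fg_samples/bg_samples: N x 3 arrays of
    colours taken from user-marked regions.
    """
    fg_mean = np.asarray(fg_samples, float).mean(axis=0)
    bg_mean = np.asarray(bg_samples, float).mean(axis=0)
    d_fg = np.linalg.norm(image - fg_mean, axis=-1)
    d_bg = np.linalg.norm(image - bg_mean, axis=-1)
    return (d_fg < d_bg).astype(np.uint8)

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3) * 0.4            # darkish varied background
    img[40:80, 60:100] = [0.9, 0.7, 0.6]               # bright "actor" patch
    mask = classify_by_colour(img,
                              fg_samples=img[50:60, 70:80].reshape(-1, 3),
                              bg_samples=img[:20, :20].reshape(-1, 3))
    print("foreground pixels:", int(mask.sum()))
```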
APA, Harvard, Vancouver, ISO, and other styles
3

Torres, Rafael Siqueira. "Segmentação semiautomática de conjuntos completos de imagens do ventrículo esquerdo." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-17112017-121645/.

Full text
Abstract:
The medical field has benefited from the tools built by Computing and has, at the same time, promoted the development of new techniques in several Computer Science specialties. Among these techniques, segmentation aims to separate the objects of interest in an image, drawing the attention of the health professional to areas that are relevant to the diagnosis. In addition, segmentation results can be used for the reconstruction of three-dimensional models, from which features can be extracted to assist the physician in decision making. However, the segmentation of medical images is still a challenge, because it is extremely dependent on the application and on the structures of interest present in the image. This dissertation presents a semiautomatic segmentation technique for the left ventricular endocardium in sets of cardiac Nuclear Magnetic Resonance images. The main contribution is the segmentation of all the images from an examination, through the propagation of the results obtained in previously processed images. Segmentation results are evaluated using objective metrics such as overlap, among others, against images provided by specialists in the Cardiology field.
APA, Harvard, Vancouver, ISO, and other styles
4

Murphy, Sean Daniel. "Medical image segmentation in volumetric CT and MR images." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3816/.

Full text
Abstract:
This portfolio thesis addresses several topics in the field of 3D medical image analysis. Automated methods are used to identify structures and points of interest within the body to aid the radiologist. The automated algorithms presented here incorporate many classical machine learning and imaging techniques, such as image registration, image filtering, supervised classification, unsupervised clustering, morphology and probabilistic modelling. All algorithms are validated against manually collected ground truth. Chapter two presents a novel algorithm for automatically detecting named anatomical landmarks within a CT scan, using a linear-registration-based atlas framework. The novel scans may contain a wide variety of anatomical regions from throughout the body. Registration is typically posed as a numerical optimisation problem; for this problem the associated search space is shown to be non-convex, so standard registration approaches fail. Specialised numerical optimisation schemes are developed to solve this problem, with an emphasis placed on simplicity. A semi-automated algorithm for finding the centrelines of coronary arterial trees in CT angiography scans given a seed point is presented in chapter three. This is a modified classical region growing algorithm whereby the topology and geometry of the tree are discovered as the region grows. The challenges presented by large organs and other extraneous material in the vicinity of the coronary trees are mitigated by the use of an efficient modified 3D top-hat transform. Chapter four compares the accuracy of three unsupervised clustering algorithms when applied to automated tissue classification within the brain on 3D multi-spectral MR images. Chapter five presents a generalised supervised probabilistic framework for the segmentation of structures/tissues in medical images called a spatially varying classifier (SVC). This algorithm leverages non-rigid registration techniques and is shown to be a generalisation of atlas-based techniques and supervised intensity-based classification; this is achieved by constructing a multivariate Gaussian classifier for each voxel in a reference scan. The SVC is applied in the context of tissue classification in multi-spectral MR images in chapter six, by simultaneously extracting the brain and classifying the tissue types within it. A specially designed pre-processing pipeline is presented which involves inter-sequence registration, spatial normalisation and intensity normalisation. The SVC is then applied to the problem of multi-compartment heart segmentation in CT angiography data with minimal modification. The accuracy of this method is shown to be comparable to other state-of-the-art methods in the field.
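The modified 3D top-hat transform itself is not reproduced here; the sketch below shows the classical grey-scale white top-hat (image minus its morphological opening) with scipy.ndimage, which is the baseline operation for suppressing large bright structures while keeping thin, bright vessels.

```python
import numpy as np
from scipy import ndimage as ndi

def white_top_hat_3d(volume, size=7):
    """Classical grey-scale white top-hat: image minus its opening.

    Bright structures narrower than the structuring element (e.g. vessels)
    survive, while large organs are suppressed. This is the standard
    operator, not the modified version developed in the thesis.
    """
    opened = ndi.grey_opening(volume, size=(size, size, size))
    return volume - opened

if __name__ == "__main__":
    vol = np.zeros((40, 40, 40), dtype=float)
    vol[20, 10:30, 20] = 1.0             # a thin, bright "vessel"
    vol[5:15, 5:15, 5:15] = 0.8          # a large, bright "organ"
    enhanced = white_top_hat_3d(vol, size=5)
    print("vessel response:", enhanced[20, 20, 20],
          "organ response:", enhanced[10, 10, 10])
```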
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Kyu-Heon. "Segmentation of natural texture images using a robust stochastic image model." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Badiei, Sara. "Prostate segmentation in ultrasound images using image warping and ellipsoid fitting." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31737.

Full text
Abstract:
This thesis outlines an algorithm for 2D and 3D semi-automatic segmentation of the prostate from B-mode trans-rectal ultrasound (TRUS) images. In semi-automatic segmentation, a computer algorithm outlines the boundary of the prostate given a few initialization points. The algorithm is designed for prostate brachytherapy and has the potential to: i) replace pre-operative manual segmentation, ii) enable intra-operative segmentation, and iii) be integrated into a visualization tool for training residents. The segmentation algorithm uses image warping to make the 2D prostate boundary elliptical. A Star-Kalman-based edge detector is then guided along the elliptical shape to find the prostate boundary in the TRUS image, and a second ellipse is fitted to the edge-detected measurement points. Once all 2D slices are segmented in this manner, an ellipsoid is fitted to the 3D cloud of points. Finally, a reverse warping step yields the segmented prostate volume. In-depth 2D and 3D clinical studies show promising results. In 2D, distance-based metrics show a mean absolute difference of 0.67 ± 0.18 mm between manual and semi-automatic segmentation, and area-based metrics show average sensitivity and accuracy over 97% and 93% respectively. In 3D, i) the difference between manual and semi-automatic segmentation is on the order of inter-observer variability, ii) the repeatability of the segmentation algorithm is consistently better than the intra-observer variability, and iii) the sensitivity and accuracy are 97% and 85% respectively. The 3D algorithm requires only 5 initialization points and can segment a prostate volume in less than 10 seconds (approximately 40 times faster than manual segmentation). The novelties of this algorithm, in comparison to other works, are the warping and ellipse/ellipsoid fitting steps; these two combine to provide a simple solution that works well even with non-ideal images and produces accurate, real-time results.
Faculty of Applied Science
Department of Electrical and Computer Engineering
Graduate
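The Star-Kalman edge detector and the warping step from the abstract above are not reproduced; as a sketch of the ellipse-fitting stage, the code below performs a basic algebraic least-squares conic fit to noisy edge points. A constrained, ellipse-specific fit (e.g. Fitzgibbon's) would normally be preferred; this only illustrates the idea.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a general conic ax^2 + bxy + cy^2 + dx + ey + f = 0
    to 2D points, taken as the smallest singular vector of the design matrix.
    A basic algebraic fit shown only to illustrate the ellipse-fitting step.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]                      # conic coefficients (a, b, c, d, e, f)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 60)
    x = 25 * np.cos(t) + 3.0 + np.random.normal(0, 0.3, t.size)   # noisy "edge" points
    y = 15 * np.sin(t) - 2.0 + np.random.normal(0, 0.3, t.size)
    a, b, c, d, e, f = fit_conic(x, y)
    print("discriminant b^2 - 4ac (negative for an ellipse):", b * b - 4 * a * c)
```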
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Xiaobing. "Automatic image segmentation based on level set approach: application to brain tumor segmentation in MR images." Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001120.pdf.

Full text
Abstract:
The aim of this dissertation is to develop an automatic segmentation of brain tumors from MRI volumes based on the level-set technique. The "automatic" operation exploits the fact that the normal brain is symmetric, so that locating asymmetric regions makes it possible to estimate the initial contour of the tumor. The first step is preprocessing, which corrects the intensity inhomogeneity of the MRI volume and spatially realigns the MRI volumes of the same patient acquired at different times. The hemispheric symmetry plane of the brain is then computed by maximizing the degree of similarity between one half of the volume and its reflection, and the initial contour of the tumor is extracted from the asymmetry between the two hemispheres. This initial contour is evolved and refined by the level-set technique in order to find the real contour of the tumor; criteria for stopping the evolution are proposed based on the properties of the tumor. Finally, the contour of the tumor is projected onto the adjacent slices to form new initial contours, and this process is iterated over all slices to obtain the segmentation of the tumor in 3D. The proposed system is used to follow up patients throughout the treatment period, with examinations every four months, allowing the physician to monitor the development of the tumor and evaluate the effectiveness of the therapy. The method was quantitatively evaluated by comparison with manual tracings by experts, and good results were obtained on real MRI images.
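A minimal sketch of the symmetry cue described above, assuming the mid-sagittal plane is already aligned with the array centre: mirror the volume about that plane and look at the absolute difference. The plane search (maximising half-volume similarity) and the level-set evolution from the thesis are not reproduced.

```python
import numpy as np

def asymmetry_map(volume, axis=0):
    """Absolute difference between a volume and its mirror image about `axis`.

    With the mid-sagittal plane aligned to the array centre, large values mark
    asymmetric regions, which can seed an initial tumor contour.
    """
    mirrored = np.flip(volume, axis=axis)
    return np.abs(volume - mirrored)

def symmetry_score(volume, axis=0):
    """Negative mean asymmetry; higher means the two halves match better."""
    return -float(asymmetry_map(volume, axis).mean())

if __name__ == "__main__":
    vol = np.random.normal(0, 0.05, (64, 64, 64))
    vol = vol + np.flip(vol, axis=0)               # make it roughly symmetric
    vol[40:48, 30:38, 30:38] += 1.0                # asymmetric "lesion"
    amap = asymmetry_map(vol, axis=0)
    print("peak asymmetry at voxel:", np.unravel_index(amap.argmax(), amap.shape))
```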
APA, Harvard, Vancouver, ISO, and other styles
8

Horne, Caspar. "Unsupervised image segmentation /." Lausanne : EPFL, 1991. http://library.epfl.ch/theses/?nr=905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bhalerao, Abhir. "Multiresolution image segmentation." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/60866/.

Full text
Abstract:
Image segmentation is an important area in the general field of image processing and computer vision. It is a fundamental part of the 'low level' aspects of computer vision and has many practical applications such as in medical imaging, industrial automation and satellite imagery. Traditional methods for image segmentation have approached the problem either from localisation in class space using region information, or from localisation in position, using edge or boundary information. More recently, however, attempts have been made to combine both region and boundary information in order to overcome the inherent limitations of using either approach alone. In this thesis, a new approach to image segmentation is presented that integrates region and boundary information within a multiresolution framework. The role of uncertainty is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade-off between position and class resolution and ensures both robustness in noise and efficiency of computation. The segmentation is based on an image model derived from a general class of multiresolution signal models, which incorporates both region and boundary features. A four stage algorithm is described consisting of: generation of a low-pass pyramid, separate region and boundary estimation processes and an integration strategy. Both the region and boundary processes consist of scale-selection, creation of adjacency graphs, and iterative estimation within a general framework of maximum a posteriori (MAP) estimation and decision theory. Parameter estimation is performed in situ, and the decision processes are both flexible and spatially local, thus avoiding assumptions about global homogeneity or size and number of regions which characterise some of the earlier algorithms. A method for robust estimation of edge orientation and position is described which addresses the problem in the form of a multiresolution minimum mean square error (MMSE) estimation. The method effectively uses the spatial consistency of output of small kernel gradient operators from different scales to produce more reliable edge position and orientation and is effective at extracting boundary orientations from data with low signal-to-noise ratios. Segmentation results are presented for a number of synthetic and natural images which show the cooperative method to give accurate segmentations at low signal-to-noise ratios (0 dB) and to be more effective than previous methods at capturing complex region shapes.
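As a sketch of the first of the four stages (the low-pass pyramid), the code below builds a generic Gaussian pyramid with scipy.ndimage: smooth, then decimate by a factor of two. The thesis's own filter choices and the later estimation and integration stages are not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Build a low-pass (Gaussian) pyramid for a 2D image.

    Returns a list [level0, level1, ...] from full resolution to coarsest;
    each level is the previous one smoothed and subsampled by 2.
    """
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        smoothed = ndi.gaussian_filter(pyramid[-1], sigma=sigma)
        pyramid.append(smoothed[::2, ::2])       # decimate by a factor of 2
    return pyramid

if __name__ == "__main__":
    img = np.random.rand(256, 256)
    for i, level in enumerate(gaussian_pyramid(img, levels=4)):
        print(f"level {i}: shape {level.shape}")
```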
APA, Harvard, Vancouver, ISO, and other styles
10

Draelos, Timothy John 1961. "INTERACTIVE IMAGE SEGMENTATION." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Craske, Simon. "Natural image segmentation." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Salem, Mohammed Abdel-Megeed Mohammed. "Multiresolution image segmentation." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2008. http://dx.doi.org/10.18452/15846.

Full text
Abstract:
More and more computer vision systems take part in the automation of various applications. The main task of such systems is to automate the process of visual recognition and to extract relevant information from the images or image sequences acquired or produced by such applications. One essential and critical component in almost every computer vision system is image segmentation: the quality of the segmentation determines to a great extent the quality of the final results of the vision system. New algorithms for image and video segmentation based on multiresolution analysis and the wavelet transform are proposed. The concept of multiresolution is presented as existing independently of the wavelet transform. The wavelet transform is extended to two and three dimensions to allow image and video processing. For still image segmentation, the Resolution Mosaic Expectation Maximization (RM-EM) algorithm is proposed. The resolution mosaic enables the algorithm to exploit the spatial correlation between pixels, with the level of local resolution depending on the information content of the individual parts of the image. The use of varying resolutions speeds up the processing and improves the results. New algorithms based on the 3D wavelet transform and 3D wavelet packet analysis are proposed for extracting moving objects from image sequences; they have the advantage of considering the relevant spatial as well as temporal information of the movement. Because of the low computational complexity of the wavelet transform, FPGA hardware was designed for the primary segmentation step. Real applications are used to investigate and evaluate all algorithms: the segmentation of magnetic resonance images of the human brain and the detection of moving objects in image sequences of traffic scenes. The new algorithms show robustness against noise and changing ambient conditions and gave better segmentation results.
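The RM-EM algorithm and the 3D wavelet-packet analysis are not reproduced; the sketch below shows a single level of a 2D Haar wavelet decomposition in plain NumPy, simply to make the multiresolution front end concrete (a low-pass approximation plus three detail bands).

```python
import numpy as np

def haar2d(image):
    """One level of the orthonormal 2D Haar wavelet transform.

    Returns (LL, LH, HL, HH): low-pass approximation and the three detail
    bands. Image dimensions are assumed even. This plain Haar step only
    illustrates the multiresolution idea; the thesis builds full 2D/3D
    wavelet and wavelet-packet decompositions.
    """
    a = np.asarray(image, dtype=float)
    # Pairwise sums/differences along columns, then along rows.
    lo_r = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi_r = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / np.sqrt(2)
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / np.sqrt(2)
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / np.sqrt(2)
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    ll, lh, hl, hh = haar2d(img)
    print("subband shapes:", ll.shape, lh.shape, hl.shape, hh.shape)
```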
APA, Harvard, Vancouver, ISO, and other styles
13

Moya, Nikolas 1991. "Interactive segmentation of multiple 3D objects in medical images by optimum graph cuts = Segmentação interativa de múltiplos objetos 3D em imagens médicas por cortes ótimos em grafo." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275554.

Full text
Abstract:
Advisor: Alexandre Xavier Falcão
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Medical image segmentation is crucial to extract measures from 3D objects (body anatomical structures) that are useful for the diagnosis and treatment of diseases. In such applications, interactive segmentation is necessary whenever automated methods fail or are not feasible. Graph-cut methods are considered the state of the art in interactive segmentation, but most approaches rely on the min-cut/max-flow algorithm, which is limited to binary segmentation, while multi-object segmentation can considerably save user time and effort. This work revisits the differential image foresting transform (DIFT), a graph-cut approach suitable for multi-object segmentation in linear time, and solves several problems related to it. Indeed, the DIFT algorithm can take time proportional to the number of voxels in the regions modified at each segmentation execution (sublinear time), a characteristic that is highly desirable in 3D interactive segmentation in order to respond to the user's actions as close as possible to real time. Segmentation using the DIFT works as follows: the user draws labeled markers (strokes of connected seed voxels) inside each object and the background, while the computer interprets the image as a graph, whose nodes are the voxels and whose arcs are defined by neighboring voxels, and outputs an optimum-path forest (image partition) rooted at the seed nodes. In the forest, each object is represented by the optimum-path trees rooted at its internal seeds, and such trees are painted with the color associated with the label of the corresponding marker. By adding or removing markers, the user can correct the segmentation until the forest (its object label map) represents the desired result. For the sake of consistency, seed-based methods should always maintain the connectivity between voxels and the seeds that labeled them. However, this does not hold in some approaches, such as random walkers, or when the segmentation is filtered to smooth object boundaries. That connectivity is also paramount to making corrections without starting the process over at each user intervention. We observed that the DIFT algorithm fails to maintain segmentation consistency in some cases; we have fixed this problem in the DIFT algorithm and when the obtained object boundaries are smoothed. These results are presented and evaluated on several 3D body anatomical structures from MR and CT images.
Master's degree
Computer Science
Master in Computer Science
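As a sketch of the underlying (non-differential) image foresting transform described in the abstract above, the code below propagates labels from user seeds on a 4-neighbour 2D pixel graph under the f_max path cost. The differential bookkeeping (DIFT) and the consistency fixes that are the dissertation's contribution are not reproduced.

```python
import heapq
import numpy as np

def ift_segmentation(image, seeds):
    """Seeded image foresting transform with the f_max path cost on a
    4-neighbour pixel graph (2D here for brevity; the dissertation works in 3D).

    image: 2D float array; seeds: dict {(row, col): label}. Returns a label map.
    """
    h, w = image.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        c0, r, c = heapq.heappop(heap)
        if c0 > cost[r, c]:
            continue                     # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                # f_max: path cost is the maximum arc weight along the path;
                # here the arc weight is the intensity difference between pixels.
                new_cost = max(c0, abs(image[rr, cc] - image[r, c]))
                if new_cost < cost[rr, cc]:
                    cost[rr, cc] = new_cost
                    label[rr, cc] = label[r, c]
                    heapq.heappush(heap, (new_cost, rr, cc))
    return label

if __name__ == "__main__":
    img = np.zeros((60, 60))
    img[:, 30:] = 1.0                                   # two flat regions
    labels = ift_segmentation(img, {(30, 5): 1, (30, 55): 2})
    print("pixels labeled 1:", int((labels == 1).sum()),
          "labeled 2:", int((labels == 2).sum()))
```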
APA, Harvard, Vancouver, ISO, and other styles
14

Muñoz, Pujol Xavier 1976. "Image segmentation integrating colour, texture and boundary information." Doctoral thesis, Universitat de Girona, 2003. http://hdl.handle.net/10803/7719.

Full text
Abstract:
Image segmentation is an important research area in computer vision and many segmentation methods have been proposed. However, elemental segmentation techniques based on boundary or region approaches often fail to produce accurate segmentation results. Hence, in the last few years, there has been a tendency towards the integration of both techniques in order to improve the results by taking into account the complementary nature of their information. This thesis proposes a solution to image segmentation that integrates region and boundary information; moreover, the method is extended to texture and colour texture segmentation.
An exhaustive analysis of image segmentation techniques that integrate region and boundary information is carried out. The main strategies for performing the integration are identified and a classification of these approaches is proposed, so that the most relevant proposals are sorted and grouped under their corresponding approach. The characteristics of these strategies, as well as the general lack of attention given to texture, are noted. The discussion of these aspects has been the origin of all the work developed in this thesis, giving rise to two basic conclusions: first, the possibility of fusing several approaches to the integration of both information sources, and second, the necessity of a specific treatment for textured images.
Next, an unsupervised segmentation strategy which integrates region and boundary information and incorporates three different approaches identified in the previous review is proposed. Specifically, the proposed image segmentation method combines guidance of seed placement, control of the decision criterion and boundary refinement. The method is composed of two basic stages: initialisation and segmentation. In the first stage, the main contours of the image are used to identify the different regions present in the image and to adequately place a seed in each one in order to statistically model the region. The segmentation stage is then performed based on the active region model, which allows region and boundary information to be taken into account in order to segment the whole image. Specifically, regions start to shrink and expand guided by the optimisation of an energy function that ensures homogeneity inside regions and the presence of real edges at boundaries. Furthermore, with the aim of imitating the human visual system when a person slowly approaches a distant object, a pyramidal structure is considered. Hence, the method has been designed on a pyramidal representation, which allows the region boundaries to be refined from a coarse to a fine resolution, ensuring noise robustness as well as computational efficiency.
The proposed segmentation strategy is then adapted to solve the problem of texture and colour texture segmentation. First, the strategy is extended to texture segmentation, which involves some considerations such as region modelling and the extraction of texture boundary information. Next, a method to integrate colour and textural properties is proposed, based on the use of texture descriptors and the estimation of colour behaviour by non-parametric techniques of density estimation. The proposed segmentation strategy is thus applied taking both colour and textural properties into account.
Finally, the proposed image segmentation strategy is objectively evaluated on synthetic images and compared with other relevant algorithms corresponding to the different strategies of region and boundary integration. Moreover, an evaluation of the segmentation results obtained on colour texture segmentation is performed, and results on a wide set of real images are shown and discussed.
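A much-simplified stand-in for the seeded, region-plus-boundary strategy above: classical greedy seeded region growing, in which unlabelled pixels are absorbed by the region whose running mean intensity they match best. There is no boundary term, no pyramid and no texture model here; the sketch only illustrates the seed-and-grow mechanics.

```python
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """Greedy seeded region growing: unlabelled pixels adjacent to a region are
    absorbed in order of how well their intensity matches that region's mean.

    image: 2D float array; seeds: dict {(row, col): label starting at 1}.
    """
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    sums, counts, heap = {}, {}, []
    for (r, c), lab in seeds.items():
        labels[r, c] = lab
        sums[lab] = sums.get(lab, 0.0) + image[r, c]
        counts[lab] = counts.get(lab, 0) + 1

    def push_neighbours(r, c, lab):
        mean = sums[lab] / counts[lab]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                heapq.heappush(heap, (abs(image[rr, cc] - mean), rr, cc, lab))

    for (r, c), lab in seeds.items():
        push_neighbours(r, c, lab)
    while heap:
        _, r, c, lab = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue                      # already claimed by some region
        labels[r, c] = lab
        sums[lab] += image[r, c]
        counts[lab] += 1
        push_neighbours(r, c, lab)
    return labels

if __name__ == "__main__":
    img = np.where(np.arange(80)[None, :] < 40, 0.2, 0.8) + np.random.normal(0, 0.02, (80, 80))
    out = seeded_region_growing(img, {(40, 5): 1, (40, 75): 2})
    print("region sizes:", [(lab, int((out == lab).sum())) for lab in (1, 2)])
```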
APA, Harvard, Vancouver, ISO, and other styles
15

Cappabianco, Fabio Augusto Menocci. "Segmentação de tecidos do cerebro humano em imagens de ressonancia magnetica e sua avaliação." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275845.

Full text
Abstract:
Advisors: Alexandre Xavier Falcão, Guido Costa Souza de Araujo
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Segmentation of brain tissues from MR images has become crucial to advance research, diagnosis and treatment in Neurology. Despite the large number of contributions, brain tissue segmentation is still a challenge, due to problems in image acquisition, large data sets, and anatomical variations caused by surgery, pathologies and differences in sex and age. Another difficulty is creating reliable ground truths for evaluation, which also requires suitable metrics. In this work, we review the most important pre-processing operations, as well as the most popular brain tissue segmentation methods. We also propose a new approach based on optimum-path forest clustering, which improves on previous works in several aspects: speed, robustness, accuracy, intuitive tuning of parameters and adaptability to different imaging modalities and anatomies. The effectiveness of the approach can be seen in both inhomogeneity correction and in white matter, gray matter and cerebrospinal fluid segmentation. The method is evaluated quantitatively and qualitatively against two other popular methods, on five datasets from different modalities, over an operational range of parameters for each method and with scores from distinct specialists. The results reveal a significant contribution to the state of the art and emphasize the importance of suitable evaluation metrics in medical image analysis.
Doctorate
Image Processing and Analysis
Doctor of Computer Science
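The thesis above stresses the choice of suitable evaluation metrics; as one standard example (not necessarily the protocol used in the thesis), the sketch below computes the Dice similarity coefficient between a segmentation mask and a ground-truth mask.

```python
import numpy as np

def dice_coefficient(segmentation, ground_truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). A common overlap metric for comparing a
    segmentation against a manual ground truth.
    """
    a = np.asarray(segmentation, dtype=bool)
    b = np.asarray(ground_truth, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

if __name__ == "__main__":
    seg = np.zeros((64, 64), dtype=bool); seg[20:40, 20:40] = True
    gt = np.zeros((64, 64), dtype=bool); gt[22:42, 22:42] = True
    print("Dice:", round(dice_coefficient(seg, gt), 3))
```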
APA, Harvard, Vancouver, ISO, and other styles
16

Phellan, Aro Renzo 1989. "Medical image segmentation using statistical and fuzzy object shape models = Segmentação de imagens médicas usando modelos estatísticos e nebulosos da forma do objeto." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275546.

Full text
Abstract:
Advisor: Alexandre Xavier Falcão
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Image segmentation consists of two tightly coupled tasks: recognition and delineation. Recognition indicates the whereabouts of a desired object, while delineation precisely defines its spatial extent in the image; recognition also verifies the correctness of the object's delineation. Humans are superior to computers in recognition, and the opposite holds for delineation. Manual segmentation, for instance, is usually error-prone, tedious, time-consuming, and subject to inter-observer variability. Therefore, the most effective interactive segmentation methods reduce human intervention to the recognition tasks. In medical images, objects may be body anatomical structures, such as organs, organ systems, and tumors. Their segmentation is a fundamental step to extract measures, such as sizes and distances, for quantitative analysis, and the visualization of their 3D shapes is also important for qualitative analysis. Both can help experts to study anatomical and physiological phenomena of the human body, differentiate between normal and abnormal, diagnose a disease, establish a treatment, monitor the evolution of a tumor, and plan a surgical procedure. However, a crucial challenge in automated segmentation is to obtain a surrogate mathematical model for humans, able to recognize the anatomy of such structures based on their texture and shape properties. This dissertation investigates two important approaches to this problem: Statistical Object Shape Models (SOSMs) and Fuzzy Object Shape Models (FOSMs). SOSMs are popularly known as atlas-based segmentation methods and have been extensively and successfully used in many applications; however, they require deformable image registration, a time-consuming operation that maps images into a common (reference) coordinate system and limits their use in studies with large image datasets. FOSMs are more recent and can be significantly more efficient than SOSMs, but they require more effective recognition and delineation methods. This dissertation compares for the first time the pros and cons of SOSMs and FOSMs, using image datasets from distinct medical imaging modalities and anatomical structures of the human body.
Master's degree
Computer Science
Master in Computer Science
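Assuming the deformable registration step has already been carried out, the sketch below performs a per-voxel majority vote over several warped atlas label maps, a minimal illustration of how an atlas-based (SOSM-style) pipeline turns registered atlases into a segmentation. The registration itself, and the fuzzy object shape models (FOSMs), are outside this sketch.

```python
import numpy as np

def majority_vote_fusion(warped_label_maps):
    """Fuse several atlas label maps (already registered/warped to the target)
    by per-voxel majority vote. A minimal stand-in for the final step of an
    atlas-based pipeline; label values are assumed to be 0..K.
    """
    stack = np.stack(warped_label_maps)              # (n_atlases, *volume_shape)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((32, 32, 32), dtype=int)
    truth[10:22, 10:22, 10:22] = 1
    # Five "warped atlases", each agreeing with the truth at ~90% of voxels.
    atlases = [np.where(rng.random(truth.shape) < 0.9, truth, 1 - truth) for _ in range(5)]
    fused = majority_vote_fusion(atlases)
    print("voxel agreement with truth:", float((fused == truth).mean()))
```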
APA, Harvard, Vancouver, ISO, and other styles
17

Lin, Xiangbo. "Knowledge-based image segmentation using deformable registration: application to brain MRI images." Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001121.pdf.

Full text
Abstract:
This thesis contributes to intra-modality, inter-subject non-rigid medical image registration and to the segmentation of 3D brain MRI images in the normal case. The well-known Demons non-rigid registration algorithm, in which image intensities are used as matching features, is studied first. A new force computation equation is proposed to solve the mismatch problem in some regions, and its efficiency is shown through numerous evaluations on simulated and real data. For intensity-based inter-subject registration, normalizing the image intensities is important for satisfying the intensity correspondence requirements, and a non-rigid registration method combining both intensity and spatial normalization is proposed. Topology constraints are introduced into the deformable model to preserve the homeomorphism expected of the registration; the solution comes from correcting displacement points with negative Jacobian determinants. Based on this registration, a segmentation method for the internal brain structures is studied. The basic principle is an ontology of prior shape knowledge of the target internal structures: the shapes are represented by a unified distance map computed from the atlas and the deformed atlas, which is then integrated into the similarity metric of the cost function, with a balance parameter adjusting the contributions of the intensity and shape measures. The influence of the different parameters of the method was studied and comparisons with other registration methods were performed. Very good results are obtained on the segmentation of different internal structures of the brain, such as the central nuclei and the hippocampus.
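For reference, one iteration of the classical Thirion demons update is sketched below; this is the textbook force commonly given as u = (m - f) ∇f / (|∇f|² + (m - f)²), not the modified force equation proposed in the thesis. The displacement field is then regularised by Gaussian smoothing.

```python
import numpy as np
from scipy import ndimage as ndi

def demons_step(fixed, moving, sigma=2.0, eps=1e-9):
    """One classical (Thirion-style) demons update on 2D images.

    Returns a per-pixel displacement field (du_row, du_col) driven by the
    intensity difference times the fixed-image gradient, followed by
    Gaussian regularisation. Reference sketch only.
    """
    f = np.asarray(fixed, float)
    m = np.asarray(moving, float)
    gr, gc = np.gradient(f)                        # spatial gradient of the fixed image
    diff = m - f
    denom = gr ** 2 + gc ** 2 + diff ** 2 + eps
    du_r = diff * gr / denom
    du_c = diff * gc / denom
    return ndi.gaussian_filter(du_r, sigma), ndi.gaussian_filter(du_c, sigma)

if __name__ == "__main__":
    fixed = np.zeros((64, 64)); fixed[20:44, 20:44] = 1.0
    moving = np.zeros((64, 64)); moving[24:48, 24:48] = 1.0   # shifted square
    du_r, du_c = demons_step(fixed, moving)
    print("max displacement magnitude:", float(np.hypot(du_r, du_c).max()))
```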
APA, Harvard, Vancouver, ISO, and other styles
18

Amarante, André Ricardo Soares [UNESP]. "Método para caracterização da homogeneidade da distribuição das frações de áreas de materiais polifásicos por processamento digital de imagens." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151516.

Full text
Abstract:
It is known that the contribution of digital image processing to Materials Engineering, and more specifically to materials characterization, is extremely important, since the manual execution of the procedures involved takes a very long time and is usually prone to errors by those who perform them. Accordingly, the objective of this research is to propose a semiautomatic method for characterizing the homogeneity of the distribution of the area fractions of polyphase materials by digital image processing, in order to: a) develop an algorithm, using the graphical resources available in Java, for the identification and segmentation of phases, using statistical resources and visual resources such as histograms and data scatter plots; b) develop an algorithm for processing and identifying the homogeneity of the distribution of the area fractions of polyphase materials; c) evaluate the method from the data obtained in the experimental results; and d) describe the methods used in the developed plugin. The concept of variability is applied so as to allow a more precise selection of the phases of the analyzed materials. With the proposed method for characterizing the homogeneity of the area-fraction distribution of polyphase materials, the user has at his disposal data that can support his decisions when determining the limits of the defined phases, so that these limits are no longer based only on subjective visual observation but are backed by data that validate the determined regions.
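The thesis's plugin is implemented in Java; purely as an illustration of the measurements involved, the Python sketch below thresholds a greyscale image into phases with user-chosen limits, computes each phase's area fraction, and scores homogeneity by the variation of those fractions across image tiles. The specific homogeneity statistic (coefficient of variation over tiles) is an assumption for the example, not necessarily the one used in the thesis.

```python
import numpy as np

def phase_labels(image, thresholds):
    """Assign each pixel to a phase by user-chosen grey-level thresholds,
    e.g. thresholds=[85, 170] yields 3 phases (0, 1, 2)."""
    return np.digitize(image, bins=thresholds)

def area_fractions(labels, n_phases):
    """Fraction of the image area occupied by each phase."""
    return np.array([(labels == p).mean() for p in range(n_phases)])

def homogeneity_by_tiles(labels, n_phases, tiles=4):
    """Coefficient of variation of each phase's area fraction over a
    tiles x tiles grid: lower values mean a more homogeneous distribution."""
    h, w = labels.shape
    fracs = []
    for i in range(tiles):
        for j in range(tiles):
            tile = labels[i * h // tiles:(i + 1) * h // tiles,
                          j * w // tiles:(j + 1) * w // tiles]
            fracs.append(area_fractions(tile, n_phases))
    fracs = np.array(fracs)                      # shape (tiles*tiles, n_phases)
    return fracs.std(axis=0) / (fracs.mean(axis=0) + 1e-9)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256))
    labs = phase_labels(img, thresholds=[85, 170])
    print("area fractions:", area_fractions(labs, 3))
    print("per-phase CV over tiles:", homogeneity_by_tiles(labs, 3))
```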
APA, Harvard, Vancouver, ISO, and other styles
19

Silva, Maíra Saboia da. "Aglomeração de pixels pela transformada imagem floresta e sua aplicação em segmentação de fundo de imagens naturais." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275713.

Full text
Abstract:
Advisor: Alexandre Xavier Falcão
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Esta dissertação apresenta uma metodologia automática para separar objetos de interesse em imagens naturais. Objetos de interesse são definidos como os maiores objetos que se destacam com relação aos pixels em torno deles dentro de uma imagem. Estes objetos não precisam necessariamente estar centrados, mas devem possuir o mínimo possível de pixels na região assumida como fundo da imagem (e.g., borda de imagem com uma dada espessura). A metodologia é baseada em segmentação de fundo e pode ser dividida em duas etapas. Primeiramente, um modelo nebuloso é criado para o fundo da imagem utilizando um método de agrupamento baseado em função densidade de probabilidade das cores de fundo. A partir do modelo é criado um mapa de pertinência, onde os pixels de objeto são mais claros do que os pixels de fundo. Foram investigadas técnicas de agrupamento baseadas em deslocamento médio, transformada imagem floresta, mistura de Gaussianas e maximização da esperança. Três métodos para criação do mapa de pertinência foram propostos e comparados; um inteiramente baseado na transformada imagem floresta, o outro em mistura de Gaussianas e o terceiro em maximização da esperança. Nos dois últimos casos, o agrupamento baseado na transformada imagem floresta foi utilizado como estimativa inicial dos grupos. Em seguida, o mapa de pertinência é utilizado para possibilitar a seleção de pixels sementes de objeto e fundo. Estes pixels geram um agrupamento binário da imagem colorida que separa o fundo do(s) objeto(s). Os experimentos foram realizados com uma base heterogênea composta por 50 imagens naturais. Os melhores resultados foram os obtidos pela metodologia inteiramente baseada na Transformada Imagem Floresta. Para justificar o uso de um agrupamento binário das cores para segmentação, os resultados foram comparados com uma limiarização ótima, aplicada ao mapa de pertinência. Esses testes foram realizados com o algoritmo de Otsu, mas o agrupamento binário apresentou melhores resultados. Também foi proposto um método híbrido de binarização do mapa de pertinência, envolvendo a limiarização de Otsu e a transformada imagem floresta. Neste caso, a limiarização de Otsu reduz o número de parâmetros em relação à primeira
Abstract: This work presents a new methodology for automatic extraction of desired objects in natural images. Objects of interest are defined as the largest components that differ from their surrounding pixels in a given image. These objects do not need to be centered, but they should contain as few pixels as possible in the region assumed as background (e.g., an image border of certain thickness). This methodology is based on background segmentation and can be summarized in two steps. First, a fuzzy model is created by a clustering method based on the probability density function of the background colors. From this model a membership map is created, wherein object pixels are brighter than background pixels. For clustering, the following techniques were investigated: mean-shift, image foresting transform, Gaussian mixture model and expectation maximization. We then propose and compare three approaches to create a membership map: a first method entirely based on the image foresting transform, a second approach based on a Gaussian mixture model and a third technique using expectation maximization. The clustering based on the image foresting transform was adopted as the initial estimate for the clusters in the case of the last two methods. In a second step, the membership map is used to enable the selection of object and background seed pixels. These pixels create a binary clustering of the color pixels that separates background and object(s). The experiments involved a heterogeneous dataset with 50 natural images. The approach entirely based on the image foresting transform provided the best result. In order to justify the use of a binary clustering of color pixels instead of optimum thresholding on the membership map, we demonstrated that the binary clustering can provide a better result than Otsu's approach. A hybrid approach to binarize the membership map was also proposed, combining Otsu's thresholding and the image foresting transform. In this case, Otsu's thresholding reduces the number of parameters with respect to the first approach.
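The Otsu baseline that this abstract compares against is straightforward to reproduce. A minimal sketch, assuming a membership map has already been computed as a 2-D float array (the function name and the use of scikit-image are illustrative, not the dissertation's implementation):

import numpy as np
from skimage.filters import threshold_otsu

def binarize_membership(membership):
    # membership: 2-D float array in which object pixels tend to be brighter
    # than background pixels; Otsu picks the histogram threshold that best
    # separates its two modes.
    t = threshold_otsu(membership)
    return membership > t

The dissertation's point is that a binary clustering driven by object and background seeds selected from this map outperformed such a plain threshold; the sketch only reproduces the baseline being compared against.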
Mestrado
Ciência da Computação
Mestre em Ciência da Computação
APA, Harvard, Vancouver, ISO, and other styles
20

Chowdhury, Md Mahbubul Islam. "Image segmentation for coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0017/MQ55494.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Jingdong. "Graph based image segmentation /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20WANG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Linnett, L. M. "Multi-texture image segmentation." Thesis, Heriot-Watt University, 1991. http://hdl.handle.net/10399/856.

Full text
Abstract:
Visual perception of images is closely related to the recognition of the different texture areas within an image. Identifying the boundaries of these regions is an important step in image analysis and image understanding. This thesis presents supervised and unsupervised methods which allow an efficient segmentation of the texture regions within multi-texture images. The features used by the methods are based on a measure of the fractal dimension of surfaces in several directions, which allows the transformation of the image into a set of feature images; however, no direct measurement of the fractal dimension is made. Using this set of features, supervised and unsupervised statistical processing schemes are presented which produce low classification error rates. Natural texture images are examined with particular application to the analysis of sonar images of the seabed. A number of processes based on fractal models for texture synthesis are also presented. These are used to produce realistic images of natural textures, again with particular reference to sonar images of the seabed, and show the importance of phase and directionality in our perception of texture. A further extension is shown to give possible uses for image coding and object identification.
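The quantity underlying these features can be illustrated with a standard box-counting estimate of fractal dimension. This is a generic sketch, assuming a binary (e.g. thresholded) texture image with at least one foreground pixel; the thesis's actual directional features avoid measuring the dimension directly, so this is background illustration rather than the method itself:

import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    # mask: 2-D boolean array; returns the slope of log N(s) versus log(1/s),
    # where N(s) is the number of s-by-s boxes containing foreground pixels.
    counts = []
    h, w = mask.shape
    for s in sizes:
        H = (h + s - 1) // s * s
        W = (w + s - 1) // s * s
        padded = np.zeros((H, W), dtype=bool)
        padded[:h, :w] = mask
        boxes = padded.reshape(H // s, s, W // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # fractal dimension estimate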
APA, Harvard, Vancouver, ISO, and other styles
23

Vyas, Aseem. "Medical Image Segmentation by Transferring Ground Truth Segmentation." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32431.

Full text
Abstract:
The segmentation of medical images is a difficult task due to the inhomogeneous intensity variations that occurs during digital image acquisition, the complicated shape of the object, and the medical expert’s lack of semantic knowledge. Automated segmentation algorithms work well for some medical images, but no algorithm has been general enough to work for all medical images. In practice, most of the time the segmentation results are corrected by the experts before the actual use. In this work, we are motivated to determine how to make use of manually segmented data in automatic segmentation. The key idea is to transfer the ground truth segmentation from the database of train images to a given test image. The ground truth segmentation of MR images is done by experts. The process includes a hierarchical image decomposition approach that performs the shape matching of test images at several levels, starting with the image as a whole (i.e. level 0) and then going through a pyramid decomposition (i.e. level 1, level 2, etc.) with the database of the train images and the given test image. The goal of pyramid decomposition is to find the section of the training image that best matches a section of the test image of a different level. After that, a re-composition approach is taken to place the best matched sections of the training image to the original test image space. Finally, the ground truth segmentation is transferred from the best training images to their corresponding location in the test image. We have tested our method on a hip joint MR image database and the experiment shows successful results on level 0, level 1 and level 2 re-compositions. Results improve with deeper level decompositions, which supports our hypotheses.
APA, Harvard, Vancouver, ISO, and other styles
24

Ghose, Soumya. "Robust image segmentation applied to magnetic resonance and ultrasound images of the prostate." Doctoral thesis, Universitat de Girona, 2012. http://hdl.handle.net/10803/98524.

Full text
Abstract:
Prostate segmentation in transrectal ultrasound (TRUS) and magnetic resonance images (MRI) facilitates volume estimation, multi-modal image registration, surgical planning and image-guided prostate biopsies. The objective of this thesis is to develop computationally efficient prostate segmentation algorithms in both TRUS and MRI image modalities. In this thesis we propose a probabilistic learning approach to achieve a soft classification of the prostate for automatic initialization and evolution of a deformable model for prostate segmentation. Two deformable models are developed for TRUS segmentation: an explicit shape- and region-prior-based deformable model, and an implicit deformable model guided by an energy minimization framework. In MRI, the posterior probabilities are fused with the soft segmentation coming from an atlas segmentation, and a graph-cut-based energy minimization achieves the final segmentation. In both image modalities, statistically significant improvements are achieved compared to current works in the literature.
La segmentació de la pròstata en imatge d'ultrasò (US) i de ressonància magnètica (MRI) permet l'estimació del volum, el registre multi-modal i la planificació quirúrgica de biòpsies guiades per imatge. L'objectiu d'aquesta tesi és el desenvolupament d'algorismes automàtics per a la segmentació de la pròstata en aquestes modalitats. Es proposa un aprenentatge automàtic inical per obtenir una primera classificació de la pròstata que permet, a continuació, la inicialització i evolució de diferents models deformables. Per imatges d'US, es proposen un model explícit basat en forma i informació regional i un model implícit basat en la minimització d'una funció d'energia. En MRI, les probalitats inicials es fusionen amb una imatge de probabilitat provinent d'una segmentació basada en atlas, i la minimització es realitza mitjançant tècniques de grafs. El resultat final és una significant millora dels algorismes actuals en ambdues modalitats d'imatge.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhao, Ningning. "Inverse problems in medical ultrasound images - applications to image deconvolution, segmentation and super-resolution." Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/16613/1/Zhao.pdf.

Full text
Abstract:
In the field of medical image analysis, ultrasound is a core imaging modality employed due to its real-time and easy-to-use nature, and its non-ionizing, low-cost characteristics. Ultrasound imaging is used in numerous clinical applications, such as fetus monitoring, diagnosis of cardiac diseases, flow estimation, etc. Classical applications in ultrasound imaging involve tissue characterization, tissue motion estimation or image quality enhancement (contrast, resolution, signal-to-noise ratio). However, one of the major problems with ultrasound images is the presence of noise, having the form of a granular pattern called speckle. The speckle noise in ultrasound images leads to relatively poor image quality compared with other medical imaging modalities, which limits the applications of medical ultrasound imaging. In order to better understand and analyze ultrasound images, several device-based techniques have been developed during the last 20 years. The objective of this PhD thesis is to propose new image processing methods allowing us to improve ultrasound image quality using postprocessing techniques. First, we propose a Bayesian method for joint deconvolution and segmentation of ultrasound images based on their tight relationship. The problem is formulated as an inverse problem that is solved within a Bayesian framework. Due to the intractability of the posterior distribution associated with the proposed Bayesian model, we investigate a Markov chain Monte Carlo (MCMC) technique which generates samples distributed according to the posterior and use these samples to build estimators of the ultrasound image. In a second step, we propose a fast single image super-resolution framework using a new analytical solution to the l2-l2 problems (i.e., $\ell_2$-norm regularized quadratic problems), which is applicable for both medical ultrasound images and piecewise/natural images. In a third step, blind deconvolution of ultrasound images is studied by considering the following two strategies: i) a Gaussian prior for the PSF is proposed in a Bayesian framework; ii) an alternating optimization method is explored for blind deconvolution of ultrasound images.
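The l2-l2 problem mentioned here has a simple generic form. Written with generic symbols (a linear operator H, observation y, reference image \bar{x} and weight \lambda, none of which are necessarily the thesis's exact notation), the problem and its closed-form solution are

\hat{x} = \arg\min_{x}\ \|y - Hx\|_2^2 + \lambda\,\|x - \bar{x}\|_2^2
\qquad\Longrightarrow\qquad
\hat{x} = \left(H^{\top}H + \lambda I\right)^{-1}\left(H^{\top}y + \lambda \bar{x}\right).

The contribution claimed in the abstract is a fast analytical way of evaluating such a solution for the particular blur-and-decimation operator arising in single-image super-resolution; the expression above is only the generic normal-equations form.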
APA, Harvard, Vancouver, ISO, and other styles
26

Sharma, Karan. "The Link Between Image Segmentation and Image Recognition." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/199.

Full text
Abstract:
A long-standing debate in the computer vision community concerns the link between segmentation and recognition. The question I am trying to answer here is: does image segmentation as a preprocessing step help image recognition? In spite of a plethora of literature to the contrary, some authors have suggested that recognition driven by high-quality segmentation is the most promising approach in image recognition, because the recognition system will see only the relevant features on the object and not see redundant features outside the object (Malisiewicz and Efros 2007; Rabinovich, Vedaldi, and Belongie 2007). This thesis explores the following question: if segmentation precedes recognition, and segments are directly fed to the recognition engine, will it help the recognition machinery? Another question I am trying to address in this thesis is that of the scalability of recognition systems. Any computer vision system, concept or algorithm, without exception, if it is to stand the test of time, will have to address the issue of scalability.
APA, Harvard, Vancouver, ISO, and other styles
27

Casaca, Wallace Correa de Oliveira. "Graph Laplacian for spectral clustering and seeded image segmentation." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24062015-112215/.

Full text
Abstract:
Image segmentation is an essential tool to enhance the ability of computer systems to efficiently perform elementary cognitive tasks such as detection, recognition and tracking. In this thesis we concentrate on the investigation of two fundamental topics in the context of image segmentation: spectral clustering and seeded image segmentation. We introduce two new algorithms for those topics that, in summary, rely on Laplacian-based operators, spectral graph theory, and minimization of energy functionals. The effectiveness of both segmentation algorithms is verified by visually evaluating the resulting partitions against state-of-the-art methods as well as through a variety of quantitative measures typically employed as benchmark by the image segmentation community. Our spectral-based segmentation algorithm combines image decomposition, similarity metrics, and spectral graph theory into a concise and powerful framework. An image decomposition is performed to split the input image into texture and cartoon components. Then, an affinity graph is generated and weights are assigned to the edges of the graph according to a gradient-based inner-product function. From the eigenstructure of the affinity graph, the image is partitioned through the spectral cut of the underlying graph. Moreover, the image partitioning can be improved by changing the graph weights by sketching interactively. Visual and numerical evaluation were conducted against representative spectral-based segmentation techniques using boundary and partition quality measures in the well-known BSDS dataset. Unlike most existing seed-based methods that rely on complex mathematical formulations that typically do not guarantee unique solution for the segmentation problem while still being prone to be trapped in local minima, our segmentation approach is mathematically simple to formulate, easy-to-implement, and it guarantees to produce a unique solution. Moreover, the formulation holds an anisotropic behavior, that is, pixels sharing similar attributes are preserved closer to each other while big discontinuities are naturally imposed on the boundary between image regions, thus ensuring better fitting on object boundaries. We show that the proposed approach significantly outperforms competing techniques both quantitatively as well as qualitatively, using the classical GrabCut dataset from Microsoft as a benchmark. While most of this research concentrates on the particular problem of segmenting an image, we also develop two new techniques to address the problem of image inpainting and photo colorization. Both methods couple the developed segmentation tools with other computer vision approaches in order to operate properly.
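The spectral cut described in this abstract can be sketched in a few lines for the generic case. The sketch below assumes per-pixel (or per-superpixel) feature vectors and a Gaussian affinity; the thesis's actual weights come from a gradient-based inner-product function on cartoon/texture components, so this is only the common skeleton of a spectral bipartition, and it is practical only for modest numbers of nodes since the affinity matrix is dense:

import numpy as np
from scipy.linalg import eigh

def spectral_bipartition(features, sigma=1.0):
    # features: (n, d) array of descriptors for n pixels or superpixels
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))   # affinity graph weights
    D = np.diag(W.sum(axis=1))
    L = D - W                            # graph Laplacian
    vals, vecs = eigh(L, D)              # generalised problem L v = lambda D v
    fiedler = vecs[:, 1]                 # second smallest eigenvector
    return fiedler > np.median(fiedler)  # two-way spectral cut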
Segmentar uma image é visto nos dias de hoje como uma prerrogativa para melhorar a capacidade de sistemas de computador para realizar tarefas complexas de natureza cognitiva tais como detecção de objetos, reconhecimento de padrões e monitoramento de alvos. Esta pesquisa de doutorado visa estudar dois temas de fundamental importância no contexto de segmentação de imagens: clusterização espectral e segmentação interativa de imagens. Foram propostos dois novos algoritmos de segmentação dentro das linhas supracitadas, os quais se baseiam em operadores do Laplaciano, teoria espectral de grafos e na minimização de funcionais de energia. A eficácia de ambos os algoritmos pode ser constatada através de avaliações visuais das segmentações originadas, como também através de medidas quantitativas computadas com base nos resultados obtidos por técnicas do estado-da-arte em segmentação de imagens. Nosso primeiro algoritmo de segmentação, o qual ´e baseado na teoria espectral de grafos, combina técnicas de decomposição de imagens e medidas de similaridade em grafos em uma única e robusta ferramenta computacional. Primeiramente, um método de decomposição de imagens é aplicado para dividir a imagem alvo em duas componentes: textura e cartoon. Em seguida, um grafo de afinidade é gerado e pesos são atribuídos às suas arestas de acordo com uma função escalar proveniente de um operador de produto interno. Com base no grafo de afinidade, a imagem é então subdividida por meio do processo de corte espectral. Além disso, o resultado da segmentação pode ser refinado de forma interativa, mudando-se, desta forma, os pesos do grafo base. Experimentos visuais e numéricos foram conduzidos tomando-se por base métodos representativos do estado-da-arte e a clássica base de dados BSDS a fim de averiguar a eficiência da metodologia proposta. Ao contrário de grande parte dos métodos existentes de segmentação interativa, os quais são modelados por formulações matemáticas complexas que normalmente não garantem solução única para o problema de segmentação, nossa segunda metodologia aqui proposta é matematicamente simples de ser interpretada, fácil de implementar e ainda garante unicidade de solução. Além disso, o método proposto possui um comportamento anisotrópico, ou seja, pixels semelhantes são preservados mais próximos uns dos outros enquanto descontinuidades bruscas são impostas entre regiões da imagem onde as bordas são mais salientes. Como no caso anterior, foram realizadas diversas avaliações qualitativas e quantitativas envolvendo nossa técnica e métodos do estado-da-arte, tomando-se como referência a base de dados GrabCut da Microsoft. Enquanto a maior parte desta pesquisa de doutorado concentra-se no problema específico de segmentar imagens, como conteúdo complementar de pesquisa foram propostas duas novas técnicas para tratar o problema de retoque digital e colorização de imagens.
APA, Harvard, Vancouver, ISO, and other styles
28

Lundström, Claes. "Segmentation of Medical Image Volumes." Thesis, Linköping University, Linköping University, Computer Vision, 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54357.

Full text
Abstract:

Segmentation is a process that separates objects in an image. In medical images, particularly image volumes, the field of application is wide. For example, 3D visualisations of the anatomy could benefit enormously from segmentation. The aim of this thesis is to construct a segmentation tool.

The project consists of three main parts. First, a survey of the actual need for segmentation in medical image volumes was carried out. Then a unique three-step model for a segmentation tool was implemented, tested and evaluated.

The first step of the segmentation tool is a seed-growing method that uses the intensity and an orientation tensor estimate to decide which voxels are part of the object. The second step uses an active contour, a deformable “balloon”. The contour is shrunk to fit the segmented border from the first step, yielding a surface suitable for visualisation. The last step consists of letting the contour reshape according to the orientation tensor estimate.

The user evaluation establishes the usefulness of the tool. The model is flexible and well adapted to the users’ requests. For unclear objects the segmentation may fail, but the cause is mostly poor image quality. Even though much work remains to be done on the second and third parts of the tool, the results are most promising.
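The seed-growing idea in the first step can be illustrated with a minimal intensity-only sketch (the thesis also uses an orientation tensor estimate, which is omitted here; the seed position and tolerance are user-supplied assumptions):

from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    # volume: 3-D intensity array; seed: (z, y, x) voxel index; tol: allowed intensity deviation
    ref = float(volume[seed])
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not grown[n]:
                if abs(float(volume[n]) - ref) <= tol:
                    grown[n] = True
                    queue.append(n)
    return grown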

APA, Harvard, Vancouver, ISO, and other styles
29

Johnson, M. A. "Semantic segmentation and image search." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.605649.

Full text
Abstract:
Understanding the meaning behind visual data is increasingly important as the quantity of digital images in circulation explodes, and as computing in general and the Internet in specific shifts quickly towards an increasingly visual presentation of data. However, the remarkable amount of variance inside categories (e.g. different kinds of chairs) combined with the occurrence of similarity between categories (e.g. similar breeds of cats and dogs) makes this problem incredibly difficult to solve. In particular, the semantic segmentation of images into contiguous regions of similar interpretation combines the difficulties of object recognition and image segmentation to result in a problem of great complexity, yet great reward. This thesis proposes a novel solution to the problem of semantic segmentation, and explores its application to image search and retrieval. Our primary contribution is a new image information processing tool: the semantic texton forest. We use semantic texton forests to perform (i) semantic segmentation of images and (ii) image categorization, achieving state-of-the-art results for both on two challenging datasets. We then apply this to the problem of image search and retrieval, resulting in the Palette Search System. With Palette Search, the user is able to search for the first time using Query by Semantic Composition, in which he communicates both what he wants in the result image and where he wants it.
APA, Harvard, Vancouver, ISO, and other styles
30

Morgan, Pamela Sheila. "Medical image coding and segmentation :." Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Tweed, David S. "Motion segmentation across image sequences." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Felhi, Mehdi. "Document image segmentation : content categorization." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0109/document.

Full text
Abstract:
Dans cette thèse, nous abordons le problème de la segmentation des images de documents en proposant de nouvelles approches pour la détection et la classification de leurs contenus. Dans un premier lieu, nous étudions le problème de l'estimation d'inclinaison des documents numérisées. Le but de ce travail étant de développer une approche automatique en mesure d'estimer l'angle d'inclinaison du texte dans les images de document. Notre méthode est basée sur la méthode Maximum Gradient Difference (MGD), la R-signature et la transformée de Ridgelets. Nous proposons ensuite une approche hybride pour la segmentation des documents. Nous décrivons notre descripteur de trait qui permet de détecter les composantes de texte en se basant sur la squeletisation. La méthode est appliquée pour la segmentation des images de documents numérisés (journaux et magazines) qui contiennent du texte, des lignes et des régions de photos. Le dernier volet de la thèse est consacré à la détection du texte dans les photos et posters. Pour cela, nous proposons un ensemble de descripteurs de texte basés sur les caractéristiques du trait. Notre approche commence par l'extraction et la sélection des candidats de caractères de texte. Deux méthodes ont été établies pour regrouper les caractères d'une même ligne de texte (mot ou phrase) ; l'une consiste à parcourir en profondeur un graphe, l'autre consiste à établir un critère de stabilité d'une région de texte. Enfin, les résultats sont affinés en classant les candidats de texte en régions « texte » et « non-texte » en utilisant une version à noyau du classifieur Support Vector Machine (K-SVM)
In this thesis I discuss the document image segmentation problem and I describe our new approaches for detecting and classifying document contents. First, I discuss our skew angle estimation approach. The aim of this approach is to develop an automatic approach able to estimate, with precision, the skew angle of text in document images. Our method is based on Maximum Gradient Difference (MGD) and the R-signature. Then, I describe our second method, based on the Ridgelet transform. Our second contribution consists of a new hybrid page segmentation approach. I first describe our stroke-based descriptor that allows detecting text and line candidates using the skeleton of the binarized document image. Then, an active contour model is applied to segment the rest of the image into photo and background regions. Finally, text candidates are clustered using the mean-shift analysis technique according to their corresponding sizes. The method is applied for segmenting scanned document images (newspapers and magazines) that contain text, lines and photo regions. Finally, I describe our stroke-based text extraction method. Our approach begins by extracting connected components and selecting text character candidates over the CIE LCH color space using the Histogram of Oriented Gradients (HOG) correlation coefficients in order to detect low-contrast regions. The text region candidates are clustered using two different approaches: a depth-first search over a graph, and a stable text line criterion. Finally, the resulting regions are refined by classifying the text line candidates into "text" and "non-text" regions using a kernel Support Vector Machine (K-SVM) classifier.
APA, Harvard, Vancouver, ISO, and other styles
33

Wu, Qian. "Segmentation-based Retinal Image Analysis." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18524.

Full text
Abstract:
Context. Diabetic retinopathy is the most common cause of new cases of legal blindness in people of working age. Early diagnosis is the key to slowing the progression of the disease, thus preventing blindness. The retinal fundus image is an important basis for judging these retinal diseases. With the development of technology, computer-aided diagnosis is widely used. Objectives. The thesis investigates whether there exist specific regions that could assist in better prediction of retinopathy, that is, to find the region of the fundus image that works best for retinopathy classification with the use of computer vision and machine learning techniques. Methods. An experimental method was used as the research method. With image segmentation techniques, the fundus image is divided into regions to obtain an optic disc dataset, a blood vessel dataset, and an other-regions (regions other than blood vessel and optic disc) dataset. These datasets and the original fundus image dataset were tested on Random Forest (RF), Support Vector Machines (SVM) and Convolutional Neural Network (CNN) models, respectively. Results. It is found that the results on different models are inconsistent. As compared to the original fundus image, the blood vessel region exhibits the best performance on the SVM model, the other regions perform best on the RF model, while the original fundus image has higher prediction accuracy on the CNN model. Conclusions. The other-regions dataset has more predictive power than the original fundus image dataset on the RF and SVM models. On the CNN model, extracting features from the fundus image does not significantly improve predictive performance as compared to using the entire fundus image.
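The comparison described here can be set up with standard tooling. A minimal sketch, assuming feature matrices have already been extracted for each region dataset (the dictionary layout, feature extraction and hyper-parameters are assumptions, and the CNN branch is omitted):

from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_region_datasets(datasets, labels):
    # datasets: dict mapping a region name (e.g. 'vessels', 'optic_disc',
    # 'other', 'full_image') to an (n_samples, n_features) matrix
    for name, X in datasets.items():
        for clf in (RandomForestClassifier(n_estimators=200), SVC(kernel='rbf')):
            score = cross_val_score(clf, X, labels, cv=5).mean()
            print(name, type(clf).__name__, round(score, 3))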
APA, Harvard, Vancouver, ISO, and other styles
34

O'Connor, Kevin Luke. "Image segmentation through optimal tessellation." Thesis, Imperial College London, 1988. http://hdl.handle.net/10044/1/47210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

O'Donnell, Lauren (Lauren Jean) 1976. "Semi-automatic medical image segmentation." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/87175.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2002.
Includes bibliographical references (leaves 92-96).
by Lauren O'Donnell.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
36

Spencer, Jack A. "Variational methods for image segmentation." Thesis, University of Liverpool, 2016. http://livrepository.liverpool.ac.uk/3003758/.

Full text
Abstract:
The work in this thesis is concerned with variational methods for two-phase segmentation problems. We are interested in both the obtaining of numerical solutions to the partial differential equations arising from the minimisation of a given functional, and forming variational models that tackle some practical problem in segmentation (e.g. incorporating prior knowledge, dealing with intensity inhomogeneity). With that in mind we will discuss each aspect of the work as follows. A seminal two-phase variational segmentation problem in the literature is that of Active Contours Without Edges, introduced by Chan and Vese in 2001, based on the piecewise-constant formulation of Mumford and Shah. The idea is to partition an image into two regions of homogeneous intensity. However, despite the extensive success of this work, its reliance on the level set method means that it is nonconvex. Later work on the convex reformulation of ACWE by Chan, Esedoglu, and Nikolova has led to a burgeoning of related methods, known as the convex relaxation approach. In Chapter 4, we introduce a method to find global minimisers of a general two-phase segmentation problem, which forms the basis for work in the rest of the thesis. We introduce an improved additive operator splitting (AOS) method based on the work of Weickert et al. and Tai et al. AOS has been frequently used for segmentation problems, but not in the convex relaxation setting. The adjustment made accounts for how to impose the relaxed binary constraint, fundamental to this approach. Our method is analogous to work such as Bresson et al. and we quantitatively compare our method against this by using a number of appropriate metrics. Having dealt with globally convex segmentation (GCS) for the general case in Chapter 4, we then bear in mind two important considerations. Firstly, we discuss the matter of selective segmentation and how it relates to GCS. Many recent models have incorporated user input for two-phase formulations using piecewise-constant fitting terms. In Chapter 5 we discuss the conditions for models of this type to be reformulated in a similar way. We then propose a new model compatible with convex relaxation methods, and present results for challenging examples. Secondly, we consider the incorporation of priors for GCS in Chapter 8. Here, the intention is to select objects in an image of a similar shape to a given prior. We consider the most appropriate way to represent shape priors in a variational formulation, and the potential applications of our approach. We also investigate the problem of segmentation where the observed data is challenging. We consider two cases in this thesis; in one there is significant intensity inhomogeneity, and in the other the image has been corrupted by unknown blur. The first has been widely studied and is closely related to the piecewise-smooth formulation of Mumford and Shah. In Chapter 6 we discuss a Variant Mumford-Shah Model by D. Chen et al. that uses the bias field framework. Our work focuses on improving results for methods of this type. The second has been less widely studied, but is more commonly considered when there is knowledge of the blur type. We discuss the advantages of simultaneously reconstructing and segmenting the image, rather than treating each problem separately and compare our method against comparable models. The aim of this thesis is to develop new variational methods for two-phase image segmentation, with potential applications in mind.
We also consider new schemes to compute numerical solutions for generalised segmentation problems. With both approaches we focus on convex relaxation methods, and consider the challenges of formulating segmentation problems in this manner. Where possible we compare our ideas against current approaches to determine quantifiable improvements, particularly with respect to accuracy and reliability.
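For reference, the convex relaxation that this line of work builds on can be stated in generic notation (f is the image, c_1 and c_2 the region intensity constants, lambda a weight; this is the standard Chan-Esedoglu-Nikolova form, not the thesis's full selective or prior-constrained functionals):

\min_{0 \le u \le 1} \int_{\Omega} |\nabla u|\,dx + \lambda \int_{\Omega} \big[(f(x) - c_1)^2 - (f(x) - c_2)^2\big]\, u(x)\,dx ,

where, for fixed c_1 and c_2, thresholding a minimiser u at any value in (0, 1) gives a global minimiser of the original nonconvex two-phase problem.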
APA, Harvard, Vancouver, ISO, and other styles
37

Brown, Ryan Charles. "IRIS: Intelligent Roadway Image Segmentation." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/49105.

Full text
Abstract:
The problem of roadway navigation and obstacle avoidance for unmanned ground vehicles has typically needed very expensive sensing to operate properly. To reduce the cost of sensing, it is proposed that an algorithm be developed that uses a single visual camera to image the roadway, determine where the lane of travel is in the image, and segment that lane. The algorithm would need to be as accurate as current lane finding algorithms as well as faster than a standard k-means segmentation across the entire image. This algorithm, named IRIS, was developed and tested on several sets of roadway images. The algorithm was tested for its accuracy and speed, and was found to be better than 86% accurate across all data sets for an optimal choice of algorithm parameters. IRIS was also found to be faster than a k-means segmentation across the entire image. IRIS was found to be adequate for fulfilling the design goals for the algorithm. IRIS is a feasible system for lane identification and segmentation, but it is not currently a viable system. More work to increase the speed of the algorithm and the accuracy of lane detection and to extend the inherent lane model to more complex road types is needed. IRIS represents a significant step forward in the single camera roadway perception field.
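The baseline being compared against, a k-means segmentation restricted to a region of interest, is easy to sketch; the function below is a generic illustration (the ROI mask, colour space and number of clusters are assumptions, not the parameters IRIS actually uses):

import numpy as np
from sklearn.cluster import KMeans

def segment_roi(image, roi, k=3):
    # image: (H, W, 3) RGB road frame; roi: (H, W) boolean mask of the candidate lane region
    pixels = image[roi].astype(float)            # (n, 3) colours inside the ROI only
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
    out = np.full(image.shape[:2], -1)
    out[roi] = labels                            # -1 marks pixels outside the ROI
    return out

Clustering only the candidate region instead of the full frame is what makes this kind of approach faster than a whole-image k-means, which is the speed comparison the abstract refers to.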
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
38

Keshtkar, Abolfazl. "Swarm intelligence-based image segmentation." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27525.

Full text
Abstract:
One of the major difficulties in image segmentation lies in the varying degrees of homogeneity of the different regions in a given image. Hence, it is more efficient to adopt adaptive-threshold methodologies to identify the regions in the images. Over the last decade, many image processing tools and techniques have emerged, based both on conventional technology and on newer, intelligence-based image processing techniques and algorithms. In some cases, a combination of both technologies is adopted to form a hybrid image processing technique. Intelligence-based techniques are becoming increasingly common, driven by the rapid growth of agent-based environments that adopt numerous agent-based applications, tools, models and software to enhance the quality of the agent-based approach. Among intelligent techniques for image processing, however, swarm intelligence has rarely been used for image segmentation or boundary detection. Several factors make this task challenging: the growing number of agents in the environment, how to efficiently find the right threshold in the image, and how to develop a flexible design and a fully autonomous system that supports different platforms. A flexible architecture and tools need to be defined that overcome these problems and permit smooth and effective image processing based on these new techniques, satisfying the needs of end users. This thesis presents the theoretical background, design, swarm-based intelligent techniques and implementation of a fully agent-based system called SIBIS (Swarm Intelligent Based Image Segmentation).
APA, Harvard, Vancouver, ISO, and other styles
39

Muller, Simon Adriaan. "Planar segmentation of range images." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80168.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Range images are images that store at each pixel the distance between the sensor and a particular point in the observed scene, instead of the colour information. They provide a convenient storage format for 3-D point cloud information captured from a single point of view. Range image segmentation is the process of grouping the pixels of a range image into regions of points that belong to the same surface. Segmentations are useful for many applications that require higherlevel information, and with range images they also represent a significant step towards complete scene reconstruction. This study considers the segmentation of range images into planar surfaces. It discusses the theory and also implements and evaluates some current approaches found in the literature. The study then develops a new approach based on the theory of graph cut optimization which has been successfully applied to various other image processing tasks but, according to a search of the literature, has otherwise not been used to attempt segmenting range images. This new approach is notable for its strong guarantees in optimizing a specific energy function which has a rigorous theoretical underpinning for handling noise in images. It proves to be very robust to noise and also different values of the few parameters that need to be trained. Results are evaluated in a quantitative manner using a standard evaluation framework and datasets that allow us to compare against various other approaches found in the literature. We find that our approach delivers results that are competitive when compared to the current state-of-the-art, and can easily be applied to images captured with different techniques that present varying noise and processing challenges.
AFRIKAANSE OPSOMMING: Dieptebeelde is beelde wat vir elke piksel die afstand tussen die sensor en ’n spesifieke punt in die waargenome toneel, in plaas van die kleur, stoor. Dit verskaf ’n gerieflike stoorformaat vir 3-D puntwolke wat vanaf ’n enkele sigpunt opgeneem is. Die segmentasie van dieptebeelde is die proses waarby die piksels van ’n dieptebeeld in gebiede opgedeel word, sodat punte saam gegroepeer word as hulle op dieselfde oppervlak lê. Segmentasie is nuttig vir verskeie toepassings wat hoërvlak inligting benodig en, in die geval van dieptebeelde, verteenwoordig dit ’n beduidende stap in die rigting van volledige toneel-rekonstruksie. Hierdie studie ondersoek segmentasie waar dieptebeelde opgedeel word in plat vlakke. Dit bespreek die teorie, en implementeer en evalueer ook sekere van die huidige tegnieke wat in die literatuur gevind kan word. Die studie ontwikkel dan ’n nuwe tegniek wat gebaseer is op die teorie van grafieksnit-optimering wat al suksesvol toegepas is op verskeie ander beeldverwerkingsprobleme maar, sover ’n studie op die literatuur wys, nog nie gebruik is om dieptebeelde te segmenteer nie. Hierdie nuwe benadering is merkbaar vir sy sterk waarborge vir die optimering van ’n spesifieke energie-funksie wat ’n sterk teoretiese fondasie het vir die hantering van geraas in beelde. Die tegniek bewys om fors te wees tot geraas sowel as die keuse van waardes vir die min parameters wat afgerig moet word. Resultate word geëvalueer op ’n kwantitatiewe wyse deur die gebruik van ’n standaard evalueringsraamwerk en datastelle wat ons toelaat om hierdie tegniek te vergelyk met ander tegnieke in die literatuur. Ons vind dat ons tegniek resultate lewer wat mededingend is ten opsigte van die huidige stand-van-die-kuns en dat ons dit maklik kan toepas op beelde wat deur verskeie tegnieke opgeneem is, alhoewel hulle verskillende geraastipes en verwerkingsuitdagings bied.
APA, Harvard, Vancouver, ISO, and other styles
40

Peixoto, Guilherme Garcia Schu. "Segmentação de imagens coloridas por árvores bayesianas adaptativas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/165108.

Full text
Abstract:
A segmentação de imagens consiste em urna tarefa de fundamental importância para diferentes aplicações em visão computacional, tais como por exemplo, o reconhecimento e o rastreamento de objetos, a segmentação de tomores/lesões em aplicações médicas, podendo também servir de auxílio em sistemas de reconhecimento facial. Embora exista uma extensa literatora abordando o problema de segmentação de imagens, tal tópico ainda continua em aberto para pesquisa. Particularmente, a tarefa de segmentar imagens coloridas é desafiadora devido as diversas inomogeneidades de cor, texturas e formas presentes nas feições descritivas das imagens. Este trabalho apresenta um novo método de clustering para abordar o problema da segmentação de imagens coloridas. Nós desenvolvemos uma abordagem Bayesiana para procura de máximos de densidade em urna distribuição discreta de dados, e representamos os dados de forma hierárquica originando clusters adaptativos a cada nível da hierarquia. Nós aplicamos o método de clustering proposto no problema de segmentação de imagens coloridas, aproveitando sua estrutura hierárquica, baseada em propriedades de árvores direcionadas, para representar hierarquicamente uma imagem colorida. Os experimentos realizados revelaram que o método de clustering proposto, aplicado ao problema de segmentação de imagens coloridas, obteve para a medida de performance Probabilistic Rand lndex (PRI) o valor de 0.8148 e para a medida Global Consistency Error (GCE) o valor 0.1701, superando um total de vinte e um métodos previamente propostos na literatura para o banco de dados BSD300. Comparações visuais confirmaram a competitividade da nossa abordagem em relação aos demais métodos testados. Estes resultados enfatizam a potencialidade do nosso método de clustering para abordar outras aplicações no domínio de Visão Computacional e Reconhecimento de Padrões.
Image segmentation is an essential task for several computer vision applications, such as object recognition, tracking and image retrieval. Although extensively studied in the literature, the problem of image segmentation remains an open topic of research. Particularly, the task of segmenting color images is challenging due to the inhomogeneities in the color regions encountered in natural scenes, often caused by the shapes of surfaces and their interactions with the illumination sources (e.g. causing shading and highlights) This work presents a novel non-supervised classification method. We develop a Bayesian framework for seeking modes on the underlying discrete distribution of data and we represent data hierarchically originating adaptive clusters at each levei of hierarchy. We apply the prnposal clustering technique for tackling the problem of color irnage segmentation, taking advantage of its hierarchical structure based on hierarchy properties of directed trees for representing fine to coarse leveis of details in an image. The experiments herein conducted revealed that the proposed clustering method applied to the color image segmentation problem, achieved for the Probabilistic Rand Index (PRI) performance measure the value of 0.8148 and for the Global Consistency Error (GCE) the value of 0.1701, outperforming twenty-three methods previously proposed in the literature for the BSD300 dataset. Visual comparison confirmed the competitiveness of our approach towards state-of-art methods publicly available in the literature. These results emphasize the great potential of our proposed clustering technique for tackling other applications in computer vision and pattem recognition.
APA, Harvard, Vancouver, ISO, and other styles
41

Elmowafy, Osama Mohammed Elsayed. "Image processing systems for TV image tracking." Thesis, University of Kent, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Toh, Vivian. "Statistical image analysis : length estimation and colour image segmentation." Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Xu, Dongxiang. "Image segmentation and its application on MR image analysis /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/6063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Gundersen, Henrik Mogens, and Bjørn Fossan Rasmussen. "An Application of Image Processing Techniques for Enhancement and Segmentation of Bruises in Hyperspectral Images." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9594.

Full text
Abstract:

Hyperspectral images contain vast amounts of data which can provide crucial information to applications within a variety of scientific fields. Increasingly powerful computer hardware has made it possible to efficiently treat and process hyperspectral images. This thesis is interdisciplinary and focuses on applying known image processing algorithms to a new problem domain, involving bruises on human skin in hyperspectral images. Currently, no research regarding image detection of bruises on human skin have been uncovered. However, several articles have been written on hyperspectral bruise detection on fruits and vegetables. Ratio, difference and principal component analysis (PCA) were commonly applied enhancement algorithms within this field. The three algorithms, in addition to K-means clustering and the watershed segmentation algorithm, have been implemented and tested through a batch application developed in C# and MATLAB. The thesis seeks to determine if the enhancement algorithms can be applied to improve bruise visibility in hyperspectral images for visual inspection. In addition, it also seeks to answer if the enhancements provide a better segmentation basis. Known spectral characteristics form the experimentation basis in addition to identification through visual inspection. To this end, a series of experiments were conducted. The tested algorithms provided a better description of the bruises, the extent of the bruising, and the severity of damage. However, the algorithms tested are not considered robust for consistency of results. It is therefore recommended that the image acquisition setup is standardised for all future hyperspectral images. A larger, more varied data set would increase the statistical power of the results, and improve test conclusion validity. Results indicate that the ratio, difference, and principal component analysis (PCA) algorithms can enhance bruise visibility for visual analysis. However, images that contained weakly visible bruises did not show significant improvements in bruise visibility. Non-visible bruises were not made visible using the enhancement algorithms. Results from the enhancement algorithms were segmented and compared to segmentations of the original reflectance images. The enhancement algorithms provided results that gave more accurate bruise regions using K-means clustering and the watershed segmentation. Both segmentation algorithms gave the overall best results using principal components as input. Watershed provided less accurate segmentations of the input from the difference and ratio algorithms.
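The three enhancement algorithms named here (ratio, difference and PCA) have simple generic forms. The sketch below assumes a reflectance cube with bands along the last axis and two illustrative band indices; it is not the authors' implementation:

import numpy as np
from sklearn.decomposition import PCA

def enhance(cube, band_a, band_b):
    # cube: (H, W, B) hyperspectral reflectance image; band_a, band_b: indices of two informative bands
    eps = 1e-6
    ratio = cube[..., band_a] / (cube[..., band_b] + eps)
    difference = cube[..., band_a] - cube[..., band_b]
    flat = cube.reshape(-1, cube.shape[-1])
    pc1 = PCA(n_components=1).fit_transform(flat).reshape(cube.shape[:2])
    return ratio, difference, pc1

Each of the three output images can then be passed to K-means clustering or watershed segmentation, which is the pipeline evaluated in the experiments described above.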

APA, Harvard, Vancouver, ISO, and other styles
45

Viall, Sarah F. "The feasibility of conducting manual image segmentation of 3D sonographic images of axillary lymph nodes." Connect to resource, 2009. http://hdl.handle.net/1811/36945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yin, Yin. "Multi-surface, multi-object optimal image segmentation: application in 3D knee joint imaged by MR." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/767.

Full text
Abstract:
A novel method called LOGISMOS - Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces - for simultaneous segmentation of multiple interacting surfaces belonging to multiple interacting objects is reported. The approach is based on representation of the multiple inter-relationships in a single n-dimensional graph, followed by graph optimization that yields a globally optimal solution. Three major contributions to LOGISMOS are made and illustrated in this thesis: 1) multi-object multi-surface optimal surface detection graph design, 2) implementation of a novel and reliable cross-object surface mapping technique and 3) pattern recognition-based graph cost design. The LOGISMOS method's utility and performance are demonstrated on a knee joint bone and cartilage segmentation task. Although trained on only a small number of nine example images, this system achieved good performance as judged by Dice Similarity Coefficients (DSC) using a leave-one-out test, with DSC values of 0.84±0.04, 0.80±0.04 and 0.80±0.04 for the femoral, tibial, and patellar cartilage regions, respectively. These are excellent values of DSC considering the narrow-sheet character of the cartilage regions. Similarly, very low signed mean cartilage thickness errors were observed when compared to a manually-traced independent standard in 60 randomly selected 3D MR image datasets from the Osteoarthritis Initiative database - 0.11±0.24, 0.05±0.23, and 0.03±0.17 mm for the femoral, tibial, and patellar cartilage thickness, respectively. The average signed surface positioning error for the 6 detected surfaces ranged from 0.04±0.12 mm to 0.16±0.22 mm, while the unsigned surface positioning error ranged from 0.22±0.07 mm to 0.53±0.14 mm. The reported LOGISMOS framework provides robust and accurate segmentation of the knee joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation tool, the developed framework can be applied to a broad range of multi-object multi-surface segmentation problems. Following the LOGISMOS-based cartilage segmentation, a fully automated meniscus segmentation system was built using a pattern recognition technique. The leave-one-out test for the nine training images showed a very good mean DSC of 0.80±0.04. The signed and unsigned surface positioning errors, when compared to a manually-traced independent standard in the 60 randomly selected 3D MR image datasets, are 0.65±0.20 and 0.68±0.20 mm, respectively.
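The Dice Similarity Coefficient used throughout this evaluation is defined as DSC = 2|A ∩ B| / (|A| + |B|) for an automatic segmentation A and a reference segmentation B; a minimal sketch (generic, not tied to the LOGISMOS code base):

import numpy as np

def dice(seg, ref):
    # seg, ref: boolean masks of the automatic and the manually traced segmentations
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())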
APA, Harvard, Vancouver, ISO, and other styles
47

Freitas, Claudio Cesar Silva de 1989. "Um estudo do reconhecimento de linhas palmares utilizando PCA e limiarização local adaptativa." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260048.

Full text
Abstract:
Advisor: Yuzo Iano
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: Está cada vez mais claro como a tecnologia biométrica tem se tornado mais presente no cotidiano das pessoas e tema de interesse de grupos de pesquisa ao redor do mundo. Isso é refletido pela grande quantidade de trabalhos existentes na área e muitos investimentos comerciais. Tecnologias biométricas são basicamente sistemas com capacidade de identificar e verificar a identidade de um indivíduo por meio de uma característica física ou comportamental. Esse trabalho propõe um estudo sobre o reconhecimento das linhas palmares que utiliza a análise de componentes principais como método de reconhecimento. A motivação para esse estudo está na importância de melhorar os métodos existentes de biometria, visto que ainda não existe uma técnica livre de erros ou falsificações. Este estudo é importante pois irá apresentar a aplicação do PCA para a detecção das linhas palmares utilizando uma técnica simples de limiarização adaptativa para extrair as informações biométricas da imagem palmar. Os resultados dessa pesquisa mostraram que o PCA apresentou um desempenho superior quando utilizamos a limiarização adaptativa para a extração das linhas principais da palma da mão. Conclui-se que essa modalidade biométrica apresenta um bom potencial para ser utilizada como medida de identificação e verificação de um usuário. Contudo, é necessário que sejam utilizados os algoritmos de processamento adequados, assim como, deve-se levar em consideração a qualidade e resolução da imagem, o tipo de processamento e o custo computacional necessário
Abstract: It is easy to identify how biometric technology has become more present in daily life, as it has become the subject of interest of research groups around the world. This reality is a result of the large amount of existing work in the area and of many commercial investments. Biometric technologies are basically systems developed in order to identify and verify the identity of an individual through a physical or behavioral characteristic. This work proposes a study on palmprint recognition using PCA and local adaptive thresholding. The motivation for this study is the importance of improving existing biometric methods, since no technique is completely free of errors or forgeries. The technique studied here is deliberately simple, in order to facilitate the development of a palmprint recognition system that can be applied in different systems, such as embedded systems. The results of this research showed that PCA reached superior performance when adaptive thresholding was used to extract the lines from the palmprint. We conclude that the biometric modality proposed in this study has good potential to be used for identification and verification of a user. However, it is necessary to use appropriate image processing algorithms in order to extract as much information as possible. Additionally, it is necessary to consider the image resolution, and the hardware and computational cost involved in the proposed method.
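Local adaptive thresholding of the kind mentioned here compares each pixel against a statistic of its neighbourhood rather than against a single global value. A minimal sketch using scikit-image (block size and offset are illustrative parameters, not the values used in the dissertation):

from skimage.filters import threshold_local

def extract_palm_lines(gray, block_size=35, offset=0.02):
    # gray: 2-D float palm image in [0, 1]; principal lines are darker than the surrounding skin
    t = threshold_local(gray, block_size=block_size, method='mean', offset=offset)
    return gray < t   # True where a pixel is darker than its local mean minus the offset

The binary line map obtained this way can then be fed to a PCA-based matcher, which is the combination the study evaluates.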
Mestrado
Telecomunicações e Telemática
Mestre em Engenharia Elétrica
APA, Harvard, Vancouver, ISO, and other styles
48

Diniz, Paula Rejane Beserra. "Segmentação de tecidos cerebrais usando entropia Q em imagens de ressonância magnética de pacientes com esclerose múltipla." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/17/17140/tde-11072008-124117/.

Full text
Abstract:
A perda volumétrica cerebral ou atrofia é um importante índice de destruição tecidual e pode ser usada para apoio ao diagnóstico e para quantificar a progressão de diversas doenças com componente degenerativo, como a esclerose múltipla (EM), por exemplo. Nesta doença ocorre perda tecidual regional, com reflexo no volume cerebral total. Assim, a presença e a progressão da atrofia podem ser usadas como um indexador da progressão da doença. A quantificação do volume cerebral é um procedimento relativamente simples, porém, quando feito manualmente é extremamente trabalhoso, consome grande tempo de trabalho e está sujeito a uma variação muito grande inter e intra-observador. Portanto, para a solução destes problemas há necessidade de um processo automatizado de segmentação do volume encefálico. Porém, o algoritmo computacional a ser utilizado deve ser preciso o suficiente para detectar pequenas diferenças e robusto para permitir medidas reprodutíveis a serem utilizadas em acompanhamentos evolutivos. Neste trabalho foi desenvolvido um algoritmo computacional baseado em Imagens de Ressonância Magnética para medir atrofia cerebral em controles saudáveis e em pacientes com EM, sendo que para a classificação dos tecidos foi utilizada a teoria da entropia generalizada de Tsallis. Foram utilizadas para análise exames de ressonância magnética de 43 pacientes e 10 controles saudáveis pareados quanto ao sexo e idade para validação do algoritmo. Os valores encontrados para o índice entrópico q foram: para o líquido cerebrorraquidiano 0,2; para a substância branca 0,1 e para a substância cinzenta 1,5. Nos resultados da extração do tecido não cerebral, foi possível constatar, visualmente, uma boa segmentação, fato este que foi confirmado através dos valores de volume intracraniano total. Estes valores mostraram-se com variações insignificantes (p>=0,05) ao longo do tempo. Para a classificação dos tecidos encontramos erros de falsos negativos e de falsos positivos, respectivamente, para o líquido cerebrorraquidiano de 15% e 11%, para a substância branca 8% e 14%, e substância cinzenta de 8% e 12%. Com a utilização deste algoritmo foi possível detectar um perda anual para os pacientes de 0,98% o que está de acordo com a literatura. Desta forma, podemos concluir que a entropia de Tsallis acrescenta vantagens ao processo de segmentação de classes de tecido, o que não havia sido demonstrado anteriormente.
The loss of brain volume, or atrophy, is an important index of tissue destruction and can be used to support diagnosis and to quantify the progression of neurodegenerative diseases, such as multiple sclerosis. In this disease, regional tissue loss occurs and is reflected in the whole brain volume. Thus, the presence and progression of atrophy can be used as an index of disease progression. The objective of this work was to determine a statistical segmentation parameter for each class of brain tissue using generalized Tsallis entropy. The computer algorithm used should be accurate and robust enough to detect small differences and allow reproducible measurements in follow-up evaluations. In this work we tested a new method for tissue segmentation based on pixel intensity thresholds, and compared its performance over different ranges of the q parameter. We found a different optimal q parameter for white matter, gray matter, and cerebrospinal fluid. The results support the conclusion that the differences in structural correlations and scale-invariant similarities present in each tissue class can be accessed by the generalized Tsallis entropy, yielding the intensity limits for the separation of these tissue classes. Magnetic resonance examinations of 43 patients and 10 healthy controls, matched for sex and age, were used for analysis and validation of the algorithm. The values found for the entropic index q were 0.2 for the cerebrospinal fluid, 0.1 for the white matter and 1.5 for the gray matter. The extraction of non-brain tissue produced visually good segmentations, which was confirmed by the values of total intracranial volume; these values showed insignificant variations (p >= 0.05) over time. For tissue classification, the false negative and false positive errors were, respectively, 15% and 11% for cerebrospinal fluid, 8% and 14% for white matter, and 8% and 12% for gray matter. With this algorithm it was possible to detect an annual volume loss of 0.98% in the patients, which is in line with the literature. Thus, we can conclude that Tsallis entropy adds advantages to the process of segmenting tissue classes, which had not been demonstrated previously.
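For reference, the generalized (Tsallis) q-entropy of a normalized histogram {p_i} is

S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad \lim_{q \to 1} S_q = -\sum_i p_i \ln p_i ,

and a common way of turning it into an intensity limit is to pick the threshold t that maximizes the pseudo-additive combination S_q^{A}(t) + S_q^{B}(t) + (1 - q)\,S_q^{A}(t)\,S_q^{B}(t) of the entropies of the two classes A and B induced by t. This is the standard formulation of q-entropy thresholding; the exact criterion used in the thesis may differ in detail.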
APA, Harvard, Vancouver, ISO, and other styles
49

Marcotegui, Beatriz. "Segmentation de séquences d'images en vue du codage." Phd thesis, École Nationale Supérieure des Mines de Paris, 1996. http://pastel.archives-ouvertes.fr/pastel-00002400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Pichon, Eric. "Novel Methods for Multidimensional Image Segmentation." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7504.

Full text
Abstract:
Artificial vision is the problem of creating systems capable of processing visual information. A fundamental sub-problem of artificial vision is image segmentation, the problem of detecting a structure from a digital image. Examples of segmentation problems include the detection of a road from an aerial photograph or the determination of the boundaries of the brain's ventricles from medical imagery. The extraction of structures allows for subsequent higher-level cognitive tasks. One of them is shape comparison. For example, if the brain ventricles of a patient are segmented, can their shapes be used for diagnosis? That is to say, do the shapes of the extracted ventricles resemble more those of healthy patients or those of patients suffering from schizophrenia? This thesis deals with the problem of image segmentation and shape comparison in the mathematical framework of partial differential equations. The contribution of this thesis is threefold: 1. A technique for the segmentation of regions is proposed. A cost functional is defined for regions based on a non-parametric functional of the distribution of image intensities inside the region. This cost is constructed to favor regions that are homogeneous. Regions that are optimal with respect to that cost can be determined with limited user interaction. 2. The use of direction information is introduced for the segmentation of open curves and closed surfaces. A cost functional is defined for structures (curves or surfaces) by integrating a local, direction-dependent pattern detector along the structure. Optimal structures, corresponding to the best match with the pattern detector, can be determined using efficient algorithms. 3. A technique for shape comparison based on the Laplace equation is proposed. Given two surfaces, one-to-one correspondences are determined that allow for the characterization of local and global similarity measures. The local differences among shapes (resulting for example from a segmentation step) can be visualized for qualitative evaluation by a human expert. It can also be used for classifying shapes into, for example, normal and pathological classes.
APA, Harvard, Vancouver, ISO, and other styles