To view the other types of publications on this topic, follow the link: Lines detection and segmentation.

Dissertations on the topic "Lines detection and segmentation"

Browse the top 50 dissertations for your research on the topic "Lines detection and segmentation".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its annotation online, whenever the relevant parameters are available in the work's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Li, Yaqian. „Image segmentation and stereo vision matching based on declivity line : application for vehicle detection“. Thesis, Rouen, INSA, 2010. http://www.theses.fr/2010ISAM0010.

Annotation:
In the framework of driving assistance systems, we contributed stereo-vision approaches for edge extraction, matching of stereoscopic image pairs, and vehicle detection. Edge extraction is performed using the concept of declivity line that we introduced. A declivity line is constructed by connecting declivities according to their relative position and intensity similarity, and edges are obtained by filtering the constructed declivity lines based on their characteristics. Experimental results show that the declivity-line method extracts additional useful information compared to the declivity operator, which filtered it out. Edge points of declivity lines are then matched using dynamic programming, and the characteristics of declivity lines reduce the number of false matches. In our matching method, declivity lines contribute to a detailed reconstruction of the 3D scene. Finally, the symmetrical characteristics of vehicles are exploited as a criterion for their detection. To do so, we extend the monocular concept of a symmetry map to stereo vision. Consequently, by performing vehicle detection on the disparity map, an (axis; width; disparity) symmetry map is constructed instead of an (axis; width) symmetry map. In our stereo concept, obstacles are examined at different depths, thus avoiding the disturbance from complex scenes that the monocular concept suffers from.
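
As a rough illustration of the declivity-line construction described above, the sketch below chains per-row intensity transitions ("declivities") across neighbouring rows when they are close in column position and similar in intensity. This is not the author's implementation; the function names, thresholds and the simple gradient-based declivity detector are assumptions.

import numpy as np

def find_declivities(row, grad_thresh=10):
    # Columns where the intensity gradient along one image row exceeds a threshold.
    grad = np.diff(row.astype(np.int32))
    return [c for c in range(len(grad)) if abs(grad[c]) > grad_thresh]

def link_declivity_lines(image, max_col_shift=2, max_int_diff=20, min_length=5):
    # Greedily chain declivities of successive rows into declivity lines.
    lines, open_lines = [], []
    for r in range(image.shape[0]):
        next_open = []
        for c in find_declivities(image[r]):
            linked = None
            for line in open_lines:
                lr, lc = line[-1]
                if (abs(lc - c) <= max_col_shift and
                        abs(int(image[lr, lc]) - int(image[r, c])) <= max_int_diff):
                    linked = line
                    break
            if linked is None:
                linked = []
                lines.append(linked)
            linked.append((r, c))
            next_open.append(linked)
        open_lines = next_open
    # Filtering step: keep only lines long enough to be meaningful edge evidence.
    return [line for line in lines if len(line) >= min_length]
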
2

Bonakdar, Sakhi Omid. „Segmentation of heterogeneous document images : an approach based on machine learning, connected components analysis, and texture analysis“. PhD thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00912566.

Annotation:
Document page segmentation is one of the most crucial steps in document image analysis. It ideally aims to explain the full structure of any document page, distinguishing text zones, graphics, photographs, halftones, figures, tables, etc. Although several attempts at achieving correct page segmentation have been made to date, many difficulties remain. The leader of the project in the framework of which this PhD work has been funded (*) uses a complete processing chain in which page segmentation mistakes are manually corrected by human operators. Aside from the cost this represents, it demands tuning a large number of parameters; moreover, some segmentation mistakes occasionally escape the vigilance of the operators. Current automated page segmentation methods are well accepted for clean printed documents, but they often fail to separate regions in handwritten documents when the document layout structure is loosely defined or when side notes are present inside the page. Moreover, tables and advertisements bring additional challenges for region segmentation algorithms. Our method addresses these problems and is divided into four parts:
1. Unlike most popular page segmentation methods, we first separate the text and graphics components of the page using a boosted decision tree classifier.
2. The separated text and graphics components are used, among other features, to separate columns of text in a two-dimensional conditional random field framework.
3. A text line detection method based on piecewise projection profiles is then applied to detect text lines with respect to text region boundaries.
4. Finally, a new paragraph detection method, trained on common models of paragraphs, is applied to the text lines to find paragraphs based on the geometric appearance of text lines and their indentations.
Our contribution over existing work lies essentially in the use, or adaptation, of algorithms borrowed from the machine learning literature to solve difficult cases. Indeed, we demonstrate a number of improvements: on separating text columns when one is situated very close to the other; on preventing the contents of a table cell from being merged with the contents of adjacent cells; and on preventing regions inside a frame from being merged with other text regions around them, especially side notes, even when the latter are written using a font similar to that of the text body. A quantitative assessment and comparison of the performance of our method with competitive algorithms, using widely acknowledged metrics and evaluation methodologies, is also provided to a large extent.
(*) This PhD thesis has been funded by the Conseil Général de Seine-Saint-Denis, through the FUI6 project Demat-Factory, led by Safig SA.
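
Step 3 above relies on projection profiles computed piecewise over a text region. A minimal sketch of that idea follows, assuming a binarised region where text pixels are 1; the strip count, the zero-ink separator rule and the function name are assumptions rather than the thesis's exact algorithm.

import numpy as np

def detect_text_lines(binary_region, num_pieces=4, min_height=2):
    # Split the region into vertical strips, take a horizontal ink projection in
    # each strip, and report the row intervals with non-zero ink as text lines.
    h, w = binary_region.shape
    piece_width = max(1, w // num_pieces)
    lines_per_piece = []
    for p in range(num_pieces):
        strip = binary_region[:, p * piece_width:(p + 1) * piece_width]
        profile = strip.sum(axis=1)
        in_line, start, lines = False, 0, []
        for r, ink in enumerate(profile):
            if ink > 0 and not in_line:
                in_line, start = True, r
            elif ink == 0 and in_line:
                in_line = False
                if r - start >= min_height:
                    lines.append((start, r))      # (top, bottom) of one text line
        if in_line:
            lines.append((start, h))
        lines_per_piece.append(lines)
    return lines_per_piece
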
3

Khairallah, Mahmoud. „Flow-Based Visual-Inertial Odometry for Neuromorphic Vision Sensors“. Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST117.

Annotation:
Rather than generating images constantly and synchronously, neuromorphic vision sensors, also known as event-based cameras, permit each pixel to provide information independently and asynchronously whenever a brightness change is detected. Consequently, neuromorphic vision sensors do not suffer from the problems of conventional frame-based cameras such as image artifacts and motion blur. Furthermore, they can provide lossless data compression, higher temporal resolution and higher dynamic range. Hence, event-based cameras conveniently replace frame-based cameras in robotic applications requiring high maneuverability and varying environmental conditions. In this thesis, we address the problem of visual-inertial odometry using event-based cameras and an inertial measurement unit. Exploiting the consistency of event-based cameras with the brightness constancy condition, we discuss the feasibility of building a visual odometry system based on optical flow estimation. We develop our approach based on the assumption that event-based cameras provide edge-like information about the objects in the scene and apply a line detection algorithm for data reduction. Line tracking saves computation time and provides a better representation of the environment than feature points. In this thesis, we not only present an approach for event-based visual-inertial odometry but also event-based algorithms that can be used stand-alone or integrated into other approaches if needed.
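
To make the line-based data-reduction step concrete, here is a very simplified sketch that accumulates a short time window of events into a binary frame and extracts line segments with OpenCV's probabilistic Hough transform. The (x, y, t, polarity) event layout, the window length and all thresholds are assumptions, and the thesis's actual detector and tracker work differently.

import numpy as np
import cv2

def lines_from_events(events, width, height, window_s=0.01):
    # Accumulate the most recent time window of events into a binary frame and
    # detect straight segments with a probabilistic Hough transform.
    events = np.asarray(events, dtype=float)          # rows: (x, y, t, polarity)
    t_end = events[:, 2].max()
    recent = events[events[:, 2] >= t_end - window_s]
    frame = np.zeros((height, width), dtype=np.uint8)
    frame[recent[:, 1].astype(int), recent[:, 0].astype(int)] = 255
    segments = cv2.HoughLinesP(frame, rho=1, theta=np.pi / 180, threshold=30,
                               minLineLength=20, maxLineGap=5)
    return [] if segments is None else segments[:, 0, :]   # (x1, y1, x2, y2) rows
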
4

Wigington, Curtis Michael. „End-to-End Full-Page Handwriting Recognition“. BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7099.

Annotation:
Despite decades of research, offline handwriting recognition (HWR) of historical documents remains a challenging problem, which if solved could greatly improve the searchability of online cultural heritage archives. Historical documents are plagued with noise, degradation, ink bleed-through, overlapping strokes, variation in slope and slant of the writing, and inconsistent layouts. Often the documents in a collection have been written by thousands of authors, all of whom have significantly different writing styles. In order to better capture the variations in writing styles we introduce a novel data augmentation technique. This method achieves state-of-the-art results on modern datasets written in English and French and a historical dataset written in German. HWR models are often limited by the accuracy of the preceding steps of text detection and segmentation. Motivated by this, we present a deep learning model that jointly learns text detection, segmentation, and recognition using mostly images without detection or segmentation annotations. Our Start, Follow, Read (SFR) model is composed of a Region Proposal Network to find the start position of handwriting lines, a novel line follower network that incrementally follows and preprocesses lines of (perhaps curved) handwriting into dewarped images, and a CNN-LSTM network to read the characters. SFR exceeds the performance of the winner of the ICDAR 2017 handwriting recognition competition, even when not using the provided competition region annotations.
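
One common way to realise an augmentation of the kind mentioned above is a random grid warp of the line image; the sketch below shows that generic flavour and is an assumption about the idea, not the thesis's exact method. Grid size and displacement range are illustrative.

import numpy as np
import cv2

def random_grid_warp(img, grid=8, max_shift=2.0, seed=None):
    # Jitter a coarse displacement grid, upsample it to full resolution and
    # remap the image through it, producing a smoothly warped variant.
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    dx = rng.uniform(-max_shift, max_shift, (grid, grid)).astype(np.float32)
    dy = rng.uniform(-max_shift, max_shift, (grid, grid)).astype(np.float32)
    dx = cv2.resize(dx, (w, h), interpolation=cv2.INTER_CUBIC)
    dy = cv2.resize(dy, (w, h), interpolation=cv2.INTER_CUBIC)
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return cv2.remap(img, xs + dx, ys + dy, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)
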
5

Torr, Philip Hilaire Sean. „Motion segmentation and outlier detection“. Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308173.

6

Deng, Jingjing (Eddy). „Adaptive learning for segmentation and detection“. Thesis, Swansea University, 2017. https://cronfa.swan.ac.uk/Record/cronfa36297.

Annotation:
Segmentation and detection are two fundamental problems in computer vision and medical image analysis; they are intrinsically interlinked by the nature of machine learning based classification, especially supervised learning methods. Many automatic segmentation methods have been proposed which rely heavily on hand-crafted discriminative features for specific geometry and a powerful classifier for delineating the foreground object and the background region. The aim of this thesis is to investigate adaptive schemes that can be used to derive efficient interactive segmentation methods for medical imaging applications, and adaptive detection methods for addressing generic computer vision problems. In this thesis, we consider adaptive learning as a progressive learning process that gradually builds the model given sequential supervision from user interactions. The learning process can be either adaptive re-training for small-scale models and datasets or adaptive fine-tuning for medium to large scale. In addition, adaptive learning is considered as a progressive learning process that gradually subdivides a big and difficult problem into a set of smaller but easier problems, where a final solution can be found by combining the individual solvers consecutively. We first show that when discriminative features are readily available, the adaptive learning scheme can lead to an efficient interactive method for segmenting the coronary artery, where promising segmentation results can be achieved with limited user intervention. We then present a more general interactive segmentation method that integrates a CNN based cascade classifier and a parametric implicit shape representation. The features are self-learnt during the supervised training process; no hand-crafting is required. The segmentation can then be obtained by imposing a piecewise constant constraint on the detection result through the proposed shape representation using region based deformation. Finally, we show that the adaptive learning scheme can also be used to address the face detection problem in an unconstrained environment, where two CNN based cascade detectors are proposed. Qualitative and quantitative evaluations of the proposed methods are reported and show the efficiency of adaptive schemes for addressing segmentation and detection problems in general.
7

Hastings, Joseph R. 1980. „Incremental Bayesian segmentation for intrusion detection“. Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/28399.

Annotation:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004.
Includes bibliographical references (leaves 131-133).
This thesis describes an attempt to monitor patterns of system calls generated by a Unix host in order to detect potential intrusion attacks. Sequences of system calls generated by privileged processes are analyzed using incremental Bayesian segmentation in order to detect anomalous activity. Theoretical analysis of various aspects of the algorithm and empirical analysis of performance on synthetic data sets are used to tune the algorithm for use as an Intrusion Detection System.
by Joseph R. Hastings.
M.Eng.
8

Nedilko, Bohdan. „Seismic detection of rockfalls on railway lines“. Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58097.

Annotation:
Railway operators mitigate the risk of derailments caused by hazardous rocks falling onto the track by installing slide detector fences (SDF). These consist of electrical sensing wires strung on poles located uphill of the track; falling rocks snap these wires and trigger an alarm. Rocks of non-threatening size and migrating animals frequently break the wires causing prolonged false alarms and delaying rail traffic until the SDF is manually repaired, often in a hazardous environment. This thesis is concerned with the development of a prototype of the autonomous Seismic Rockfall Detection System (SRFDS) as a potential replacement for the SDF. Analysis and classification of natural and anthropogenic seismic signals which have been observed at the SRFDS field installations, is presented. A method for identification of hazardous rocks (>0.028 m³) using an empirical peak ground velocity attenuation model is outlined. Pattern recognition techniques which are based on cross-correlation and on variations in the short-term / long term averages of the ground vibrations are introduced for rail traffic identification and rockfall detection. The techniques allow the SRFDS to eliminate false activations by rail traffic, report hazardous rocks with minimum (< 3 s) delay, and rearm automatically when a false alarm is revealed. Performance of the SRFDS field installations was modeled using continuous seismic data recorded at two locations where the SRFDS and the SDF operate in parallel. The SRFDS computer model detected all major rock slides; it was significantly less likely than the SDF to be triggered by animal migration, but may be susceptible to thermal noise in very specific situations. A comparison of the actual number of the train delays caused by the existing SDF with those of the SRFDS computer model, shows that the use of the SRFDS will reduce the average number of delayed trains. The actual reduction of the number of delayed trains is between 3 and 8 times, depending on the location. Train delays caused by false triggers induced by construction activities and track maintenance could still exist; however, they can be eliminated by the adoption of the appropriate track management procedures.
Faculty of Science; Department of Earth, Ocean and Atmospheric Sciences; Graduate.
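
The pattern-recognition step above mentions short-term / long-term averages of the ground vibrations. A classic STA/LTA trigger of that kind can be sketched as follows; the window lengths, the threshold and the simple moving-average implementation are illustrative choices, not the values used in the SRFDS.

import numpy as np

def sta_lta_trigger(trace, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    # Short-term / long-term average ratio on a ground-velocity trace sampled
    # at fs Hz; returns the sample indices where the ratio exceeds the threshold.
    x = np.abs(np.asarray(trace, dtype=np.float64))
    n_sta = max(1, int(sta_win * fs))
    n_lta = max(1, int(lta_win * fs))
    sta = np.convolve(x, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(x, np.ones(n_lta) / n_lta, mode="same") + 1e-12
    return np.flatnonzero(sta / lta > threshold)
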
9

Torrent, Palomeras Albert. „Simultaneous detection and segmentation for generic objects“. Doctoral thesis, Universitat de Girona, 2013. http://hdl.handle.net/10803/117736.

Annotation:
This thesis deals with the simultaneous detection and segmentation of generic objects in images. The proposed approach is based on building a dictionary of patches, which defines the object and allows the extraction of the detection and segmentation features used to train the classifier. Moreover, we include in the boosting training the ability to cross information between detection and segmentation, with the aim that good detections may help to better segment and vice versa. We also adapt the detection proposal to deal with specific problems of object recognition in medical and astronomical images. This point stresses one of the objectives of this thesis: proposing a generic approach able to deal with objects of a very different nature.
10

Hegstam, Björn. „Defect detection and segmentation in multivariate image streams“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142069.

Annotation:
OptoNova is a world-leading producer of inspection systems for quality control of surfaces and edges at high rates. They develop their own sensor systems and software and have taken an interest in investigating the possibility of using methods from machine learning to make better use of the available sensor data. The purpose of this project was to develop a method for finding surface defects based on multivariate images. A previous Master's project done at OptoNova had shown promising results when applying machine learning methods to inspect the sides of kitchen cabinet doors. The model developed for that project was based around using a Difference of Gaussians scale-space. That was used as a starting ground for the work presented here, with changes made in order to focus on texture defects on flat surfaces. The final model works by creating a Laplacian image pyramid from a source image. Each pyramid level is processed by a trained image model that, given a multivariate image, produces a greyscale image indicating defect areas. The outputs of all image models are scaled to the same size and averaged together. This gives the final probability map indicating which parts of the sample are defective. The image models consist of a feature extractor, extracting one feature per pixel, and a feature model, which in this project was a Gaussian mixture model. The model was built in a modular fashion, making it easy to use different features and feature models. Tests showed the pyramid model to perform better than the previous model. Defects characterised by noticeable differences in surface texture gave excellent results, while defects only indicated by slight changes in the intensity of the normal texture were generally not found. It was concluded that the developed model shows potential, but more work needs to be done. More tests need to be run using larger data sets and samples with different texture types, such as wooden surfaces.
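
A minimal sketch of the pyramid-plus-GMM pipeline described above, using OpenCV and scikit-learn. The number of levels, the per-pixel feature (the raw band response) and the negative log-likelihood score are assumptions; the feature extractor in the thesis is more elaborate. The models would be fitted on defect-free reference images with fit_level_models before scoring new samples with defect_map.

import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

def laplacian_pyramid(img, levels=3):
    # Build a float32 Laplacian pyramid of the (possibly multi-channel) image.
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)
        cur = down
    return pyr

def fit_level_models(reference_imgs, levels=3, components=5):
    # Fit one GaussianMixture per pyramid level on bands from defect-free images.
    models = []
    for lvl in range(levels):
        feats = []
        for ref in reference_imgs:
            band = laplacian_pyramid(ref, levels)[lvl]
            feats.append(band.reshape(-1, 1 if band.ndim == 2 else band.shape[2]))
        models.append(GaussianMixture(n_components=components).fit(np.vstack(feats)))
    return models

def defect_map(img, gmms, levels=3):
    # Score every pyramid level with its GMM, turn the negative log-likelihood
    # into a per-pixel map, upsample the maps and average them.
    h, w = img.shape[:2]
    maps = []
    for lvl, band in enumerate(laplacian_pyramid(img, levels)):
        feats = band.reshape(-1, 1 if band.ndim == 2 else band.shape[2])
        score = -gmms[lvl].score_samples(feats).reshape(band.shape[:2])
        maps.append(cv2.resize(score.astype(np.float32), (w, h)))
    return np.mean(maps, axis=0)   # higher value = more likely to be a defect
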
11

Beare, Richard. „Image segmentation based on local motion detection /“. Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phb3684.pdf.

12

Hegstam, Björn. „Defect detection and segmentation in multivariate image streams“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150456.

Annotation:
OptoNova is a world-leading producer of inspection systems for quality control of surfaces and edges at high rates. They develop their own sensor systems and software and have taken an interest in investigating the possibility of using methods from machine learning to make better use of the available sensor data. The purpose of this project was to develop a method for finding surface defects based on multivariate images. A previous Master's project done at OptoNova had shown promising results when applying machine learning methods to inspect the sides of kitchen cabinet doors. The model developed for that project was based around using a Difference of Gaussians scale-space. That was used as a starting ground for the work presented here, with changes made in order to focus on texture defects on flat surfaces. The final model works by creating a Laplacian image pyramid from a source image. Each pyramid level is processed by a trained image model that, given a multivariate image, produces a greyscale image indicating defect areas. The outputs of all image models are scaled to the same size and averaged together. This gives the final probability map indicating which parts of the sample are defective. The image models consist of a feature extractor, extracting one feature per pixel, and a feature model, which in this project was a Gaussian mixture model. The model was built in a modular fashion, making it easy to use different features and feature models. Tests showed the pyramid model to perform better than the previous model. Defects characterised by noticeable differences in surface texture gave excellent results, while defects only indicated by slight changes in the intensity of the normal texture were generally not found. It was concluded that the developed model shows potential, but more work needs to be done. More tests need to be run using larger data sets and samples with different texture types, such as wooden surfaces.
13

Eltayef, Khalid Ahmad A. „Segmentation and lesion detection in dermoscopic images“. Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/16211.

Annotation:
Malignant melanoma is one of the most fatal forms of skin cancer. It has also become increasingly common, especially among white-skinned people exposed to the sun. Early detection of melanoma is essential to raise survival rates, since the disease can be treated and cured when caught at an early stage. Working out the dermoscopic clinical features (pigment network and lesion borders) of melanoma is a vital step for dermatologists, who require an accurate method of reaching the correct clinical diagnosis and of ensuring that the right area receives the correct treatment. These structures are considered among the main keys that indicate melanoma or non-melanoma disease. However, determining these clinical features can be a time-consuming, subjective (even for trained clinicians) and challenging task for several reasons: lesions vary considerably in size and colour, there is low contrast between an affected area and the surrounding healthy skin, especially in early stages, and elements such as hair, reflections, oils and air bubbles are present on almost all images. This thesis aims to provide an accurate, robust and reliable automated dermoscopy image analysis technique to facilitate the early detection of malignant melanoma. In particular, four innovative methods are proposed for region segmentation and classification: two for pigmented region segmentation, one for pigment network detection, and one for lesion classification. For boundary delineation, four pre-processing operations, namely a Gabor filter, image sharpening, a Sobel filter and image inpainting, are integrated into the segmentation approach to remove unwanted objects (noise) and enhance the appearance of the lesion boundaries in the image. The lesion border segmentation is performed using two alternative approaches. In the first, Fuzzy C-means and Markov Random Field approaches detect the lesion boundary by iteratively relabelling the pixels in all clusters. In the second, Particle Swarm Optimization is combined with the Markov Random Field to perform a local search and properly reassign all image pixels to their clusters, which achieves greater accuracy for the same aim. With respect to pigment network detection, the aforementioned pre-processing is applied in order to remove most of the hair while keeping the image information and to increase the visibility of the pigment network structures. A Gabor filter with connected component analysis is then used to detect the pigment network lines, before several features are extracted and fed to an Artificial Neural Network as the classifier. In the lesion classification approach, K-means is applied to the segmented lesion to separate it into homogeneous clusters from which important features are extracted; then, an Artificial Neural Network with Radial Basis Functions is trained on representative features to classify the given lesion as melanoma or not. The lesion border segmentation methods, Fuzzy C-means with Markov Random Field and the combination of Particle Swarm Optimization with Markov Random Field, achieved strong experimental results with average accuracies of 94.00% and 94.74%, respectively. The lesion classification stage, using features extracted from pigment network structures and from segmented lesions, achieved average accuracies of 90.1% and 95.97%, respectively. The results for the entire experiment were obtained on the public PH2 database comprising 200 images. The results were then compared with existing methods in the literature, which demonstrated that the proposed approach is accurate, robust, and efficient in segmenting the lesion boundary, in addition to its classification.
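
The first border-segmentation route above builds on Fuzzy C-means; a plain FCM sketch is given below for reference. The MRF and PSO refinements from the thesis are not reproduced, and the fuzzifier, iteration count and random initialisation are illustrative.

import numpy as np

def fuzzy_c_means(pixels, n_clusters=2, m=2.0, n_iter=50, eps=1e-9, seed=0):
    # Plain Fuzzy C-means on an (n_pixels, n_features) array; returns the
    # cluster centres and the soft membership matrix.
    rng = np.random.default_rng(seed)
    x = np.asarray(pixels, dtype=np.float64)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to one
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / (um.sum(axis=0)[:, None] + eps)
        dist = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + eps
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)      # standard FCM update
    return centres, u
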
14

Ström, Bartunek Josef. „FINGERPRINT IMAGE ENHANCEMENT, SEGMENTATION AND MINUTIAE DETECTION“. Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11149.

Annotation:
Prior to the 1960s, fingerprint analysis was carried out manually by human experts and for forensic purposes only. Automated fingerprint identification systems (AFIS) have been developed during the last 50 years. The success of AFIS led to its use expanding beyond forensic applications and becoming common in civilian applications as well. Mobile phones and computers equipped with fingerprint sensing devices for fingerprint-based user identification are common today. Despite the intense development efforts, a major problem in automatic fingerprint identification is acquiring reliable matching features from fingerprint images of poor quality. Images where the fingerprint pattern is heavily degraded usually inhibit the performance of an AFIS. The performance of AFIS is also reduced when matching fingerprints of individuals with large age variations. This doctoral thesis presents contributions within the field of fingerprint image enhancement, segmentation and minutiae detection. The reliability of the extracted fingerprint features is highly dependent on the quality of the obtained fingerprints. Unfortunately, it is not always possible to have access to high quality fingerprints. Therefore, prior to the feature extraction, an enhancement of the quality of the fingerprints and a segmentation are performed. The segmentation separates the fingerprint pattern from the background and thus limits possible sources of error due to, for instance, feature outliers. Most enhancement and segmentation techniques are data-driven and therefore based on certain features extracted from the low quality fingerprints at hand. Hence, different types of processing, such as directional filtering, are employed for the enhancement. This thesis contributes new research both for improving fingerprint matching and for the required pre-processing that improves the extraction of features to be used in fingerprint matching systems. In particular, the majority of the enhancement and segmentation methods proposed herein are adaptive to the characteristics of each fingerprint image; the methods are thus insensitive to sensor and fingerprint variability. Furthermore, the use of higher order statistics (kurtosis) for fingerprint segmentation is introduced. Segmentation of the fingerprint image reduces the computational load by excluding background regions of the fingerprint image from further processing. The thesis also presents a neural network based minutiae detector with a patch rejection mechanism for more robust and faster minutiae detection.
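
The kurtosis-based segmentation idea mentioned above can be illustrated with a block-wise sketch like the following. The block size, the threshold and the sign convention (which side of the threshold is treated as fingerprint foreground) are assumptions, not the thesis's tuned values.

import numpy as np
from scipy.stats import kurtosis

def segment_by_kurtosis(img, block=16, thresh=0.0):
    # Block-wise foreground mask from the kurtosis of grey values in each block;
    # blocks whose distribution departs from Gaussian are kept as fingerprint area.
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = img[r:r + block, c:c + block].astype(np.float64).ravel()
            k = kurtosis(patch, fisher=True, bias=True)
            mask[r:r + block, c:c + block] = k > thresh
    return mask
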
15

Zhang, Jingdan McMillan Leonard. „Object detection and segmentation using discriminative learning“. Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2181.

Annotation:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009.
Title from electronic title page (viewed Jun. 26, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
16

Wang, Qiong. „Salient object detection and segmentation in videos“. Thesis, Rennes, INSA, 2019. http://www.theses.fr/2019ISAR0003/document.

Annotation:
This thesis focuses on the problems of video salient object detection and video object instance segmentation, which aim to detect the most attracting objects or to assign consistent object IDs to each pixel in a video sequence. One approach, one overview and one extended model are proposed for video salient object detection, and one approach is proposed for video object instance segmentation. For video salient object detection, we propose: (1) a traditional approach to detect the whole salient object via the adjunction of virtual borders; a guided filter is applied on the temporal output to integrate the spatial edge information for a better detection of the salient object edges, and a global spatio-temporal saliency map is obtained by combining the spatial saliency map and the temporal saliency map according to the entropy. (2) An overview of recent developments in deep-learning based methods, including a classification of the state-of-the-art methods and their frameworks, and an experimental comparison of their performances. (3) An extended model that further improves the performance of the proposed traditional approach by integrating a deep-learning based image salient object detection method. For video object instance segmentation, we propose a deep-learning approach in which a warping confidence computation first judges the confidence of the warped mask map, and a semantic selection is then introduced to optimize the warped map, where the object is re-identified using the semantic labels of the target object. The proposed approaches have been assessed on published large-scale and challenging datasets. The experimental results show that the proposed approaches outperform the state-of-the-art methods.
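
For the entropy-based combination of the spatial and temporal saliency maps in approach (1), one plausible reading is to weight each normalised map by the inverse of its histogram entropy, so that a more concentrated map gets a larger weight; the exact rule used in the thesis may differ, and the bin count below is arbitrary.

import numpy as np

def histogram_entropy(saliency, bins=64):
    # Shannon entropy of the histogram of a saliency map normalised to [0, 1].
    hist, _ = np.histogram(saliency, bins=bins, range=(0.0, 1.0))
    p = hist / max(1, hist.sum())
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_saliency(spatial, temporal):
    # Normalise both maps, weight each by the inverse of its histogram entropy
    # and blend them into a single spatio-temporal saliency map.
    s = (spatial - spatial.min()) / (spatial.max() - spatial.min() + 1e-9)
    t = (temporal - temporal.min()) / (temporal.max() - temporal.min() + 1e-9)
    ws = 1.0 / (histogram_entropy(s) + 1e-9)
    wt = 1.0 / (histogram_entropy(t) + 1e-9)
    return (ws * s + wt * t) / (ws + wt)
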
17

Mallu, Mallu. „Fashion Object Detection and Pixel-Wise Semantic Segmentation : Crowdsourcing framework for image bounding box detection & Pixel-Wise Segmentation“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234691.

Annotation:
Technology has revamped every aspect of our life; one of those facets is the fashion industry. Plenty of deep learning architectures are taking shape to augment the fashion experience for everyone. There are numerous possibilities for enhancing fashion technology with deep learning. One of the key ideas is to generate fashion styles and recommendations using artificial intelligence. Likewise, another significant goal is to gather reliable information on fashion trends, which includes analysis of existing fashion-related images and data. When dealing specifically with images, localisation and segmentation are well known to address in-depth study of the pixels, objects and labels present in the image. In this master thesis a complete framework is presented to perform localisation and segmentation on fashionista images. This work is part of a research project related to fashion style detection and recommendation. The developed solution aims to leverage the possibility of localising fashion items in an image by drawing bounding boxes and labelling them. Along with that, it also provides pixel-wise semantic segmentation functionality which extracts fashion item label-pixel data. The collected data can serve as ground truth as well as training data for the targeted deep learning architecture. A study related to the localisation and segmentation of videos is also presented in this work. The developed system has been evaluated in terms of flexibility, output quality and reliability as compared to similar platforms. It has proven to be a fully functional solution capable of providing essential localisation and segmentation services while keeping the core architecture simple and extensible.
18

Gandhi, Tarak L. „Image sequence analysis for object detection and segmentation“. Adobe Acrobat reader required to view the full dissertation, 2000. http://www.etda.libraries.psu.edu/theses/approved/PSUonlyIndex/ETD-18/index.html.

19

Donnelley, Martin. „Computer Aided Long-Bone Segmentation and Fracture Detection“. Flinders University. Engineering, 2008. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20080115.222927.

Annotation:
Medical imaging has advanced at a tremendous rate since x-rays were discovered in 1895. Today, x-ray machines produce extremely high-quality images for radiologists to interpret. However, the methods of interpretation have only recently begun to be augmented by advances in computer technology. Computer aided diagnosis (CAD) systems that guide healthcare professionals to making the correct diagnosis are slowly becoming more prevalent throughout the medical field. Bone fractures are a relatively common occurrence. In most developed countries the number of fractures associated with age-related bone loss is increasing rapidly. Regardless of the treating physician's level of experience, accurate detection and evaluation of musculoskeletal trauma is often problematic. Each year, the presence of many fractures is missed during x-ray diagnosis. For a trauma patient, a mis-diagnosis can lead to ineffective patient management, increased dissatisfaction, and expensive litigation. As a result, detection of long-bone fractures is an important orthopaedic and radiologic problem, and it is proposed that a novel CAD system could help lower the miss rate. This thesis examines the development of such a system, for the detection of long-bone fractures. A number of image processing software algorithms useful for automating the fracture detection process have been created. The first algorithm is a non-linear scale-space smoothing technique that allows edge information to be extracted from the x-ray image. The degree of smoothing is controlled by the scale parameter, and allows the amount of image detail that should be retained to be adjusted for each stage of the analysis. The result is demonstrated to be superior to the Canny edge detection algorithm. The second utilises the edge information to determine a set of parameters that approximate the shaft of the long-bone. This is achieved using a modified Hough Transform, and specially designed peak and line endpoint detectors. The third stage uses the shaft approximation data to locate the bone centre-lines and then perform diaphysis segmentation to separate the diaphysis from the epiphyses. Two segmentation algorithms are presented and one is shown to not only produce better results, but also be suitable for application to all long-bone images. The final stage applies a gradient based fracture detection algorithm to the segmented regions. This algorithm utilises a tool called the gradient composite measure to identify abnormal regions, including fractures, within the image. These regions are then identified and highlighted if they are deemed to be part of a fracture. A database of fracture images from trauma patients was collected from the emergency department at the Flinders Medical Centre. From this complete set of images, a development set and test set were created. Experiments on the test set show that diaphysis segmentation and fracture detection are both performed with an accuracy of 83%. Therefore these tools can consistently identify the boundaries between the bone segments, and then accurately highlight midshaft long-bone fractures within the marked diaphysis. Two of the algorithms---the non-linear smoothing and Hough Transform---are relatively slow to compute. Methods of decreasing the diagnosis time were investigated, and a set of parallelised algorithms were designed. These algorithms significantly reduced the total calculation time, making use of the algorithm much more feasible. 
The thesis concludes with an outline of future research and proposed techniques that---along with the methods and results presented---will improve CAD systems for fracture detection, resulting in more accurate diagnosis of fractures, and a reduction of the fracture miss rate.
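
The shaft-approximation stage above uses a modified Hough transform on the extracted edges. A plain, unmodified OpenCV version of that step might look as follows; the Canny stage stands in for the thesis's non-linear scale-space edge detector, and the thresholds and number of returned lines are guesses.

import numpy as np
import cv2

def estimate_shaft_lines(xray_gray, n_lines=4):
    # Strongest straight lines in the edge map, as (rho, theta) pairs, used as a
    # crude approximation of the long-bone shaft boundaries.
    edges = cv2.Canny(xray_gray, 50, 150)
    found = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    if found is None:
        return []
    return [(float(rho), float(theta)) for rho, theta in found[:n_lines, 0]]
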
20

Fu, Guoyi. „Data driven low-level object detection and segmentation“. Thesis, University of Kent, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.498827.

21

Bruce, Jacob Robert. „Mathematical Expression Detection and Segmentation in Document Images“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/46724.

Annotation:
Various document layout analysis techniques are employed in order to enhance the accuracy of optical character recognition (OCR) in document images. Type-specific document layout analysis involves localizing and segmenting specific zones in an image so that they may be recognized by specialized OCR modules. Zones of interest include titles, headers/footers, paragraphs, images, mathematical expressions, chemical equations, musical notations, tables, circuit diagrams, among others. False positive/negative detections, oversegmentations, and undersegmentations made during the detection and segmentation stage will confuse a specialized OCR system and thus may result in garbled, incoherent output. In this work a mathematical expression detection and segmentation (MEDS) module is implemented and then thoroughly evaluated. The module is fully integrated with the open source OCR software, Tesseract, and is designed to function as a component of it. Evaluation is carried out on freely available public domain images so that future and existing techniques may be objectively compared.
Master of Science
22

Munnangi, Anirudh. „Innovative Segmentation Strategies for Melanoma Skin Cancer Detection“. University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1510916097483278.

23

Silva, Daniel Torres Couto Coimbra e. „LIDAR target detection and segmentation in road environment“. Master's thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/11774.

Annotation:
Master's degree in Mechanical Engineering
In this project a comparative and exhaustive evaluation of several 2D laser data segmentation algorithms in road scenarios is performed. In a first stage, the segmentation algorithms are implemented using the ROS programming environment; the algorithms are applied to the raw laser scan data in order to extract groups of measurement points which share similar spatial properties and that probably will belong to one single object. Each algorithm has at least one threshold condition parameter that is configurable, and one of the goals is to try to determine the best value of that parameter for road environments. The following stage was the definition of the Ground-truth where multiple laser scans were hand-labelled. The next step was the comparison between the Ground-truth and the segmentation algorithms in order to test their validity. With the purpose of having a quantitative evaluation of the methods' performance, six performance measures were created and compared.
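
A typical 2D laser segmentation algorithm of the kind evaluated here is the jump-distance (break-point) rule: start a new segment whenever consecutive scan points are farther apart than a threshold. A minimal sketch follows; the fixed threshold and the assumption of evenly spaced beams are simplifications, and the thesis compares several such conditions and parameter values.

import numpy as np

def segment_scan(ranges, angle_increment, jump_threshold=0.5):
    # Start a new segment whenever the Euclidean distance between consecutive
    # scan points exceeds the jump threshold (in metres).
    ranges = np.asarray(ranges, dtype=float)
    angles = np.arange(len(ranges)) * angle_increment
    xs, ys = ranges * np.cos(angles), ranges * np.sin(angles)
    segments, current = [], [0]
    for i in range(1, len(ranges)):
        if np.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1]) > jump_threshold:
            segments.append(current)
            current = [i]
        else:
            current.append(i)
    segments.append(current)
    return segments   # lists of point indices, one list per putative object
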
24

Stainton, John Joseph. „Detection of signatures of selection in commercial chicken lines“. Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/21057.

Annotation:
Within the last 100 years, commercial chickens have been split into two main groups. Broiler chickens are produced for meat production while layers are produced for egg production. This has caused large phenotypic changes, and the genomic signatures of selection may be detectable using statistical techniques. Genomic regions identified by these techniques may include genes associated with production traits, and are therefore of interest to animal breeders. This thesis investigates signatures of selection in a number of commercial chicken lines using several statistical techniques based on population differentiation and levels of genetic diversity. First, signatures of selection were investigated using population differentiation in nine lines of broiler chickens. Weir and Cockerham's pairwise FST was calculated for genome-wide markers between the broiler lines and averaged into overlapping sliding windows to remove stochastic effects. A chromosome-bound circular permutation method was used to generate a null distribution and determine the significance of each window. A total of 51 putative selection signatures were found shared between lines and 87 putative selection signatures were found to be unique to one line. The majority of these regions contain peak positions for broiler QTL found in previous studies and eight regions were significantly enriched for broiler QTL. One region located on chromosome 27 contained 39 broiler QTL and 114 genes, several of which were functional candidates for association with broiler traits. Secondly, areas of low diversity were investigated in three different SNP datasets. All three datasets were taken from the same broiler line at different time points and consisted of different SNP densities, including 12k, 42k and 600k. A number of zero diversity regions were found in each dataset and several were shared between the datasets. The 600k dataset was also analysed using a regression test, which investigates the patterns of diversity as the distance from the selected site increases. This method searches for signatures of selection by fitting a regression to the diversity data to test the fit of the data to the theoretical model. A total of 15 regions were found displaying significant asymptotic regression and diversity values less than 0.005. One of these regions located on chromosome 1 was also found as a fixed region in the 12k and 42k datasets and contained the gene IGF1, which encodes an important protein for growth. Finally, signatures of selection were investigated between broiler and layer datasets using population differentiation and diversity-based analyses. Weir and Cockerham's pairwise FST was calculated between the two lines and outliers extracted. A total of 32 regions were found displaying high differentiation. Seven regions of low diversity in the layer dataset were also investigated. Several broiler and layer QTL had been previously identified in these regions. Two genes related to hedgehog proteins, which are known to be involved in embryogenesis, were identified within selected regions. Finally, seven regions were found to be highly differentiated between the broiler and layer lines, and the nine broiler lines in the first chapter. This may indicate selection which occurred during breed separation. Signatures of selection were identified in four broiler and layer datasets using several statistical techniques.
A number of regions were identified in multiple datasets by a number of techniques and are therefore good candidate regions for selection. Other statistical techniques could be used in future studies to further confirm these regions and identify causative genes and variants.
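As an illustration of the sliding-window averaging and chromosome-bound circular permutation test described above, here is a minimal sketch (not from the thesis; the window size, step, permutation count and cut-off are assumptions):

```python
import numpy as np

def window_means(values, win=40, step=20):
    """Average per-SNP values in overlapping sliding windows."""
    starts = np.arange(0, len(values) - win + 1, step)
    return np.array([values[s:s + win].mean() for s in starts])

def circular_permutation_null(values, win=40, step=20, n_perm=1000, seed=0):
    """Null distribution of window means from chromosome-bound circular shifts."""
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_perm):
        shifted = np.roll(values, rng.integers(1, len(values)))
        null.append(window_means(shifted, win, step))
    return np.concatenate(null)

# Hypothetical per-SNP pairwise FST values for one chromosome.
fst = np.clip(np.random.default_rng(1).normal(0.05, 0.03, 5000), 0, 1)
obs = window_means(fst)
null = circular_permutation_null(fst)
threshold = np.quantile(null, 0.999)           # stringent empirical cut-off from the permutation null
candidate_windows = np.where(obs > threshold)[0]
print(len(candidate_windows), "windows exceed the permutation threshold")
```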
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Pont, Tuset Jordi. „Image segmentation evaluation and its application to object detection“. Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/134354.

Der volle Inhalt der Quelle
Annotation:
The first parts of this thesis focus on the study of the supervised evaluation of image segmentation algorithms: supervised in the sense that the segmentation results are compared to a human-made annotation, known as ground truth, by means of different measures of similarity. The evaluation depends, therefore, on three main points. First, the image segmentation techniques we evaluate: we review the state of the art in image segmentation, making an explicit distinction between techniques that provide a flat output, that is, a single clustering of the set of pixels into regions, and those that produce a hierarchical segmentation, that is, a tree-like structure that represents regions at different scales, from the details to the whole image. Second, ground-truth databases are of paramount importance in the evaluation. They can be divided into those annotated only at object level, that is, with marked sets of pixels that refer to objects that do not cover the whole image, and those with annotated full partitions, which provide a full clustering of all pixels in an image. Depending on the type of database, we say that the analysis is done from an object perspective or from a partition perspective. Finally, the similarity measures used to compare the generated results to the ground truth provide a quantitative tool to evaluate whether our results are good and in which way they can be improved. The main contributions of the first parts of the thesis are in the field of similarity measures. First of all, from an object perspective, we review the basic measures used to compare two object representations and show that some of them are equivalent. In order to evaluate full partitions and hierarchies against an object, one needs to select which of their regions form the object to be assessed. We review and improve these techniques by means of a mathematical model of the problem. This analysis allows us to show that hierarchies can represent objects much better, using far fewer regions, than flat partitions. From a partition perspective, the literature about evaluation measures is large and entangled. Our first contribution is to review, structure, and deduplicate the available measures. We provide a new measure that we show improves previous ones in terms of a set of qualitative and quantitative meta-measures. We also extend the measures for flat partitions to cover hierarchical segmentations. The second part of this thesis moves from the evaluation of image segmentation to its application to object detection. In particular, we build on some of the conclusions extracted in the first part to generate segmented object candidates. Given a set of hierarchies, we build pairs and triplets of regions, learn to combine the set from each hierarchy, and rank them using low-level and mid-level cues. We conduct an extensive experimental validation which shows that our method outperforms the state of the art on many of the metrics tested.
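As an illustration of the object-perspective evaluation described above, the following sketch (not from the thesis; the greedy selection strategy and the toy partition are assumptions) selects regions of a flat partition so that their union best represents an annotated object, and reports the achievable Jaccard index:

```python
import numpy as np

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def best_region_selection(partition, obj_mask):
    """Greedily pick partition regions whose union best matches the object mask."""
    labels = np.unique(partition)
    selected = np.zeros_like(obj_mask, dtype=bool)
    best, improved = 0.0, True
    while improved:
        improved = False
        for lab in labels:
            candidate = np.logical_or(selected, partition == lab)
            j = jaccard(candidate, obj_mask)
            if j > best:
                best, selected, improved = j, candidate, True
    return selected, best

# Toy example: a 4-region flat partition and a rectangular ground-truth object.
part = np.zeros((60, 60), dtype=int)
part[:30, 30:] = 1; part[30:, :30] = 2; part[30:, 30:] = 3
gt = np.zeros((60, 60), dtype=bool); gt[10:50, 35:55] = True
_, score = best_region_selection(part, gt)
print(f"best achievable Jaccard with this partition: {score:.2f}")
```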
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Pons, Rodríguez Gerard. „Computer-aided lesion detection and segmentation on breast ultrasound“. Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/129453.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with the detection, segmentation and classification of lesions on sonography. The contribution of the thesis is the development of a new Computer-Aided Diagnosis (CAD) framework capable of detecting, segmenting, and classifying breast abnormalities on sonography automatically. Firstly, an adaptation of a generic object detection method, Deformable Part Models (DPM), to detect lesions in sonography is proposed. The method uses a machine learning technique to learn a model based on Histogram of Oriented Gradients (HOG) features. This method is also used to detect cancer lesions directly, simplifying the traditional cancer detection pipeline. Secondly, different initialization proposals for reducing the human interaction in a lesion segmentation algorithm based on a Markov Random Field (MRF)-Maximum A Posteriori (MAP) framework are presented. Furthermore, an analysis of the influence of lesion type on the segmentation results is performed. Finally, the inclusion of elastography information in this segmentation framework is proposed, by modifying the algorithm to incorporate a bivariate formulation. The proposed methods for the different stages of the CAD framework are assessed using different datasets, and the results are compared with the most relevant methods in the state of the art.
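One way to reduce user interaction in a seed-based lesion segmenter is to propose the seed automatically. The sketch below is an illustrative assumption, not one of the initialization proposals evaluated in the thesis: it picks the centroid of the largest dark (hypoechoic) blob as a stand-in for a manual click.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def automatic_seed(bmode):
    """Propose a seed point inside the largest dark blob of a B-mode image."""
    dark = bmode < threshold_otsu(bmode)          # breast lesions are typically hypoechoic (dark)
    regions = regionprops(label(dark))
    if not regions:
        return None
    largest = max(regions, key=lambda r: r.area)
    r, c = largest.centroid
    return int(r), int(c)

# Toy image: bright speckle-like background with a darker elliptical "lesion".
rng = np.random.default_rng(0)
img = rng.normal(0.7, 0.05, (128, 128))
rr, cc = np.ogrid[:128, :128]
img[((rr - 64) / 20) ** 2 + ((cc - 70) / 30) ** 2 < 1] -= 0.4
print("proposed seed:", automatic_seed(img))
```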
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Dursun, Mustafa. „Road Detection By Mean Shift Segmentation And Structural Analysis“. Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614443/index.pdf.

Der volle Inhalt der Quelle
Annotation:
Road extraction from satellite or aerial images is a popular topic in remote sensing. Extracted road maps or networks can be used in various applications. Road maps are normally available in geographic information systems (GIS); however, this information is not produced automatically but is generally created with human assistance. Road extraction algorithms try to detect roads in satellite or aerial images with minimal human interaction. The aim of this thesis is to analyze a previously defined road extraction algorithm and to present alternatives and possible improvements to it. Both the baseline algorithm and the proposed alternative algorithms and steps are based on a mean-shift segmentation procedure. The proposed alternative methods are generally based on structural features of the image. Firstly, fundamental definitions of the applied algorithms and methods are explained, and mathematical definitions and visual examples are given for better understanding. Then, the chosen baseline algorithm and its alternatives are explained in detail. After the presentation of the alternative methods, experimental results and inferences obtained during the implementation and analysis of the mentioned algorithms and methods are presented.
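A minimal sketch of the mean-shift segmentation step on which the baseline and alternatives are based, using scikit-learn's MeanShift on joint colour-position pixel features (the feature weighting, bandwidth quantile and toy image are assumptions):

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def mean_shift_segment(image, spatial_weight=0.3, quantile=0.1, subsample=2000):
    """Cluster pixels in a joint (colour, position) feature space with mean shift."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[:h, :w]
    feats = np.column_stack([
        image.reshape(-1, 3),
        spatial_weight * yy.ravel() / h,
        spatial_weight * xx.ravel() / w,
    ])
    bw = estimate_bandwidth(feats, quantile=quantile, n_samples=subsample)
    ms = MeanShift(bandwidth=bw, bin_seeding=True).fit(feats)
    return ms.labels_.reshape(h, w)

# Toy aerial-like image: a grey "road" stripe across green terrain.
img = np.zeros((40, 40, 3)); img[...] = [0.2, 0.5, 0.2]
img[18:23, :] = [0.5, 0.5, 0.5]
labels = mean_shift_segment(img)
print("number of segments:", len(np.unique(labels)))
```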
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Zhai, Yun. „VIDEO CONTENT EXTRACTION: SCENE SEGMENTATION, LINKING AND ATTENTION DETECTION“. Master's thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4007.

Der volle Inhalt der Quelle
Annotation:
In this fast-paced digital age, a vast amount of video is produced every day, such as movies, TV programs, personal home videos, surveillance video, etc. This creates a high demand for effective video data analysis and management techniques. In this dissertation, we have developed new techniques for segmentation, linking and understanding of video scenes. Firstly, we have developed a video scene segmentation framework that segments the video content into story units. Then, a linking method is designed to find the semantic correlation between video scenes/stories. Finally, to better understand the video content, we have developed a spatiotemporal attention detection model for videos. Our general framework for temporal scene segmentation, which is applicable to several video domains, is formulated in a statistical fashion and uses the Markov chain Monte Carlo (MCMC) technique to determine the boundaries between video scenes. In this approach, a set of arbitrary scene boundaries is initialized at random locations and then automatically updated using two types of updates: diffusion and jumps. The posterior probability of the target distribution over the number of scenes and their corresponding boundary locations is computed based on the model priors and the data likelihood. Model parameter updates are controlled by the MCMC hypothesis ratio test, and samples are collected to generate the final scene boundaries. The major contribution of the proposed framework is two-fold: (1) it is able to find weak boundaries as well as strong boundaries, i.e., it does not rely on a fixed threshold; (2) it can be applied to different video domains. We have tested the proposed method on two video domains: home videos and feature films. On both of these domains we have obtained very accurate results, achieving on average 86% precision and 92% recall for home video segmentation, and 83% precision and 83% recall for feature films. The video scene segmentation process divides videos into meaningful units. These segments (or stories) can be further organized into clusters based on their content similarities. In the second part of this dissertation, we have developed a novel concept tracking method, which links news stories that focus on the same topic across multiple sources. The semantic linkage between the news stories is reflected in the combination of both their visual content and speech content. Visually, each news story is represented by a set of key frames, which may or may not contain human faces. The facial key frames are linked based on the analysis of the extended facial regions, and the non-facial key frames are correlated using global matching. The textual similarity of the stories is expressed in terms of the normalized textual similarity between the keywords in the speech content of the stories. The developed framework has also been applied to the task of story ranking, which computes the interestingness of the stories. The proposed semantic linking framework and the story ranking method have both been tested on a set of 60 hours of open-benchmark video data (CNN and ABC news) from the TRECVID 2003 evaluation forum organized by NIST. Above 90% system precision has been achieved for the story linking task. The combination of both visual and speech cues has boosted the un-normalized recall by 15%. We have developed PEGASUS, a content-based video retrieval system with fast speech and visual feature indexing and search.
The system is available on the web: http://pegasus.cs.ucf.edu:8080/index.jsp. Given a video sequence, one important task is to understand what is present or what is happening in its content. To achieve this goal, target objects or activities need to be detected, localized and recognized in the spatial and/or temporal domain. In the last portion of this dissertation, we present a visual attention detection method, which automatically generates the spatiotemporal saliency maps of input video sequences. The saliency map is later used in the detection of interesting objects and activities in videos by significantly narrowing the search range. Our spatiotemporal visual attention model generates the saliency maps based on both the spatial and temporal signals in the video sequences. In the temporal attention model, motion contrast is computed based on the planar motions (homography) between images, which are estimated by applying RANSAC on point correspondences in the scene. To compensate for the non-uniformity of the spatial distribution of interest points, spanning areas of motion segments are incorporated in the motion contrast computation. In the spatial attention model, we have developed a fast method for computing pixel-level saliency maps using color histograms of images. Finally, a dynamic fusion technique is applied to combine both the temporal and spatial saliency maps, where temporal attention is dominant over the spatial model when large motion contrast exists, and vice versa. The proposed spatiotemporal attention framework has been extensively applied on multiple video sequences to highlight interesting objects and motions present in the sequences. We have achieved an 82% user satisfaction rate for point-level attention detection and over a 92% user satisfaction rate for object-level attention detection.
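As an illustration of the histogram-based pixel-level saliency idea mentioned above, here is a minimal grayscale sketch (not the dissertation's implementation; the toy frame and the normalization are assumptions), where a pixel's saliency is its summed intensity distance to all other pixels, computed efficiently through the image histogram:

```python
import numpy as np

def histogram_saliency(gray_u8):
    """Pixel saliency = summed intensity distance to all other pixels,
    computed in O(levels^2) via the image histogram instead of O(pixels^2)."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    # for each intensity v: sum_u |v - u| * hist[u]
    dist = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = dist[gray_u8]
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)

# Toy frame: mid-grey background with a small bright patch that should pop out.
frame = np.full((80, 120), 100, dtype=np.uint8)
frame[30:40, 50:70] = 220
saliency = histogram_saliency(frame)
print("mean saliency inside patch :", saliency[30:40, 50:70].mean().round(2))
print("mean saliency of background:", saliency[frame == 100].mean().round(2))
```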
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Schwarz, Christopher. „Detection, Segmentation, and Pose Recognition of Hands in Images“. Honors in the Major Thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1004.

Der volle Inhalt der Quelle
Bachelors
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Riste-Smith, Robert. „Edge detection and knowledge based segmentation of medical radiographs“. Thesis, University of Portsmouth, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303486.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Schofield, Andrew John. „Neural network models for texture segmentation and target detection“. Thesis, Keele University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358048.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

CAMPOS, VANESSA DE OLIVEIRA. „MULTICRITERION SEGMENTATION FOR LUNG NODULE DETECTION IN COMPUTED TOMOGRAPHY“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=16423@1.

Der volle Inhalt der Quelle
Annotation:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
This study proposes a novel segmentation algorithm for lung nodule detection in thorax computed tomography (CT). In order to decide, at each iteration, whether two adjacent objects should be merged or not, a region growing procedure calculates a heterogeneity index based on multiple criteria. However, the segmentation algorithm depends on several parameters, which were found using a genetic algorithm. Results produced by the proposed segmentation were closely consistent with the reference segments provided manually by an expert physician. The detection itself achieved 80.9% sensitivity with 0.23 false positives per slice, which indicates that the proposed method is able to provide a good suggestion for the specialist. The results indicate the potential of the proposed segmentation method and encourage further investigation aimed at improving its accuracy.
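A minimal sketch of the kind of multi-criteria merge decision described above (the criteria, weights and threshold here are illustrative assumptions; in the thesis the parameters were tuned with a genetic algorithm):

```python
import numpy as np

def heterogeneity(region_a, region_b, w_intensity=0.7, w_size=0.3):
    """Multi-criteria heterogeneity of fusing two regions (given as voxel intensity lists)."""
    a, b = np.asarray(region_a, float), np.asarray(region_b, float)
    intensity_term = abs(a.mean() - b.mean()) / 255.0
    size_term = min(len(a), len(b)) / (len(a) + len(b))
    return w_intensity * intensity_term + w_size * size_term

def merge_once(regions, adjacency, threshold=0.25):
    """Fuse the adjacent pair with the lowest heterogeneity, if it is below the threshold."""
    i, j = min(adjacency, key=lambda ab: heterogeneity(regions[ab[0]], regions[ab[1]]))
    if heterogeneity(regions[i], regions[j]) >= threshold:
        return regions, adjacency                     # nothing left to merge
    regions[i] = regions[i] + regions.pop(j)
    adjacency = [(a if a != j else i, b if b != j else i)
                 for a, b in adjacency if {a, b} != {i, j}]
    return regions, adjacency

# Toy example: three regions; the two with similar mean intensity should fuse first.
regions = {0: [30, 32, 31], 1: [33, 35, 34], 2: [200, 210, 205]}
adjacency = [(0, 1), (1, 2)]
regions, adjacency = merge_once(regions, adjacency)
print({k: round(float(np.mean(v)), 1) for k, v in regions.items()}, adjacency)
```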
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Espis, Andrea. „Object detection and semantic segmentation for assisted data labeling“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Den vollen Inhalt der Quelle finden
Annotation:
The automation of data labeling tasks is a solution to the errors and time costs related to human labeling. In this thesis work, CenterNet, DeepLabV3, and K-Means applied to the RGB color space are deployed to build a pipeline for assisted data labeling: a semi-automatic process to iteratively improve the quality of the annotations. The proposed pipeline pointed out a total of 1,547 wrong and missing annotations when applied to a dataset originally containing 8,300 annotations. Moreover, the quality of each annotation was drastically improved and, at the same time, more than 600 hours of work were saved. The same models have also been used to address a real-time tire inspection task concerning the detection of markers on the surface of tires. According to the experiments, the combination of the DeepLabV3 output with post-processing based on the area and shape of the predicted blobs achieves a maximum mean precision of 0.992 (with mean recall 0.982) and a maximum mean recall of 0.998 (with mean precision 0.960).
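Two of the building blocks mentioned above, K-Means clustering in RGB space and blob filtering by area, can be sketched as follows (the cluster count, area limits and toy image are assumptions, not the thesis's settings):

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

def rgb_kmeans_mask(image, n_clusters=2, target_cluster=None):
    """Cluster pixels in RGB space and return a binary mask for one cluster."""
    h, w, _ = image.shape
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(image.reshape(-1, 3)).reshape(h, w)
    if target_cluster is None:                      # default: pick the brightest cluster
        target_cluster = max(range(n_clusters), key=lambda k: image[labels == k].mean())
    return labels == target_cluster

def filter_blobs_by_area(mask, min_area=20, max_area=5000):
    """Keep only connected components whose area is plausible for a marker."""
    lab, n = ndimage.label(mask)
    areas = ndimage.sum(mask, lab, index=range(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if min_area <= a <= max_area]
    return np.isin(lab, keep)

# Toy tire-like image: dark rubber with two bright markers, one too small to keep.
img = np.full((60, 60, 3), 0.1); img[20:30, 10:25] = 0.9; img[5:7, 50:52] = 0.9
mask = filter_blobs_by_area(rgb_kmeans_mask(img))
print("marker pixels kept:", int(mask.sum()))
```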
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Ghadiri, Farnoosh. „Human shape modelling for carried object detection and segmentation“. Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30948.

Der volle Inhalt der Quelle
Annotation:
Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. This thesis presents novel methods to detect and segment carried objects in surveillance videos. The contributions are divided into three main chapters. In the first, we introduce our carried object detector, which detects a generic class of objects. We formulate carried object detection in terms of a contour classification problem. We classify moving object contours into two classes: carried object and person. A probability mask for a person's contours is generated based on an ensemble of contour exemplars (ECE) of walking/standing humans in different viewing directions. Contours that do not fall within the generated hypothesis mask are considered candidates for carried object contours. Then, a region is assigned to each carried object candidate contour using Biased Normalized Cut (BNC), with a probability obtained by a weighted function of its overlap with the person's contour hypothesis mask and the segmented foreground. Finally, carried objects are detected by applying a Non-Maximum Suppression (NMS) method which eliminates the low-score carried object candidates. The second contribution presents an approach to detect carried objects with an innovative method for extracting features from foreground regions based on their local contours and superpixel information. Initially, a moving object in a video frame is segmented into multi-scale superpixels. Then, human-like regions in the foreground area are identified by matching a set of features extracted from superpixels against a codebook of local shapes. Here, the definition of human-like regions is equivalent to the person's probability map in our first proposed method (ECE). Our second carried object detector benefits from the novel feature descriptor to produce a more accurate probability map. The complement of the superpixels' matching probabilities to human-like regions in the foreground is considered as a carried object probability map. At the end, each group of neighboring superpixels with a high carried object probability and strong edge support is merged to form a carried object. Finally, in the third contribution we present a method to detect and segment carried objects. The proposed method adopts the new superpixel-based descriptor to identify carried-object-like candidate regions using human shape modeling. Using the spatio-temporal information of the candidate regions, the consistency of recurring carried object candidates viewed over time is obtained and serves to detect carried objects. Last, the detected carried object regions are refined by integrating information about their appearance and their locations over time with a spatio-temporal extension of GrabCut. This final stage is used to accurately segment carried objects in frames. Our methods are fully automatic and make minimal assumptions about the person, the carried objects and the videos. We evaluate the aforementioned methods using two available datasets, PETS 2006 and i-Lids AVSS, and compare our detector and segmentation methods against a state-of-the-art detector. Experimental evaluation on the two datasets demonstrates that both our carried object detection and segmentation methods significantly outperform competing algorithms.
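The final NMS step mentioned above can be sketched as follows (the box format, scores and thresholds are illustrative assumptions):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def non_maximum_suppression(boxes, scores, iou_thresh=0.3, score_thresh=0.2):
    """Greedy NMS: keep the best-scoring candidates, drop overlapping weaker ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if scores[i] < score_thresh:
            break                                   # remaining candidates score too low
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(int(i))
    return keep

# Toy carried-object candidates: two overlapping hypotheses and one weak one.
boxes = np.array([[10, 10, 40, 60], [12, 12, 42, 62], [80, 20, 95, 50]])
scores = np.array([0.9, 0.6, 0.1])
print("kept candidate indices:", non_maximum_suppression(boxes, scores))
```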
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Marte, Otto-Carl. „Model driven segmentation and the detection of bone fractures“. Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/6414.

Der volle Inhalt der Quelle
Annotation:
Bibliography: leaves 83-90.
The introduction of lower-dosage image acquisition devices and the increase in computational power mean that there is an increased focus on producing diagnostic aids for the medical trauma environment. The focus of this research is to explore whether geometric criteria can be used to detect bone fractures from Computed Tomography data. Conventional image processing of CT data is aimed at the production of simple iso-surfaces for surgical planning or diagnosis; such methods are not suitable for the automated detection of fractures. Our hypothesis is that, through a model-based technique, a triangulated surface representing the bone can be speedily and accurately produced, and that there is sufficient structural information present that, by examining the geometric structure of this representation, we can accurately detect bone fractures. In this dissertation we describe the algorithms and framework that we built to facilitate the detection of bone fractures and evaluate the validity of our approach.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Lo, Wing Sze. „Statistics-based Chinese word segmentation and new word detection /“. View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20LOW.

Der volle Inhalt der Quelle
Annotation:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 83-86). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Ullah, Habib. „Crowd Motion Analysis: Segmentation, Anomaly Detection, and Behavior Classification“. Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/369001.

Der volle Inhalt der Quelle
Annotation:
The objective of this doctoral study is to develop efficient techniques for flow segmentation, anomaly detection, and behavior classification in crowd scenes. Considering the complexities of occlusion, we focused our study on gathering the motion information at a higher scale, thus not associating it with single objects but considering the crowd as a single entity. Firstly, we propose methods for flow segmentation based on correlation features, graph cut, Conditional Random Fields (CRF), an enthalpy model, and a particle mutual influence model. Secondly, methods based on deviant orientation information, a Gaussian Mixture Model (GMM), and an MLP neural network combined with GoodFeaturesToTrack are proposed to detect two types of anomalies. The first detects deviant motion of pedestrians compared to what has been observed beforehand. The second detects panic situations by adopting the GMM and MLP to learn the behavior of the motion features extracted from a grid of particles and from GoodFeaturesToTrack, respectively. Finally, we propose particle-driven and hybrid approaches to classify the behaviors of a crowd in terms of lane, arch/ring, bottleneck, blocking and fountainhead within a region of interest (ROI). For this purpose, the particle-driven approach extracts and fuses spatio-temporal features. The spatial features represent the density of neighboring particles in a predefined proximity, whereas the temporal features represent the rendering of trajectories traveled by the particles. The hybrid approach exploits a thermal diffusion process combined with an extended variant of the social force model (SFM).
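The GMM-based anomaly detection idea described above can be sketched as follows (the motion features, component count and likelihood threshold are illustrative assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical motion features (e.g. flow magnitude, flow orientation) from normal frames.
normal = np.column_stack([rng.normal(1.0, 0.2, 2000),        # typical walking speed
                          rng.normal(0.0, 0.3, 2000)])       # dominant direction ~0 rad

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(normal)
threshold = np.quantile(gmm.score_samples(normal), 0.01)      # 1% lowest likelihood on normal data

# New observations: one normal particle, one moving fast against the flow (panic-like).
new = np.array([[1.1, 0.1],
                [4.0, 3.0]])
loglik = gmm.score_samples(new)
print("anomalous:", loglik < threshold)          # expected: [False  True]
```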
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Ullah, Habib. „Crowd Motion Analysis: Segmentation, Anomaly Detection, and Behavior Classification“. Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1406/1/PhD_Thesis_Habib.pdf.

Der volle Inhalt der Quelle
Annotation:
The objective of this doctoral study is to develop efficient techniques for flow segmentation, anomaly detection, and behavior classification in crowd scenes. Considering the complexities of occlusion, we focused our study on gathering the motion information at a higher scale, thus not associating it with single objects but considering the crowd as a single entity. Firstly, we propose methods for flow segmentation based on correlation features, graph cut, Conditional Random Fields (CRF), an enthalpy model, and a particle mutual influence model. Secondly, methods based on deviant orientation information, a Gaussian Mixture Model (GMM), and an MLP neural network combined with GoodFeaturesToTrack are proposed to detect two types of anomalies. The first detects deviant motion of pedestrians compared to what has been observed beforehand. The second detects panic situations by adopting the GMM and MLP to learn the behavior of the motion features extracted from a grid of particles and from GoodFeaturesToTrack, respectively. Finally, we propose particle-driven and hybrid approaches to classify the behaviors of a crowd in terms of lane, arch/ring, bottleneck, blocking and fountainhead within a region of interest (ROI). For this purpose, the particle-driven approach extracts and fuses spatio-temporal features. The spatial features represent the density of neighboring particles in a predefined proximity, whereas the temporal features represent the rendering of trajectories traveled by the particles. The hybrid approach exploits a thermal diffusion process combined with an extended variant of the social force model (SFM).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Lung-Yut-Fong, Alexandre. „Evaluation of Kernel Methods for Change Detection and Segmentation : Application to Audio Onset Detection“. Thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-98274.

Der volle Inhalt der Quelle
Annotation:

Finding changes in a signal is a pervasive topic in signal processing. Through the example of audio onset detection, to which we apply an online framework, we evaluate the ability of a class of machine learning techniques to solve this task. The goal of this thesis is to review and evaluate some kernel methods for the two-sample problem (one-class Support Vector Machine, Maximum Mean Discrepancy and Kernel Fisher Discriminant Analysis) on the change detection task, by benchmarking our proposed framework on a set of annotated audio files so that our results can be compared to the ground-truth onset times.
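As an illustration of one of the kernel two-sample statistics named above, here is a minimal sketch of Maximum Mean Discrepancy with an RBF kernel, used as a change score over a sliding window (the toy signal, kernel bandwidth and window size are assumptions):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y (rows = observations)."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

# Toy 1-D "audio feature" stream with a change in level at t = 300.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 300)])[:, None]
win = 50
scores = [mmd2_biased(signal[t - win:t], signal[t:t + win])
          for t in range(win, len(signal) - win)]
print("detected change near t =", win + int(np.argmax(scores)))   # expected: around 300
```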

APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Sarkaar, Ajit Bhikamsingh. „Addressing Occlusion in Panoptic Segmentation“. Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/101988.

Der volle Inhalt der Quelle
Annotation:
Visual recognition tasks have witnessed vast improvements in performance since the advent of deep learning. Despite the gains in performance, image understanding algorithms are still not completely robust to partial occlusion. In this work, we propose a novel object classification method based on compositional modeling and explore its effect in the context of the newly introduced panoptic segmentation task. The panoptic segmentation task combines both semantic and instance segmentation to perform labelling of the entire image. The novel classification method replaces the object detection pipeline in UPSNet, a Mask R-CNN based design for panoptic segmentation. We also discuss an issue with the segmentation mask prediction of Mask R-CNN that affects overlapping instances. We perform extensive experiments and showcase results on the complex COCO and Cityscapes datasets. The novel classification method shows promising results for object classification on occluded instances in complex scenes.
Visual recognition tasks have witnessed vast improvements in performance since the advent of deep learning. Despite making significant improvements, algorithms for these tasks still do not perform well at recognizing partially visible objects in the scene. In this work, we propose a novel object classification method that uses compositional models to perform part based detection. The method first looks at individual parts of an object in the scene and then makes a decision about its identity. We test the proposed method in the context of the recently introduced panoptic segmentation task. The panoptic segmentation task combines both semantic and instance segmentation to perform labelling of the entire image. The novel classification method replaces the object detection module in UPSNet, a Mask R-CNN based algorithm for panoptic segmentation. We also discuss an issue with the segmentation mask prediction of Mask R-CNN that affects overlapping instances. After performing extensive experiments and evaluation, it can be seen that the novel classification method shows promising results for object classification on occluded instances in complex scenes.
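As an illustration of how semantic and instance outputs are combined in panoptic segmentation, and of one way to resolve overlapping instances, here is a generic merge sketch (not UPSNet's actual panoptic head; the labels, id offset and toy maps are assumptions):

```python
import numpy as np

def merge_panoptic(semantic, instance_masks, instance_scores, thing_offset=1000):
    """Combine a semantic map ('stuff') with scored instance masks ('things') into one
    panoptic label map; higher-scoring instances claim overlapping pixels first."""
    panoptic = semantic.copy()                      # start from the stuff labels
    claimed = np.zeros(semantic.shape, dtype=bool)
    for inst_id in np.argsort(instance_scores)[::-1]:
        free = instance_masks[inst_id] & ~claimed   # pixels not taken by a stronger instance
        panoptic[free] = thing_offset + inst_id
        claimed |= free
    return panoptic

# Toy example: sky/road semantic map plus two overlapping "car" instances.
semantic = np.zeros((6, 10), dtype=int); semantic[3:] = 1          # 0 = sky, 1 = road
car_a = np.zeros_like(semantic, dtype=bool); car_a[2:5, 1:5] = True
car_b = np.zeros_like(semantic, dtype=bool); car_b[2:5, 3:8] = True
panoptic = merge_panoptic(semantic, [car_a, car_b], instance_scores=[0.9, 0.7])
print(np.unique(panoptic))      # stuff labels 0, 1 and instance ids 1000, 1001
```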
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Feng, Sitao. „Evaluation of Red Colour Segmentation Algorithms in Traffic Signs Detection“. Thesis, Högskolan Dalarna, Datateknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:du-5806.

Der volle Inhalt der Quelle
Annotation:
Colour segmentation is the most commonly used method in road sign detection. Road signs contain several basic colours, such as red, yellow, blue and white, depending on the country. The objective of this thesis is to evaluate four colour segmentation algorithms: the Dynamic Threshold Algorithm, a modification of de la Escalera's algorithm, the Fuzzy Colour Segmentation Algorithm and the Shadow and Highlight Invariant Algorithm. Processing time and segmentation success rate are used as criteria to compare the performance of the four algorithms, and red is selected as the target colour for the comparison. All test images are selected randomly, according to category, from the Traffic Signs Database of Dalarna University [1]. These road sign images were taken with a digital camera mounted in a moving car in Sweden. Experiments show that the Fuzzy Colour Segmentation Algorithm and the Shadow and Highlight Invariant Algorithm are more accurate and stable in detecting the red colour of road signs. The method could also be used in research analysing other colours; for an evaluation of the four algorithms on yellow, see the Master's thesis of Yumei Liu.
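A minimal sketch of what a red-colour segmentation step and its timing measurement can look like (the HSV thresholds and toy images are assumptions, not those of the four algorithms compared in the thesis):

```python
import time
import numpy as np
from skimage.color import rgb2hsv

def red_mask(rgb):
    """Red pixels in HSV: hue near the 0/1 wrap-around, with enough saturation and value."""
    hsv = rgb2hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h < 0.05) | (h > 0.95)) & (s > 0.4) & (v > 0.2)

# Toy images: one with a red disc standing in for a sign, one without any red.
yy, xx = np.ogrid[:100, :100]
sign = np.full((100, 100, 3), 0.6)
sign[(yy - 50) ** 2 + (xx - 50) ** 2 < 400] = [0.8, 0.1, 0.1]
empty = np.full((100, 100, 3), 0.6)

start = time.perf_counter()
fractions = {name: float(red_mask(img).mean()) for name, img in {"sign": sign, "empty": empty}.items()}
elapsed_ms = 1000 * (time.perf_counter() - start)
print(f"{elapsed_ms:.1f} ms for 2 images; red pixel fraction per image:", fractions)
```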
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Losch, Max. „Detection and Segmentation of Brain Metastases with Deep Convolutional Networks“. Thesis, KTH, Datorseende och robotik, CVAP, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173519.

Der volle Inhalt der Quelle
Annotation:
As deep convolutional networks (ConvNets) reach spectacular results on a multitude of computer vision tasks and perform almost as well as a human rater on the task of segmenting gliomas in the brain, I investigated their applicability to detecting and segmenting brain metastases. I trained networks of increasing depth to improve the detection rate and introduced a border-pair scheme to reduce oversegmentation. A constraint on the time for segmenting a complete brain scan required the use of fully convolutional networks, which reduced the time from 90 minutes to 40 seconds. Despite some noise and label errors present in the 490 full-brain MRI scans, the final network achieves a true positive rate of 82.8% and 0.05 misclassifications per slice, with all lesions greater than 3 mm having a perfect detection score. This work indicates that ConvNets are a suitable approach to both detect and segment metastases, especially as further architectural extensions might improve the predictive performance even more.
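A minimal sketch of why a fully convolutional network gives this kind of speed-up: with no dense layers, a whole slice is processed in one forward pass instead of one pass per pixel patch. The architecture below is an illustrative assumption, not the network trained in the thesis.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Fully convolutional: no dense layers, so it accepts any slice size
    and produces a per-pixel lesion probability map in a single forward pass."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(32, 1, kernel_size=1)   # 1x1 conv replaces a dense layer

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x)))

# One 256x256 MRI slice (batch of 1, single channel) -> probability map of the same size.
model = TinyFCN()
with torch.no_grad():
    prob = model(torch.randn(1, 1, 256, 256))
print(prob.shape)   # torch.Size([1, 1, 256, 256])
```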
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Ott, Patrick. „Segmentation features, visibility modeling and shared parts for object detection“. Thesis, University of Leeds, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.581947.

Der volle Inhalt der Quelle
Annotation:
This thesis investigates the problem of object localization in still images and is separated into three individual parts. The first part proposes a new set of feature descriptors, motivated by the problem of pedestrian detection. Sliding window classifiers, notably those using the Histogram of Oriented Gradients (HOG) features proposed by Dalal & Triggs, are the state of the art for this task, and we base our method on this approach. We propose a novel feature extraction scheme which computes implicit 'soft segmentations' of image regions into foreground/background. The method yields stronger object/background edges than gray-scale gradient alone, suppresses textural and shading variations, and captures local coherence of object appearance. The main contributions of this part are: (i) incorporation of segmentation cues into object detection; (ii) integration with classifier learning, cf. a post-processing filter; and (iii) high computational efficiency. The second part of the thesis considers deformable part-based models (DPM) as proposed by Felzenszwalb et al. These models have demonstrated state-of-the-art results in object localization and offer a high degree of learnt invariance by utilizing viewpoint-dependent mixture components and movable parts in each mixture component. One might hope to increase the accuracy of the DPM by increasing the number of mixture components and parts to give a more faithful model, but limited training data prevents this from being effective. We propose an extension to the DPM which allows for sharing of object part models among multiple mixture components as well as object classes. This results in more compact models and allows training examples to be shared by multiple components, ameliorating the effect of a limited-size training set. We (i) reformulate the DPM to incorporate part sharing, and (ii) propose a novel energy function allowing for coupled training of mixture components and object classes. An 'elephant in the room' for most current methods is the lack of explicit modeling of partial visibility due to occlusion by other objects or truncation by the image boundary. In the third part of this thesis, we propose a method which explicitly models partial visibility by treating it as a latent variable. As a second contribution we propose a novel non-maximum suppression scheme which takes into account partial visibility of objects while, in contrast to other methods, providing a globally optimal solution. Our method gives more detailed scene interpretations, in that we are able to identify the visible parts of an object. We evaluate all methods on the PASCAL VOC 2010 dataset. In addition, we report state-of-the-art results on the INRIA Person pedestrian detection dataset for the first part, considerably exceeding those of the original HOG detector.
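A minimal sketch of the HOG + linear-SVM sliding-window baseline that the first part builds on (the proposed soft-segmentation features would be concatenated to the HOG vector); the training windows, window size and parameters here are stand-in assumptions:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = (128, 64)   # rows, cols of the detection window (pedestrian-shaped)

def hog_vector(window):
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Hypothetical training set: random "positive" and "negative" windows stand in for
# cropped pedestrians and background patches.
rng = np.random.default_rng(0)
pos = [rng.random(WIN) for _ in range(20)]
neg = [rng.random(WIN) for _ in range(20)]
X = np.array([hog_vector(w) for w in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LinearSVC(C=0.01, max_iter=5000).fit(X, y)

def sliding_window_scores(image, stride=16):
    """Score every window position; in a real detector NMS would follow."""
    scores = []
    for r in range(0, image.shape[0] - WIN[0] + 1, stride):
        for c in range(0, image.shape[1] - WIN[1] + 1, stride):
            feat = hog_vector(image[r:r + WIN[0], c:c + WIN[1]])
            scores.append((clf.decision_function([feat])[0], r, c))
    return sorted(scores, reverse=True)

print(sliding_window_scores(rng.random((160, 96)))[:3])   # top-3 (score, row, col) windows
```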
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Liang, Kung-Hao. „From uncertainty to adaptivity : multiscale edge detection and image segmentation“. Thesis, University of Warwick, 1997. http://wrap.warwick.ac.uk/57578/.

Der volle Inhalt der Quelle
Annotation:
This thesis presents research on two different tasks in computer vision: edge detection and image segmentation (including texture segmentation and motion field segmentation). The central issue of this thesis is the uncertainty of joint space-frequency image analysis, which motivates the design of adaptive multiscale/multiresolution schemes for edge detection and image segmentation. Edge detectors capture most of the local features in an image, including the object boundaries and the details of surface textures. Apart from these edge features, the region properties of surface textures and motion fields are also important for segmenting an image into disjoint regions. The major theoretical achievements of this thesis are twofold. First, a scale parameter for the local processing of an image (e.g. edge detection) is proposed. The corresponding edge behaviour in scale space, referred to as Bounded Diffusion, is the basis of a multiscale edge detector where the scale is adjusted adaptively according to the local noise level. Second, an adaptive multiresolution clustering scheme is proposed for texture segmentation (referred to as Texture Focusing) and motion field segmentation. In this scheme, the central regions of homogeneous textures (motion fields) are analysed using coarse resolutions so as to achieve a better estimation of the textural content (optical flow), and the border region of a texture (motion field) is analysed using fine resolutions so as to achieve a better estimation of the boundary between textures (moving objects). Both of the above achievements are logical consequences of the uncertainty principle. Four algorithms, including a roof edge detector, a multiscale step edge detector, a texture segmentation scheme and a motion field segmentation scheme, are proposed to address various aspects of edge detection and image segmentation. These algorithms have been implemented and extensively evaluated.
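A generic sketch of the idea of a multiscale edge detector whose scale adapts to the noise level (this illustrates the general principle only, not the Bounded Diffusion scheme itself; the scales, threshold rule and toy image are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def multiscale_edges(image, sigmas=(1, 2, 4), noise_sigma=0.05, k=3.0):
    """Edge map using, per pixel, the finest Gaussian scale whose gradient response
    exceeds a noise-dependent threshold; coarser scales fill in the rest."""
    edges = np.zeros(image.shape, dtype=bool)
    decided = np.zeros(image.shape, dtype=bool)
    for s in sigmas:                                   # fine to coarse
        grad = gaussian_gradient_magnitude(image, sigma=s)
        thresh = k * noise_sigma / s                   # heuristic: smoothing suppresses noise, so lower the bar at coarse scales
        confident = grad > thresh
        edges |= confident & ~decided
        decided |= confident
    return edges

# Toy image: a vertical step edge plus additive noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += rng.normal(0, 0.05, img.shape)
edge_map = multiscale_edges(img)
print("edge pixels found near the step:", int(edge_map[:, 30:34].sum()))
```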
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Christogiannopoulos, Georgios. „Detection, segmentation and tracking of moving individuals in cluttered scenes“. Thesis, University of Sussex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444374.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Wang, Chen. „2D object detection and semantic segmentation in the Carla simulator“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291337.

Der volle Inhalt der Quelle
Annotation:
The subject of self-driving car technology has drawn growing interest in recent years. Many companies, such as Baidu and Tesla, have already introduced automatic driving techniques in their newest cars for driving in specific areas. However, there are still many challenges ahead on the way to fully autonomous driving cars. Tesla cars have caused several severe accidents while autonomous driving functions were in use, which makes the public doubt self-driving car technology. Therefore, it is necessary to use a simulator environment to help verify and perfect algorithms for the perception, planning, and decision-making of autonomous vehicles before implementation in real-world cars. This project aims to build a benchmark for implementing the whole self-driving car system in software. There are three main components in the entire autonomous driving system: perception, planning, and control. This thesis focuses on two sub-tasks in the perception part: 2D object detection and semantic segmentation. All of the experiments are tested in a simulator environment called CAR Learning to Act (Carla), which is an open-source platform for autonomous car research. The Carla simulator is developed on top of the Unreal Engine 4 game engine and has a server-client architecture, which provides a flexible Python API. 2D object detection uses the You Only Look Once (Yolov4) algorithm, which incorporates the tricks of the latest deep learning techniques, in terms of network structure and data augmentation, to strengthen the network's ability to learn objects. Yolov4 achieves higher accuracy and shorter inference time when compared with other popular object detection algorithms. Semantic segmentation uses Efficient Networks for Computer Vision (ESPnetv2), a light-weight and power-efficient network which achieves the same performance as other semantic segmentation algorithms while using fewer network parameters and FLOPs. In this project, Yolov4 and ESPnetv2 are implemented in the Carla simulator, and the two modules work together to help the autonomous car understand the world. A minimal distance awareness application is implemented in the Carla simulator to estimate the distance to the vehicles ahead; this application can be used as a basic function for collision avoidance. Experiments are run on a single Nvidia GPU (RTX 2060) on an Ubuntu 18 system.
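One simple way such a minimal distance-awareness function can estimate range from a 2D detection is the pinhole camera model with an assumed real vehicle width. The sketch below is illustrative and independent of the Carla API; the focal length, assumed width and box format are assumptions.

```python
def distance_from_bbox(box_width_px, focal_length_px=800.0, real_width_m=1.8):
    """Pinhole model: distance = focal_length * real_width / width_in_pixels."""
    return focal_length_px * real_width_m / box_width_px

def warn_if_too_close(detections, min_distance_m=10.0):
    """detections: list of (label, x1, y1, x2, y2) boxes from the 2D detector."""
    warnings = []
    for label, x1, y1, x2, y2 in detections:
        if label != "vehicle":
            continue
        d = distance_from_bbox(x2 - x1)
        if d < min_distance_m:
            warnings.append((label, round(d, 1)))
    return warnings

# Hypothetical detector output for one frame: a near and a far vehicle.
frame_detections = [("vehicle", 300, 200, 500, 330),   # 200 px wide -> ~7.2 m
                    ("vehicle", 600, 220, 640, 250)]   # 40 px wide  -> ~36 m
print("too close:", warn_if_too_close(frame_detections))
```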
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Kim, Soowon. „Computational architecture for the detection and segmentation of coherent motion /“. The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487946776021216.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Lidayová, Kristína. „Fast Methods for Vascular Segmentation Based on Approximate Skeleton Detection“. Doctoral thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-318796.

Der volle Inhalt der Quelle
Annotation:
Modern medical imaging techniques have revolutionized health care over the last decades, providing clinicians with high-resolution 3D images of the inside of the patient's body without the need for invasive procedures. Detailed images of the vascular anatomy can be captured by angiography, providing a valuable source of information when deciding whether a vascular intervention is needed, for planning treatment, and for analyzing the success of therapy. However, the increasing level of detail in the images, together with the wide availability of imaging devices, leads to an urgent need for automated techniques for image segmentation and analysis in order to assist clinicians in performing a fast and accurate examination. To reduce the need for user interaction and increase the speed of vascular segmentation, we propose a fast and fully automatic vascular skeleton extraction algorithm. This algorithm first analyzes the volume's intensity histogram in order to automatically adapt its internal parameters to each patient and then produces an approximate skeleton of the patient's vasculature. The skeleton can serve as a seed region for subsequent surface extraction algorithms. Further improvements of the skeleton extraction algorithm include an extension to detect the skeleton of diseased arteries and the design of a convolutional neural network classifier that reduces false positive detections of vascular cross-sections. In addition to the complete skeleton extraction algorithm, the thesis presents a segmentation algorithm based on modified onion-kernel region growing. It initiates the growing from the previously extracted skeleton and provides a rapid binary segmentation of tubular structures. To provide the possibility of extracting precise measurements from this segmentation, we introduce a method for obtaining a segmentation with subpixel precision out of the binary segmentation and the original image. This method is especially suited for thin and elongated structures, such as vessels, since it does not shrink their long protrusions. The method supports both 2D and 3D image data. The methods were validated on real computed tomography datasets and are primarily intended for applications in vascular segmentation; however, they are robust enough to work with other anatomical tree structures after adequate parameter adjustment, which was demonstrated on an airway-tree segmentation.
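The overall pattern, a patient-adaptive intensity threshold derived from the histogram followed by skeletonization that can seed later steps, can be sketched in 2D as follows (Otsu's threshold and the toy slice are stand-in assumptions, not the thesis's histogram analysis):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def approximate_vessel_skeleton(slice_2d):
    """Histogram-derived threshold followed by morphological skeletonization."""
    binary = slice_2d > threshold_otsu(slice_2d)   # bright, contrast-filled vessels
    return skeletonize(binary)

# Toy CT-like slice: dark background with a bright, slightly curved vessel.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.02, (80, 80))
cols = (40 + 10 * np.sin(np.linspace(0, np.pi, 80))).astype(int)
for r, c in enumerate(cols):
    img[r, c - 2:c + 3] = 0.9                       # 5-pixel-wide vessel
skeleton = approximate_vessel_skeleton(img)
print("vessel pixels:", int((img > 0.5).sum()), "-> skeleton pixels:", int(skeleton.sum()))
```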
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Ma, Tianyang. „Graph-based Inference with Constraints for Object Detection and Segmentation“. Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/231622.

Der volle Inhalt der Quelle
Annotation:
Computer and Information Science
Ph.D.
For many fundamental problems of computer vision, adopting a graph-based framework can be straightforward and very effective. In this thesis, I propose several graph-based inference methods tailored for different computer vision applications. The thesis starts by studying contour-based object detection methods. In particular, we propose a novel framework for contour-based object detection, replacing the Hough-voting framework with dense subgraph inference. Compared to previous work, we propose a novel shape matching scheme suitable for partial matching of edge fragments. The shape descriptor has the same geometric units as shape context, but our shape representation is not histogram based. The key contribution is that the grouping of partial matching hypotheses into object detection hypotheses is formulated as maximum clique inference on a weighted graph. Consequently, each detection result not only identifies the location of the target object in the image, but also provides a precise location of its contours, since we transform a complete model contour to the image. We achieve very competitive results on the ETHZ dataset, obtained in a pure shape-based framework, demonstrating that our method achieves not only accurate object detection but also precise contour localization on cluttered backgrounds. Similar to the task of grouping partial matches in the contour-based method, in many computer vision problems we would like to discover a certain pattern among a large amount of data. For instance, in the application of unsupervised video object segmentation, we need to automatically identify the primary object and segment it out in every frame. We propose a novel formulation of selecting object region candidates simultaneously in all frames as finding a maximum weight clique in a weighted region graph. The selected regions are expected to have high objectness scores (unary potential) as well as to share similar appearance (binary potential). Since both unary and binary potentials are unreliable, we introduce two types of mutex (mutual exclusion) constraints on regions in the same clique: intra-frame and inter-frame constraints. Both types of constraints are expressed in a single quadratic form. An efficient algorithm is applied to compute the maximal weight cliques that satisfy the constraints. We apply our method to challenging benchmark videos and obtain very competitive results that outperform state-of-the-art methods. We also show that the same maximum weight subgraph with mutex constraints formulation can be used to solve various computer vision problems, such as point matching, solving image jigsaw puzzles, and detecting objects using 3D contours.
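A greedy stand-in for the maximum-weight clique selection with mutex constraints described above (the actual inference in the thesis is more sophisticated; the weights, compatibilities and mutex pair here are toy assumptions):

```python
import numpy as np

def greedy_weighted_clique(weights, compatible, mutex_pairs):
    """Pick a high-weight set of mutually compatible nodes that violates no mutex pair.
    weights: (n,) array; compatible: (n, n) boolean; mutex_pairs: set of frozensets."""
    chosen = []
    for i in np.argsort(weights)[::-1]:                 # highest weight first
        ok = all(compatible[i, j] and frozenset((int(i), j)) not in mutex_pairs
                 for j in chosen)
        if ok:
            chosen.append(int(i))
    return chosen

# Toy region candidates across frames: weights = objectness, compatibility = appearance
# similarity, mutex = two candidates from the same frame cannot both be selected.
weights = np.array([0.9, 0.8, 0.7, 0.6])
compatible = np.ones((4, 4), dtype=bool)
compatible[0, 3] = compatible[3, 0] = False             # candidates 0 and 3 look too different
mutex_pairs = {frozenset((1, 2))}                       # intra-frame constraint
print("selected candidates:", greedy_weighted_clique(weights, compatible, mutex_pairs))
```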
Temple University--Theses
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Wu, Xinheng. „A Deep Unsupervised Anomaly Detection Model for Automated Tumor Segmentation“. Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22502.

Der volle Inhalt der Quelle
Annotation:
Much research has been devoted to providing computer-aided diagnosis (CAD) with automated tumor segmentation in various medical images, e.g., magnetic resonance (MR), computed tomography (CT) and positron-emission tomography (PET). The recent advances in automated tumor segmentation have been achieved by supervised deep learning (DL) methods trained on large labelled datasets that cover tumor variations. However, there is a scarcity of such training data due to the cost of the labeling process. Thus, with insufficient training data, supervised DL methods have difficulty generating effective feature representations for tumor segmentation. This thesis aims to develop an unsupervised DL method that exploits the large amount of unlabeled data generated during the clinical process. Our assumption, following unsupervised anomaly detection (UAD), is that normal data have constrained anatomy and variations, while anomalies, i.e., tumors, usually differ from normality with high diversity. We demonstrate our method for automated tumor segmentation on two different image modalities. Firstly, given the bilateral symmetry of normal human brains and the asymmetry of brain tumors, we propose a symmetry-driven deep UAD model that uses a GAN to model the normal symmetric variations, thus segmenting tumors by their being asymmetric. We evaluated our method on two benchmark datasets. Our results show that our method outperformed the state-of-the-art unsupervised brain tumor segmentation methods and achieved competitive performance compared to supervised segmentation methods. Secondly, we propose a multi-modal deep UAD model for PET-CT tumor segmentation. We model a manifold of normal variations shared across normal CT and PET pairs; this manifold represents the normal pairing and can be used to segment the anomalies. We evaluated our method on two PET-CT datasets, and the results show that we outperformed the state-of-the-art unsupervised methods, supervised methods and baseline fusion techniques.
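The general UAD recipe, learning only from normal data and flagging what reconstructs poorly, can be sketched with a small convolutional autoencoder (the thesis uses GAN-based models; the architecture, training loop and random data below are purely illustrative assumptions):

```python
import torch
import torch.nn as nn

class SmallAutoencoder(nn.Module):
    """Trained on normal scans only; anomalous tissue reconstructs poorly."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SmallAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_batch = torch.rand(8, 1, 64, 64)              # stand-in for normal image slices
for _ in range(5):                                   # a few illustrative training steps
    recon = model(normal_batch)
    loss = nn.functional.mse_loss(recon, normal_batch)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

test_slice = torch.rand(1, 1, 64, 64)
with torch.no_grad():
    anomaly_map = (model(test_slice) - test_slice).abs()   # high values suggest tumor tissue
print(anomaly_map.shape, float(anomaly_map.mean()))
```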
APA, Harvard, Vancouver, ISO und andere Zitierweisen