Dissertations / Theses on the topic 'EDGE DETECTION MODELS'

To see the other types of publications on this topic, follow the link: EDGE DETECTION MODELS.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 24 dissertations / theses for your research on the topic 'EDGE DETECTION MODELS.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Parekh, Siddharth Avinash. "A comparison of image processing algorithms for edge detection, corner detection and thinning." University of Western Australia. Centre for Intelligent Information Processing Systems, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0073.

Full text
Abstract:
Image processing plays a key role in vision systems. Its function is to extract and enhance pertinent information from raw data. In robotics, processing of real-time data is constrained by limited resources, so it is important to understand and analyse image processing algorithms for accuracy, speed, and quality. The theme of this thesis is an implementation and comparative study of algorithms for various image processing techniques such as edge detection, corner detection and thinning. A re-interpretation of a standard technique, non-maxima suppression for corner detectors, was attempted. In addition, a thinning filter, Hall-Guo, was modified to achieve better results. Since real-time data is generally corrupted with noise, this thesis also incorporates a few smoothing filters that help in noise reduction. Apart from comparing and analysing algorithms for these techniques, an attempt was made to implement correlation-based optic flow
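The non-maxima suppression step this abstract re-interprets can be sketched minimally as follows. This is an illustrative sketch only, not the thesis's implementation: the 3x3 neighbourhood, the strict-maximum rule, and the threshold are all assumptions.

```python
import numpy as np

def nms_3x3(response, threshold=0.0):
    """Keep a pixel only if it strictly dominates its 3x3 neighbourhood
    and exceeds the threshold; every other pixel is suppressed to zero."""
    h, w = response.shape
    out = np.zeros_like(response)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = response[y - 1:y + 2, x - 1:x + 2]
            centre = response[y, x]
            # strict maximum: centre equals the patch max exactly once
            if centre > threshold and centre == patch.max() \
                    and (patch == centre).sum() == 1:
                out[y, x] = centre
    return out

# Toy corner-response map with one dominant corner at (2, 2)
r = np.zeros((5, 5))
r[2, 2] = 9.0
r[2, 3] = 4.0   # weaker neighbour: should be suppressed
kept = nms_3x3(r)
```

In a real corner detector the response map would come from, e.g., a Harris measure; here it is hand-built so the suppression behaviour is easy to verify.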
APA, Harvard, Vancouver, ISO, and other styles
2

Rathnayaka, Mudiyanselage Kanchana. "3D reconstruction of long bones utilising magnetic resonance imaging (MRI)." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49779/1/Kanchana_Rathnayaka_Mudiyanselage_Thesis.pdf.

Full text
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above-60-year-old age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal to noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs.
One of the artefacts observed is the step artefact that is believed to occur from the random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods: intensity thresholding and Canny edge detection as accurate but simple segmentation methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. The single level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contour of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using the mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora and segmenting them using the multilevel threshold method. A surface geometric comparison was conducted between CT based, MRI based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using the identical protocols were compared by means of SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. 
In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an aligning method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach in combination with the threshold selection method can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding deviation for the single threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI based models exhibited a 0.23 mm average deviation in comparison to the 0.18 mm average deviation of CT based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
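The multilevel thresholding idea, one threshold per anatomical region (proximal, diaphyseal, distal), can be illustrated with a toy sketch. The regions, intensities and thresholds below are invented for illustration and are not the thesis's calibrated values.

```python
import numpy as np

def segment_multilevel(image, region_slices, thresholds):
    """Binary bone mask built region by region: each slice of the image
    gets its own intensity threshold, so a dimmer region is not lost to
    a single global cut-off."""
    mask = np.zeros(image.shape, dtype=bool)
    for sl, t in zip(region_slices, thresholds):
        mask[sl] = image[sl] >= t
    return mask

# Synthetic "scan": bright proximal bone, dimmer distal bone, dark background
img = np.zeros((6, 4))
img[0:3, 1:3] = 200.0   # proximal bone, high intensity
img[3:6, 1:3] = 120.0   # distal bone, lower intensity
regions = [np.s_[0:3, :], np.s_[3:6, :]]
mask = segment_multilevel(img, regions, thresholds=[150.0, 100.0])
```

A single global threshold of 150 would drop the distal half entirely; the per-region thresholds recover all twelve bone pixels.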
APA, Harvard, Vancouver, ISO, and other styles
3

Ramesh, Visvanathan. "Model for precise detection of bone edges." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/40957.

Full text
Abstract:

A mathematical model which is used to detect bone edges accurately is described in this thesis. This model is derived by assuming the X-ray source to be a square region. It is shown that for an ideal X-ray source (point source), the bone edge lies exactly at the location of maximum first derivative of the imaged object's transmission function. However, for the non-ideal case, it is shown that the bone edge does not lie at the maximum first derivative location. Also, it is shown that an offset can be calculated from the edge parameters. The Marr-Hildreth edge detector is used to detect the initial estimates for edge location. Precise estimates are obtained by using the facet model. The offset is then calculated and applied to these estimates.
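The two-step procedure the abstract describes, an initial estimate at the maximum first derivative followed by a model-derived correction, can be sketched in one dimension. The profile and the offset value below are placeholders; the thesis derives the offset from the edge parameters of its finite-source model.

```python
import numpy as np

def edge_estimate_with_offset(profile, offset):
    """Initial edge estimate = location of the maximum first derivative
    of the transmission profile; the final estimate shifts it by an
    offset that accounts for the non-ideal (finite-size) X-ray source."""
    d = np.diff(profile)
    initial = int(np.argmax(np.abs(d)))
    return initial, initial + offset

# Blurred step: transmission rises over a few samples
profile = np.array([0.0, 0.0, 0.1, 0.5, 0.9, 1.0, 1.0])
initial, corrected = edge_estimate_with_offset(profile, offset=1)
```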


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
4

Bilen, Burak. "Model Based Building Extraction From High Resolution Aerial Images." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12604984/index.pdf.

Full text
Abstract:
A method for detecting buildings in high resolution aerial images is proposed. The aim is to extract the buildings from high resolution aerial images using the Hough transform and model based perceptual grouping techniques. The edges detected from the image are the basic structures used in the building detection procedure. The method proposed in this thesis makes use of basic image processing techniques. Noise removal and image sharpening techniques are used to enhance the input image. Then, the edges are extracted from the image using the Canny edge detection algorithm. The edges obtained are composed of discrete points. These discrete points are vectorized in order to generate straight line segments. This is performed with the use of the Hough transform and perceptual grouping techniques. The straight line segments become the basic structures of the buildings. Finally, the straight line segments are grouped based on predefined model(s) using the model based perceptual grouping technique. The groups of straight line segments are candidates for 2D structures that may be buildings, shadows or other man-made objects. The proposed method was implemented as a program written in the C programming language. The approach was applied to several study areas, and the results achieved are encouraging. The number of extracted buildings increases if the orientations of the buildings are nearly the same and the Canny edge detector detects most of the building edges. If the buildings have different orientations, some of the buildings may not be extracted with the proposed method. In addition to building orientation, the building size and the parameters used in the Hough transform and perceptual grouping stages also affect the success of the proposed method.
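The Hough transform step that turns discrete edge points into straight lines can be sketched as a vote accumulator over (theta, rho) cells. The tiny point set and two-angle grid below are illustrative only; a real implementation uses a dense angle grid and binned rho values.

```python
import numpy as np

def hough_lines(points, thetas):
    """Accumulate votes in (theta, rho) space using the normal form
    rho = x*cos(theta) + y*sin(theta); collinear edge points pile up
    in a single cell, which identifies the line."""
    acc = {}
    for x, y in points:
        for t in thetas:
            rho = int(round(x * np.cos(t) + y * np.sin(t)))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# Edge points lying on the vertical line x = 3
pts = [(3, 0), (3, 1), (3, 2), (3, 5)]
thetas = [0.0, np.pi / 2]
acc = hough_lines(pts, thetas)
best = max(acc, key=acc.get)   # the cell with the most votes
```

All four points vote for the cell (theta=0, rho=3), i.e. the line x = 3, while the other cells receive one vote each.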
APA, Harvard, Vancouver, ISO, and other styles
5

Mickum, George S. "Development of a dedicated hybrid K-edge densitometer for pyroprocessing safeguards measurements using Monte Carlo simulation models." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54358.

Full text
Abstract:
Pyroprocessing is an electrochemical method for recovering actinides from used nuclear fuel and recycling them into fresh nuclear fuel. It is posited herein that proposed safeguards approaches on pyroprocessing for nuclear material control and accountability face several challenges due to the unproven plutonium-curium inseparability argument and the limitations of neutron counters. Thus, the Hybrid K-Edge Densitometer is currently being investigated as an assay tool for the measurement of pyroprocessing materials in order to perform effective safeguards. This work details the development of a computational model created using the Monte Carlo N-Particle code to reproduce HKED assay of samples expected from the pyroprocesses. The model incorporates detailed geometrical dimensions of the Oak Ridge National Laboratory HKED system, realistic detector pulse height spectral responses, optimum computational efficiency, and optimization capabilities. The model has been validated on experimental data representative of samples from traditional reprocessing solutions and then extended to the sample matrices and actinide concentrations of pyroprocessing. Data analysis algorithms were created in order to account for unsimulated spectral characteristics and correct inaccuracies in the simulated results. The realistic assay results obtained with the model have provided insight into the extension of the HKED technique to pyroprocessing safeguards and reduced the calibration and validation efforts in support of that design study. Application of the model has allowed for a detailed determination of the volume of the sample being actively irradiated as well as provided a basis for determining the matrix effects from the pyroprocessing salts on the HKED assay spectra.
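The K-edge densitometry principle underlying the HKED technique can be sketched with Beer-Lambert: the abrupt jump in attenuation across an element's K-edge lets the log-transmission difference isolate that element's density. The coefficients and geometry below are synthetic illustration values, not ORNL system parameters, and the matrix attenuation (assumed smooth across the edge) is omitted.

```python
import math

def kedge_concentration(t_below, t_above, mu_below, mu_above, path_cm):
    """Solve Beer-Lambert on both sides of the K-edge.  Given the
    transmissions just below/above the edge and the element's mass
    attenuation coefficients (cm^2/g) there, the jump in the
    log-transmission yields the element's density (g/cm^3)."""
    delta_log = math.log(t_below) - math.log(t_above)
    return delta_log / ((mu_above - mu_below) * path_cm)

# Synthetic round trip: density 0.1 g/cm^3 over a 2 cm path with an
# attenuation jump from 1.0 to 5.0 cm^2/g should reproduce itself.
rho = kedge_concentration(math.exp(-0.2), math.exp(-1.0), 1.0, 5.0, 2.0)
```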
APA, Harvard, Vancouver, ISO, and other styles
6

Pálka, Zbyněk. "Detekce automobilů v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218828.

Full text
Abstract:
This thesis deals with traffic monitoring. It describes several methods of background extraction and four methods of vehicle detection, together with one method for vehicle counting. All of these methods were implemented in Matlab, where a graphical user interface was created. One whole chapter is dedicated to the practical realisation. All methods are compared on a set of test videos, and the resulting statistics indicate the efficiency of each method.
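A running-average background model of the kind compared in such traffic-monitoring work can be sketched as follows. The update rate and threshold are assumed values, and the thesis evaluates several background-extraction methods, not necessarily this one.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1-alpha)*bg + alpha*frame.
    Slow adaptation (small alpha) absorbs lighting drift while moving
    vehicles remain distinct from the background."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def moving_mask(bg, frame, thresh=30):
    """A pixel is flagged as 'moving' if it differs from the background
    estimate by more than the threshold."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

# Two-pixel toy frame: the second pixel jumps (a vehicle entering)
bg = [100.0, 100.0]
frame = [100.0, 200.0]
mask = moving_mask(bg, frame)
bg = update_background(bg, frame)
```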
APA, Harvard, Vancouver, ISO, and other styles
7

Wesolkowski, Slawomir. "Color Image Edge Detection and Segmentation: A Comparison of the Vector Angle and the Euclidean Distance Color Similarity Measures." Thesis, University of Waterloo, 1999. http://hdl.handle.net/10012/937.

Full text
Abstract:
This work is based on Shafer's Dichromatic Reflection Model as applied to color image formation. The color spaces RGB, XYZ, CIELAB, CIELUV, rgb, l1l2l3, and the new h1h2h3 color space are discussed from this perspective. Two color similarity measures are studied: the Euclidean distance and the vector angle. The work in this thesis is motivated from a practical point of view by several shortcomings of current methods. The first problem is the inability of all known methods to properly segment objects from the background without interference from object shadows and highlights. The second shortcoming is the non-examination of the vector angle as a distance measure that is capable of directly evaluating hue similarity without considering intensity especially in RGB. Finally, there is inadequate research on the combination of hue- and intensity-based similarity measures to improve color similarity calculations given the advantages of each color distance measure. These distance measures were used for two image understanding tasks: edge detection, and one strategy for color image segmentation, namely color clustering. Edge detection algorithms using Euclidean distance and vector angle similarity measures as well as their combinations were examined. The list of algorithms is comprised of the modified Roberts operator, the Sobel operator, the Canny operator, the vector gradient operator, and the 3x3 difference vector operator. Pratt's Figure of Merit is used for a quantitative comparison of edge detection results. Color clustering was examined using the k-means (based on the Euclidean distance) and Mixture of Principal Components (based on the vector angle) algorithms. A new quantitative image segmentation evaluation procedure is introduced to assess the performance of both algorithms. 
Quantitative and qualitative results on many color images (artificial, staged scenes and natural scene images) indicate good edge detection performance using a vector version of the Sobel operator on the h1h2h3 color space. The results using combined hue- and intensity-based difference measures show a slight qualitative improvement over using each measure independently in RGB. Quantitative and qualitative results for image segmentation on the same set of images suggest that the best image segmentation results are obtained using the Mixture of Principal Components algorithm on the RGB, XYZ and rgb color spaces. Finally, poor color clustering results in the h1h2h3 color space suggest that some assumptions in deriving a simplified version of the Dichromatic Reflectance Model might have been violated.
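The two colour similarity measures this thesis compares can be sketched directly: the Euclidean distance responds to intensity changes (such as shadows), while the angle between RGB vectors does not. The colour triples below are invented for illustration.

```python
import numpy as np

def euclidean_dist(c1, c2):
    """Plain Euclidean distance between two colour vectors."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))

def vector_angle(c1, c2):
    """Angle between two RGB vectors: scaling a colour by a constant
    (e.g. a shadow darkening it) leaves the angle unchanged, so this
    measure compares hue while ignoring intensity."""
    a, b = np.asarray(c1, float), np.asarray(c2, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

red = (200, 0, 0)
dark_red = (100, 0, 0)   # same hue at half the intensity (a "shadow")
green = (0, 200, 0)
```

The Euclidean distance between red and dark_red is large (100), yet their vector angle is zero, which is exactly why the angle is attractive for segmenting objects despite shadows and highlights.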
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Chenguang. "Low level feature detection in SAR images." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT015.

Full text
Abstract:
In this thesis we develop low level feature detectors for Synthetic Aperture Radar (SAR) images to facilitate the joint use of SAR and optical data. Line segments and edges are very important low level features in images, usable for many applications such as image analysis, image registration and object detection. Contrary to the availability of many efficient low level feature detectors dedicated to optical images, there are very few efficient line segment or edge detectors for SAR images, mostly because of the strong multiplicative noise. In this thesis we develop a generic line segment detector and an efficient edge detector for SAR images. The proposed line segment detector, named LSDSAR, is based on a Markovian a contrario model and the Helmholtz principle, where line segments are validated according to their meaningfulness. More specifically, a line segment is validated if its expected number of occurrences in a random image under the hypothesis of the Markovian a contrario model is small. Contrary to the usual a contrario approaches, the Markovian a contrario model allows strong filtering in the gradient computation step, since dependencies between local orientations of neighbouring pixels are permitted thanks to the use of a first order Markov chain. LSDSAR benefits from the accuracy and efficiency of the new definition of the background model: many true line segments in SAR images are detected with a control of the number of false detections. Moreover, very little parameter tuning is required in practical applications of LSDSAR.
The second contribution of this thesis is a deep learning based edge detector for SAR images. The contributions of the proposed edge detector are twofold: 1) under the hypothesis that both optical images and real SAR images can be divided into piecewise constant areas, we propose to simulate a SAR dataset using an optical dataset; 2) we propose to train a classical CNN (convolutional neural network) edge detector, HED, directly on the gradient fields of images. This, by using an adequate method to compute the gradient, enables SAR images at test time to have statistics similar to the training set as inputs to the network. More precisely, the gradient distribution for all homogeneous areas is the same, and the gradient distribution for two homogeneous areas across boundaries depends only on the ratio of their mean intensity values. The proposed method, GRHED, significantly improves the state of the art, especially in very noisy cases such as 1-look images.
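The gradient-by-ratio idea that motivates GRHED (intensity differences are unreliable under multiplicative speckle, but ratios of local means depend only on the underlying reflectivity ratio) can be sketched in one dimension. The window size and test row are illustrative; the actual method operates on 2D gradient fields.

```python
import numpy as np

def gradient_by_ratio(row, half=2):
    """Horizontal 'gradient' as the absolute log-ratio of the mean
    intensities of the windows left and right of each position.  Under
    multiplicative noise this statistic depends only on the ratio of
    the underlying mean reflectivities, unlike a difference gradient."""
    n = len(row)
    g = np.zeros(n)
    for i in range(half, n - half):
        left = row[i - half:i].mean()
        right = row[i:i + half].mean()
        g[i] = abs(np.log(right / left))
    return g

# Piecewise-constant row: reflectivity 1 then 4, edge between index 3 and 4
row = np.array([1.0, 1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 4.0])
g = gradient_by_ratio(row)
peak = int(np.argmax(g))   # strongest response sits at the boundary
```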
APA, Harvard, Vancouver, ISO, and other styles
9

Oldham, Kevin M. "Table tennis event detection and classification." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19626.

Full text
Abstract:
It is well understood that multiple video cameras and computer vision (CV) technology can be used in sport for match officiating, statistics and player performance analysis. A review of the literature reveals a number of existing solutions, both commercial and theoretical, within this domain. However, these solutions are expensive and often complex in their installation. The hypothesis for this research states that by considering only changes in ball motion, automatic event classification is achievable with low-cost monocular video recording devices, without the need for 3-dimensional (3D) positional ball data and representation. The focus of this research is a rigorous empirical study of low cost single consumer-grade video camera solutions applied to table tennis, confirming that monocular CV based detected ball location data contains sufficient information to enable key match-play events to be recognised and measured. In total a library of 276 event-based video sequences, using a range of recording hardware, were produced for this research. The research has four key considerations: i) an investigation into an effective recording environment with minimum configuration and calibration, ii) the selection and optimisation of a CV algorithm to detect the ball from the resulting single source video data, iii) validation of the accuracy of the 2-dimensional (2D) CV data for motion change detection, and iv) the data requirements and processing techniques necessary to automatically detect changes in ball motion and match those to match-play events. Throughout the thesis, table tennis has been chosen as the example sport for observational and experimental analysis since it offers a number of specific CV challenges due to the relatively high ball speed (in excess of 100kph) and small ball size (40mm in diameter). Furthermore, the inherent rules of table tennis show potential for a monocular based event classification vision system. 
As the initial stage, a proposed optimum location and configuration of the single camera is defined. Next, the selection of a CV algorithm is critical in obtaining usable ball motion data. It is shown in this research that segmentation processes vary in their ball detection capabilities and location outputs, which ultimately affects the ability of automated event detection and decision making solutions. Therefore, a comparison of CV algorithms is necessary to establish confidence in the accuracy of the derived location of the ball. As part of the research, a CV software environment has been developed to allow robust, repeatable and direct comparisons between different CV algorithms. An event based method of evaluating the success of a CV algorithm is proposed. Comparison of CV algorithms is made against the novel Efficacy Metric Set (EMS), producing a measurable Relative Efficacy Index (REI). Within the context of this low cost, single camera ball trajectory and event investigation, experimental results show that the Horn-Schunck Optical Flow algorithm, with a REI of 163.5, is the most successful method when compared to a discrete selection of CV detection and extraction techniques gathered from the literature review. Furthermore, evidence based data from the REI also suggests switching to the Canny edge detector (a REI of 186.4) for segmentation of the ball when in close proximity to the net. In addition to and in support of the data generated from the CV software environment, a novel method is presented for producing simultaneous data from 3D marker based recordings, reduced to 2D and compared directly to the CV output to establish comparative time-resolved data for the ball location. It is proposed here that a continuous scale factor, based on the known dimensions of the ball, is incorporated at every frame. Using this method, comparison results show a mean accuracy of 3.01 mm when applied to a selection of nineteen video sequences and events.
This tolerance is within 10% of the diameter of the ball and accountable by the limits of image resolution. Further experimental results demonstrate the ability to identify a number of match-play events from a monocular image sequence using a combination of the suggested optimum algorithm and ball motion analysis methods. The results show a promising application of 2D based CV processing to match-play event classification with an overall success rate of 95.9%. The majority of failures occur when the ball, during returns and services, is partially occluded by either the player or racket, due to the inherent problem of using a monocular recording device. Finally, the thesis proposes further research and extensions for developing and implementing monocular based CV processing of motion based event analysis and classification in a wider range of applications.
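Detecting a match-play event purely from changes in 2D ball motion, the central hypothesis of this thesis, can be caricatured with a bounce detector on the vertical image coordinate. The trajectory below is invented, and the thesis's actual event classifiers are considerably more elaborate.

```python
def bounce_events(ys):
    """Flag frames where the ball's vertical motion reverses from
    downward to upward.  Image y grows downward, so a local maximum
    in y marks a crude 'bounce' event, using 2D positions only."""
    events = []
    for i in range(1, len(ys) - 1):
        if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
            events.append(i)
    return events

# Toy y-trajectory: the ball falls, bounces at frame 3, rises,
# falls again and bounces at frame 8
ys = [10, 20, 30, 40, 30, 20, 25, 35, 45, 35]
```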
APA, Harvard, Vancouver, ISO, and other styles
10

Kozina, Lubomír. "Detekce a počítání automobilů v obraze (videodetekce)." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218382.

Full text
Abstract:
In this master's thesis on the topic of videodetection for traffic monitoring, I searched for moving objects in sequences of traffic images. The thesis describes various methods for computing a background model and for marking and counting moving vehicles or calculating their velocity. A graphical user interface for evaluating traffic scenes was created in MATLAB.
APA, Harvard, Vancouver, ISO, and other styles
11

Richards, Mark Andrew. "An intuitive motion-based input model for mobile devices." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16556/.

Full text
Abstract:
Traditional methods of input on mobile devices are cumbersome and difficult to use. Devices have become smaller, while their operating systems have become more complex, to the extent that they are approaching the level of functionality found on desktop computer operating systems. The buttons and toggle-sticks currently employed by mobile devices are a relatively poor replacement for the keyboard and mouse style user interfaces used on their desktop computer counterparts. For example, when looking at a screen image on a device, we should be able to move the device to the left to indicate we wish the image to be panned in the same direction. This research investigates a new input model based on the natural hand motions and reactions of users. The model developed by this work uses the generic embedded video cameras available on almost all current-generation mobile devices to determine how the device is being moved and maps this movement to an appropriate action. Surveys using mobile devices were undertaken to determine both the appropriateness and efficacy of such a model as well as to collect the foundational data with which to build the model. Direct mappings between motions and inputs were achieved by analysing users' motions and reactions in response to different tasks. Once the framework was completed, a proof of concept was created on the Windows Mobile Platform. This proof of concept leverages both DirectShow and Direct3D to track objects in the video stream, maps these objects to a three-dimensional plane, and determines device movements from this data. This input model holds the promise of being a simpler and more intuitive method for users to interact with their mobile devices, and has the added advantage that no hardware additions or modifications are required for existing mobile devices.
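The final mapping from tracked camera motion to an input action can be sketched as follows. The dead-zone value and the direction convention are assumptions for illustration, not the mappings the thesis derived from its user surveys.

```python
def pan_direction(displacements, dead_zone=1.0):
    """Map the mean horizontal displacement of tracked features to a
    pan command.  Small motions inside the dead zone are ignored so
    that hand jitter does not trigger spurious input events."""
    if not displacements:
        return "none"
    mean_dx = sum(displacements) / len(displacements)
    if mean_dx > dead_zone:
        return "pan-right"
    if mean_dx < -dead_zone:
        return "pan-left"
    return "none"
```

Note that when the device itself moves left, tracked scene features drift right in the frame; which physical motion maps to which command is a design choice the thesis resolves empirically.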
APA, Harvard, Vancouver, ISO, and other styles
12

BUENO, REGIS C. "Detecção de contornos em imagens de padrões de escoamento bifásico com alta fração de vazio em experimentos de circulação natural com o uso de processamento inteligente." reponame:Repositório Institucional do IPEN, 2016. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26817.

Full text
Abstract:
This work developed a new method for contour detection in digital images that contain objects of interest very close to one another and background complexities such as abrupt intensity variation and illumination oscillation. The developed method uses fuzzy logic and the standard deviation of the slope (fuzzy slope standard deviation - FuzDec) for image processing and contour detection. Contour detection is an important task for estimating two-phase flow characteristics through segmentation of bubble images, in order to obtain parameters such as void fraction and bubble diameter. FuzDec was applied to experimentally acquired images of natural circulation instabilities. Image acquisition was performed using the Natural Circulation Circuit (CCN) of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). This circuit is built entirely of glass tubes, which allows visualisation and imaging of single-phase and two-phase flow in natural circulation cycles under low pressure. The results showed that the proposed detector improved contour identification efficiently in comparison with classical contour detectors, without the need for smoothing algorithms and without human intervention.
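The "standard deviation of the slope" ingredient of FuzDec can be sketched in one dimension: where the local first differences vary strongly, a contour is likely even under uneven illumination. This is a loose illustration only; the thesis combines the statistic with fuzzy membership functions, which are omitted here.

```python
import math

def slope_std(profile, half=1):
    """Local standard deviation of the slope (first difference) of an
    intensity profile.  Flat or smoothly varying regions give values
    near zero; abrupt transitions give large values."""
    d = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    out = []
    for i in range(len(d)):
        lo, hi = max(0, i - half), min(len(d), i + half + 1)
        win = d[lo:hi]
        m = sum(win) / len(win)
        out.append(math.sqrt(sum((x - m) ** 2 for x in win) / len(win)))
    return out
```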
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
13

Richards, Mark Andrew. "An intuitive motion-based input model for mobile devices." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16556/1/Mark_Richards_Thesis.pdf.

Full text
Abstract:
Traditional methods of input on mobile devices are cumbersome and difficult to use. Devices have become smaller, while their operating systems have become more complex, to the extent that they are approaching the level of functionality found on desktop computer operating systems. The buttons and toggle-sticks currently employed by mobile devices are a relatively poor replacement for the keyboard and mouse style user interfaces used on their desktop computer counterparts. For example, when looking at a screen image on a device, we should be able to move the device to the left to indicate we wish the image to be panned in the same direction. This research investigates a new input model based on the natural hand motions and reactions of users. The model developed by this work uses the generic embedded video cameras available on almost all current-generation mobile devices to determine how the device is being moved and maps this movement to an appropriate action. Surveys using mobile devices were undertaken to determine both the appropriateness and efficacy of such a model as well as to collect the foundational data with which to build the model. Direct mappings between motions and inputs were achieved by analysing users' motions and reactions in response to different tasks. Once the framework was completed, a proof of concept was created on the Windows Mobile Platform. This proof of concept leverages both DirectShow and Direct3D to track objects in the video stream, maps these objects to a three-dimensional plane, and determines device movements from this data. This input model holds the promise of being a simpler and more intuitive method for users to interact with their mobile devices, and has the added advantage that no hardware additions or modifications are required for existing mobile devices.
APA, Harvard, Vancouver, ISO, and other styles
14

Devillard, François. "Vision du robot mobile Mithra." Grenoble INPG, 1993. http://www.theses.fr/1993INPG0112.

Full text
Abstract:
We propose an on-board stereoscopic vision system intended for the navigation of a mobile robot on an industrial site. In mobile robotics, vision systems are subject to severe operating constraints (real-time processing, volume, power consumption, etc.). For 3D modelling of the environment, the vision system must use visual cues that allow a compact, precise and robust encoding of the observed scene. To best meet the speed constraints, we focused on extracting from the images the most topologically significant information. For missions on industrial sites, the environments encountered present orthogonal geometries such as wall intersections, doors, windows, furniture and so on. Detecting near-vertical geometries provides a sufficient description of the environment while reducing the redundancy of visual information to a satisfactory degree. The cues used are vertical line segments extracted from two stereoscopic images. We propose algorithmic solutions for edge detection and polygonal approximation suited to a real-time implementation. We then present the vision system that was built. It consists of two VME boards: the first is a hard-wired systolic operator implementing image acquisition and edge detection; the second is built around a digital signal processor and performs the polygonal approximation. The design and realization of this vision system were carried out within the mobile-robotics project EUREKA EU 110 (Mithra).
APA, Harvard, Vancouver, ISO, and other styles
15

Mattsson, Per, and Andreas Eriksson. "Segmentation of Carotid Arteries from 3D and 4D Ultrasound Images." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1141.

Full text
Abstract:

This thesis presents a 3D semi-automatic segmentation technique for extracting the lumen surface of the Carotid arteries including the bifurcation from 3D and 4D ultrasound examinations.

Ultrasound images are inherently noisy. Therefore, to aid the inspection of the acquired data an adaptive edge preserving filtering technique is used to reduce the general high noise level. The segmentation process starts with edge detection with a recursive and separable 3D Monga-Deriche-Canny operator. To reduce the computation time needed for the segmentation process, a seeded region growing technique is used to make an initial model of the artery. The final segmentation is based on the inflatable balloon model, which deforms the initial model to fit the ultrasound data. The balloon model is implemented with the finite element method.

The segmentation technique produces 3D models that are intended as pre-planning tools for surgeons. The results from a healthy person are satisfactory and the results from a patient with stenosis seem rather promising. A novel 4D model of wall motion of the Carotid vessels has also been obtained. From this model, 3D compliance measures can easily be obtained.
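The pipeline above builds its initial artery model with seeded region growing before the balloon-model refinement. A generic 2-D sketch of seeded region growing is given below; the thesis works in 3-D on ultrasound volumes, and the function name and tolerance parameter here are illustrative assumptions.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed value by at most `tol`."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# Dark lumen (0s) surrounded by bright wall (9s): only the lumen is grown.
img = [[0, 0, 9], [0, 0, 9], [9, 9, 9]]
print(sorted(region_grow(img, (0, 0), 1)))
```

Region growing is fast but leaks through weak boundaries, which is why it serves only as the initialization for the finite-element balloon model described in the abstract.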

APA, Harvard, Vancouver, ISO, and other styles
16

Li, Yunming. "Machine vision algorithms for mining equipment automation." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
17

"A novel sub-pixel edge detection algorithm: with applications to super-resolution and edge sharpening." 2013. http://library.cuhk.edu.hk/record=b5884269.

Full text
Abstract:
Lee, Hiu Fung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 80-82).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
18

McIlhagga, William H., and K. A. May. "Optimal edge filters explain human blur detection." 2012. http://hdl.handle.net/10454/6091.

Full text
Abstract:
Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N(1), and N(3)(+)) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur.
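The classification image described above resembled a derivative-of-a-Gaussian filter. The sketch below builds such a 1-D kernel and shows that its response magnitude peaks at a luminance step; the sigma, radius, and signal are illustrative assumptions, not values from the study.

```python
import math

def dgauss_kernel(sigma, radius):
    # First derivative of a Gaussian: -x/sigma^2 * exp(-x^2 / (2 sigma^2)).
    return [-x / sigma**2 * math.exp(-x * x / (2 * sigma**2))
            for x in range(-radius, radius + 1)]

def filter_response(signal, kernel):
    # Correlation over valid positions; resp[i] is centred on signal[i + r].
    klen = len(kernel)
    return [sum(signal[i + k] * kernel[k] for k in range(klen))
            for i in range(len(signal) - klen + 1)]

# A step edge between indices 9 and 10: the response magnitude peaks there.
signal = [0.0] * 10 + [1.0] * 10
resp = filter_response(signal, dgauss_kernel(1.5, 4))
peak = max(range(len(resp)), key=lambda i: abs(resp[i])) + 4
print(peak)
```

A blurred edge spreads the response and lowers its peak, which is the kind of diagnostic information an observer fitting a Bayesian prior over blurs could exploit.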
APA, Harvard, Vancouver, ISO, and other styles
19

Tesfamariam, Ermias Beyene. "Distributed processing of large remote sensing images using MapReduce - A case of Edge Detection." Master's thesis, 2011. http://hdl.handle.net/10362/8279.

Full text
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Advances in sensor technology and its ever-increasing repositories of collected data are revolutionizing the mechanisms by which remotely sensed data are collected, stored and processed. This exponential growth of data archives and users' increasing demand for real- and near-real-time remote sensing data products have pressured remote sensing service providers to deliver the required services. The remote sensing community has recognized the challenge of processing large and complex satellite datasets to derive customized products, and several efforts have been made in the past few years towards incorporating high-performance computing models into remote sensing data collection, management and analysis. This study adds impetus to these efforts by introducing a recent advance in distributed computing technology, the MapReduce programming paradigm, to the area of remote sensing. The MapReduce model, developed by Google Inc., encapsulates the machinery of distributed computing in a single, highly simplified library; this simple but powerful programming model provides a distributed environment without requiring deep knowledge of parallel programming. This thesis presents MapReduce-based processing of large satellite images through a use-case scenario of edge detection methods. Starting from conceptual massive remote-sensing image-processing applications, a prototype of the edge detection methods was implemented on the MapReduce framework using its open-source implementation, the Apache Hadoop environment. The experience of implementing MapReduce versions of the Sobel, Laplacian, and Canny edge detection methods is presented, together with an evaluation of the effect of MapReduce parallelization on output quality and execution-time performance tests based on various performance metrics.
The MapReduce algorithms were executed in a test environment on a heterogeneous cluster running the open-source Apache Hadoop software. The successful implementation of the MapReduce algorithms in a distributed environment demonstrates that MapReduce has great potential for scaling the processing of large remotely sensed images and for tackling more complex geospatial problems.
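The thesis parallelizes Sobel, Laplacian, and Canny over image splits. The sketch below shows the per-tile Sobel gradient-magnitude computation that a single map task might perform on its split; it is plain Python for illustration, not actual Hadoop mapper code, and the tile layout is an assumption.

```python
import math

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel

def sobel_magnitude(tile):
    """Gradient magnitude for one image tile -- the kind of work a single
    map task would do on its split of a large scene."""
    h, w = len(tile), len(tile[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * tile[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * tile[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

tile = [[0, 0, 0, 100, 100] for _ in range(5)]
print(sobel_magnitude(tile)[2])  # strong response at the vertical step
```

In a real MapReduce deployment each split would need a one-pixel halo of neighbouring rows so that tile borders do not lose edges, one of the practical issues such an implementation has to address.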
APA, Harvard, Vancouver, ISO, and other styles
20

KAUSHIK, RAVI. "PLANT DISEASE DETECTION USING IMAGE SEGMENTATION & CONVOLUTIONAL NEURAL NETWORK." Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16913.

Full text
Abstract:
Identifying regions in an image and labelling them by class is called image segmentation. Automatic image segmentation is one of the major current research areas: new models are regularly proposed to perform better segmentation for computer-vision tasks, since the better a computer can see, the better we can automate tasks in daily life. In this survey we compare various image segmentation techniques and, on the basis of our research, apply the best approach to an application: developing a model that identifies diseased plants and tells users what kind of disease is present. A detailed analysis of the methodology is carried out with several analysis techniques appropriate to the context of the work. Our focus is on techniques that we are able to optimize and improve over existing ones. This survey emphasizes the practical application of image segmentation techniques, making them more useful to the general public for monitoring activities that cannot be carried out manually.
APA, Harvard, Vancouver, ISO, and other styles
21

Lin, Yu-Ying, and 林予應. "A Hierarchical Poisson Model for Community Detection of Undirected Single-edge Graphs." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/x3b78d.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Statistics
105
As graph data have gained great popularity in the last decade, network analysis has become an important line of research; moreover, in the big-data era, handling large graphs is the next challenge. Much research on exploring graph data starts with community detection. The Poisson model provides a different point of view, carrying out the detection by assuming the raw networks are multi-edged. Although some weight information is missing and only single edges are observed, we design a mechanism to estimate the weights. We then assume each node has a propensity to connect to other nodes and take advantage of the fast optimization techniques of Ball et al. for parameter estimation; in a sense, our model can be regarded as a generalization of Ball et al.'s model. A conditional EM algorithm is applied to carry out the estimation, and AICc serves as the model selection criterion for choosing the number of groups. The computational complexity is O(N²K). According to results on synthetic and real data, our method is effective and fast. Compared with other optimization algorithms, the EM algorithm requires relatively few iterations, and therefore has the potential to be applied to large graphs.
APA, Harvard, Vancouver, ISO, and other styles
22

Dron, Lisa. "The Multi-Scale Veto Model: A Two-Stage Analog Network for Edge Detection and Image Reconstruction." 1992. http://hdl.handle.net/1721.1/5981.

Full text
Abstract:
This paper presents the theory behind a model for a two-stage analog network for edge detection and image reconstruction to be implemented in VLSI. Edges are detected in the first stage using the multi-scale veto rule, which eliminates candidates that do not pass a threshold test at each of a set of different spatial scales. The image is reconstructed in the second stage from the brightness values adjacent to edge locations. The MSV rule allows good localization and efficient noise removal. Since the reconstructed images are visually similar to the originals, the possibility exists of achieving significant bandwidth compression.
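The multi-scale veto rule keeps an edge candidate only if it passes a threshold test at every spatial scale. A 1-D sketch of that idea is below; the box smoothing, per-scale thresholds, and function names are illustrative assumptions (the actual model is a 2-D analog VLSI network).

```python
def box_smooth(signal, w):
    # Moving average with clamped borders, so output aligns with input.
    r = w // 2
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def multiscale_veto(signal, scale_thresholds):
    """Keep an edge candidate (between i and i+1) only if the smoothed
    difference exceeds the threshold at *every* scale; a single failure
    vetoes the candidate."""
    n = len(signal)
    surviving = set(range(n - 1))
    for w, thresh in scale_thresholds.items():
        s = box_smooth(signal, w)
        surviving &= {i for i in range(n - 1)
                      if abs(s[i + 1] - s[i]) > thresh}
    return sorted(surviving)

# A clean step survives at all scales and is localized sharply.
step = [0] * 8 + [100] * 8
print(multiscale_veto(step, {1: 50, 3: 20}))
```

The fine scale provides the localization and the coarse scales provide the noise rejection: small fluctuations pass the fine-scale test but are vetoed once smoothing averages them away.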
APA, Harvard, Vancouver, ISO, and other styles
23

Lin, Sung-Po, and 林松柏. "Traditional, Wavelet and Active Contour Model in Edge Detection Analysis of Wave Image." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/05573979348824619069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ganin, Iaroslav. "Natural image processing and synthesis using deep learning." Thèse, 2019. http://hdl.handle.net/1866/23437.

Full text
Abstract:
In the present thesis, we study how deep neural networks can be applied to various tasks in computer vision. Computer vision is an interdisciplinary field that deals with the understanding of digital images and video. Traditionally, the problems arising in this domain were tackled using heavily hand-engineered ad-hoc methods: a typical computer vision system until recently consisted of a sequence of independent modules that barely talked to each other. Such an approach is quite reasonable in the case of limited data, as it takes maximum advantage of the researcher's domain expertise, but this strength turns into a weakness if some input scenarios are overlooked in the algorithm design process. With the rapidly increasing volumes and varieties of data and the advent of cheaper and faster computational resources, end-to-end deep neural networks have become an appealing alternative to traditional computer vision pipelines. We demonstrate this in a series of research articles, each of which considers a particular task of either image analysis or synthesis and presents a solution based on a "deep" backbone. In the first article, we deal with the classic low-level vision problem of edge detection. Inspired by a top-performing non-neural approach, we take a step towards building an end-to-end system by combining feature extraction and description in a single convolutional network. The resulting fully data-driven method matches or surpasses the detection quality of existing conventional approaches in the settings for which they were designed, while being significantly more usable in out-of-domain situations. In our second article, we introduce a custom architecture for image manipulation based on the idea that most of the pixels in the output image can be directly copied from the input. This technique bears several significant advantages over the naive black-box neural approach.
It retains the level of detail of the original images, does not introduce artifacts caused by insufficient capacity of the underlying neural network, and simplifies the training process, to name a few. We demonstrate the efficiency of the proposed architecture on the challenging gaze correction task, where our system achieves excellent results. In the third article, we diverge slightly from pure computer vision and study the more general problem of domain adaptation. There, we introduce a novel training-time algorithm (i.e., adaptation is attained by using an auxiliary objective in addition to the main one). We seek to extract features that maximally confuse a dedicated network called the domain classifier while remaining useful for the task at hand. The domain classifier is learned simultaneously with the features and attempts to tell whether those features come from the source or the target domain. The proposed technique is easy to implement, yet results in superior performance on all the standard benchmarks. Finally, the fourth article presents a new kind of generative model for image data. Unlike conventional neural-network-based approaches, our system, dubbed SPIRAL, describes images in terms of concise low-level programs executed by the off-the-shelf rendering software used by humans to create visual content. Among other things, this allows SPIRAL not to waste its capacity on the minutiae of datasets and to focus more on global structure. The latent space of our model is easily interpretable by design and provides means for predictable image manipulation. We test our approach on several popular datasets and demonstrate its power and flexibility.
APA, Harvard, Vancouver, ISO, and other styles
