
Doctoral dissertations on the topic "Invariant Feature Transform"



Listed below are 43 of the most relevant doctoral dissertations on the topic "Invariant Feature Transform".


Where the metadata allow it, the full text of a dissertation can be downloaded as a ".pdf" file and its abstract read online.

Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.

1

May, Michael. "Data analytics and methods for improved feature selection and matching". Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Abstract:
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on using the Scale Invariant Feature Transform (SIFT).The preliminary analysis of the SIFT investigates how this algorithm functions. Included is an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating between images which are likely to contain corresponding regions from images which do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types utilising a GPGPU implementation of SIFT. The results have demonstrated which parameters are more important when optimising the algorithm and the areas within the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR), fusion features process has been developed where the SIFT image features are matched within high contrast scenes. Bracketed exposure images are analysed and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. They are shown to reduce the effects of noise and artefacts that are introduced when extracting features from HDR images directly and have a superior image matching performance. The final area is the development of a novel, 3D-based, SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and class matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster in order to attempt to discriminate between correct and incorrect matches using the a contrario methodology. The results show that the technique provides a method for discriminating between correct and incorrect matches and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
2

Decombas, Marc. "Compression vidéo très bas débit par analyse du contenu". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.

Abstract:
L’objectif de cette thèse est de trouver de nouvelles méthodes de compression sémantique compatible avec un encodeur classique tel que H.264/AVC. . L’objectif principal est de maintenir la sémantique et non pas la qualité globale. Un débit cible de 300 kb/s a été fixé pour des applications de sécurité et de défense Pour cela une chaine complète de compression a dû être réalisée. Une étude et des contributions sur les modèles de saillance spatio-temporel ont été réalisées avec pour objectif d’extraire l’information pertinente. Pour réduire le débit, une méthode de redimensionnement dénommée «seam carving » a été combinée à un encodeur H.264/AVC. En outre, une métrique combinant les points SIFT et le SSIM a été réalisée afin de mesurer la qualité des objets sans être perturbée par les zones de moindre contenant la majorité des artefacts. Une base de données pouvant être utilisée pour des modèles de saillance mais aussi pour de la compression est proposée avec des masques binaires. Les différentes approches ont été validées par divers tests. Une extension de ces travaux pour des applications de résumé vidéo est proposée
The objective of this thesis is to find new methods for semantic video compression compatible with a traditional encoder such as H.264/AVC. The main objective is to maintain the semantics rather than the global quality. A target bitrate of 300 kb/s has been fixed for defense and security applications. To this end, a complete compression chain has been proposed. A study and new contributions on a spatio-temporal saliency model have been carried out to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. Also, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used for testing the saliency model but also for video compression has been proposed, containing sequences with their manually extracted binary masks. All the different approaches have been thoroughly validated by tests. An extension of this work to video summary applications has also been proposed.
3

Dardas, Nasser Hasan Abdel-Qader. "Real-time Hand Gesture Detection and Recognition for Human Computer Interaction". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23499.

Abstract:
This thesis focuses on bare hand gesture recognition by proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system allows detecting and tracking a bare hand in a cluttered background using face subtraction, skin detection and contour comparison. The second stage allows recognizing hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints for every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of every training image into a unified dimensional histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using my algorithm. Then, the keypoints are extracted for every small image that contains the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. Another hand gesture recognition system was proposed using Principal Component Analysis (PCA). The most significant eigenvectors and the weights of the training images are determined. In the testing stage, the hand posture is detected for every frame using my algorithm. Then, the small image that contains the detected hand is projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance is determined among the test weights and the training weights of each training image to recognize the hand gesture. Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame makes use of a stationary bicycle as one of the main inputs for game playing. The user can control and direct left-right movement and shooting actions in the game by a set of hand gesture commands, while in the second game, the user can control and direct a helicopter over the city by a set of hand gesture commands.
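For readers unfamiliar with the bag-of-features pipeline this abstract describes (SIFT keypoints, k-means vector quantization into a bag-of-words histogram, multi-class SVM), a minimal sketch is given below. It is an illustration only, not the author's implementation; it assumes OpenCV >= 4.4 (SIFT in the main module), scikit-learn, and hypothetical `train_images` / `train_labels` holding segmented grayscale hand images and their gesture labels.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()
K = 200  # codebook size (assumed value)

def sift_descriptors(img):
    # img: grayscale uint8 image containing the segmented hand posture
    _, des = sift.detectAndCompute(img, None)
    return des if des is not None else np.empty((0, 128), np.float32)

def bow_histogram(des, kmeans):
    # Quantize descriptors against the codebook and build a normalized histogram
    hist = np.zeros(K, np.float32)
    if len(des):
        for word in kmeans.predict(des):
            hist[word] += 1
        hist /= hist.sum()
    return hist

# --- offline training (train_images / train_labels are hypothetical) ---
all_des = [sift_descriptors(im) for im in train_images]
kmeans = KMeans(n_clusters=K, n_init=4).fit(np.vstack(all_des))
X = np.array([bow_histogram(d, kmeans) for d in all_des])
clf = SVC(kernel="rbf", C=10.0).fit(X, train_labels)

# --- online testing on one detected hand image ---
def classify(hand_img):
    return clf.predict([bow_histogram(sift_descriptors(hand_img), kmeans)])[0]
```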
4

Ljungberg, Malin. "Design of High Performance Computing Software for Genericity and Variability". Doctoral thesis, Uppsala: Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7768.

5

Sahin, Yavuz. "A Programming Framework To Implement Rule-based Target Detection In Images". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610213/index.pdf.

Abstract:
An expert system is useful when conventional programming techniques fall short of capturing human expert knowledge and making decisions using this information. In this study, we describe a framework for capturing expert knowledge in the form of a decision tree, and this framework can be used for making decisions based on the captured knowledge. The framework proposed in this study is generic and can be used to create domain-specific expert systems for different problems. Features are created or processed by the nodes of the decision tree and a final conclusion is reached for each feature. The framework supplies three types of nodes to construct a decision tree. The first type is the decision node, which guides the search path with its answers. The second type is the operator node, which creates new features using the inputs. The last type is the end node, which corresponds to a conclusion about a feature. Once the nodes of the tree are developed, the user can interactively create the decision tree and run the supplied inference engine to collect the result for a specific problem. The proposed framework is tested in two case studies, "Airport Runway Detection in High Resolution Satellite Images" and "Urban Area Detection in High Resolution Satellite Images". In these studies, linear features are used for structural decisions and Scale Invariant Feature Transform (SIFT) features are used for testing the existence of man-made structures.
6

Murtin, Chloé Isabelle. "Traitement d’images de microscopie confocale 3D haute résolution du cerveau de la mouche Drosophile". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI081/document.

Abstract:
La profondeur possible d’imagerie en laser-scanning microscopie est limitée non seulement par la distance de travail des lentilles de objectifs mais également par la dégradation de l’image causée par une atténuation et une diffraction de la lumière passant à travers l’échantillon. Afin d’étendre cette limite, il est possible, soit de retourner le spécimen pour enregistrer les images depuis chaque côté, or couper progressivement la partie supérieure de l’échantillon au fur et à mesure de l‘acquisition. Les différentes images prises de l’une de ces manières doivent ensuite être combinées pour générer un volume unique. Cependant, des mouvements de l’échantillon durant les procédures d’acquisition engendrent un décalage non seulement sur en translation selon les axes x, y et z mais également en rotation autour de ces même axes, rendant la fusion entres ces multiples images difficile. Nous avons développé une nouvelle approche appelée 2D-SIFT-in-3D-Space utilisant les SIFT (scale Invariant Feature Transform) pour atteindre un recalage robuste en trois dimensions de deux images. Notre méthode recale les images en corrigeant séparément les translations et rotations sur les trois axes grâce à l’extraction et l’association de caractéristiques stables de leurs coupes transversales bidimensionnelles. Pour évaluer la qualité du recalage, nous avons également développé un simulateur d’images de laser-scanning microscopie qui génère une paire d’images 3D virtuelle dans laquelle le niveau de bruit et les angles de rotations entre les angles de rotation sont contrôlés avec des paramètres connus. Pour une concaténation précise et naturelle de deux images, nous avons également développé un module permettant une compensation progressive de la luminosité et du contraste en fonction de la distance à la surface de l’échantillon. Ces outils ont été utilisés avec succès pour l’obtention d’images tridimensionnelles de haute résolution du cerveau de la mouche Drosophila melanogaster, particulièrement des neurones dopaminergiques, octopaminergiques et de leurs synapses. Ces neurones monoamines sont particulièrement important pour le fonctionnement du cerveau et une étude de leur réseau et connectivité est nécessaire pour comprendre leurs interactions. Si une évolution de leur connectivité au cours du temps n’a pas pu être démontrée via l’analyse de la répartition des sites synaptiques, l’étude suggère cependant que l’inactivation de l’un de ces types de neurones entraine des changements drastiques dans le réseau neuronal
Although laser scanning microscopy is a powerful tool for obtaining thin optical sections, the possible depth of imaging is limited by the working distance of the microscope objective but also by the image degradation caused by the attenuation of both the excitation laser beam and the light emitted from the fluorescence-labeled objects. Several workaround techniques have been employed to overcome this problem, such as recording the images from both sides of the sample, or progressively cutting off the sample surface. The different views must then be combined in a unique volume. However, a straightforward concatenation is often not possible, because small movements occur during the acquisition procedure, not only in translation along the x, y and z axes but also in rotation around those axes, making the fusion difficult. To address this problem we implemented a new algorithm called 2D-SIFT-in-3D-Space using SIFT (Scale Invariant Feature Transform) to achieve a robust registration of big image stacks. Our method registers the images by correcting rotations and translations around the three axes separately, using the extraction and matching of stable features in 2D cross-sections. In order to evaluate the registration quality, we created a simulator that generates artificial images mimicking laser scanning image stacks: it produces a mock pair of stacks from the same data, one of which is rotated arbitrarily by known angles and corrupted with a known level of noise. For a precise and natural-looking concatenation of the two images, we also developed a module that progressively corrects the sample brightness and contrast depending on the distance to the sample surface. These tools were successfully used to generate three-dimensional high resolution images of the fly Drosophila melanogaster brain, in particular its octopaminergic and dopaminergic neurons and their synapses. These monoamine neurons appear to be determinant in the correct operation of the central nervous system, and a precise and systematic analysis of their evolution and interaction is necessary to understand its mechanisms. Although an evolution over time could not be highlighted through the analysis of pre-synaptic sites, our study suggests that the inactivation of one of these neuron types triggers drastic changes in the neural network.
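As a rough illustration of registering two overlapping stacks by matching SIFT features on 2D cross-sections (the general principle behind 2D-SIFT-in-3D-Space, not the authors' actual code), the sketch below estimates the in-plane shift between corresponding slices and takes the median over the stack. It assumes OpenCV >= 4.4 and hypothetical uint8 arrays `stack_a`, `stack_b` of shape (z, y, x).

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def slice_shift(slice_a, slice_b):
    """Estimate the in-plane translation between two grayscale slices from SIFT matches."""
    ka, da = sift.detectAndCompute(slice_a, None)
    kb, db = sift.detectAndCompute(slice_b, None)
    if da is None or db is None:
        return None
    pairs = matcher.knnMatch(da, db, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return None
    src = np.float32([ka[m.queryIdx].pt for m in good])
    dst = np.float32([kb[m.trainIdx].pt for m in good])
    # Robust similarity fit; only the translation component is used here
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return None if M is None else (M[0, 2], M[1, 2])

# Median in-plane shift over every 10th pair of corresponding cross-sections
shifts = [s for z in range(0, min(stack_a.shape[0], stack_b.shape[0]), 10)
          if (s := slice_shift(stack_a[z], stack_b[z])) is not None]
dx, dy = np.median(np.array(shifts), axis=0)
```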
7

Eskizara, Omer. "3d Geometric Hashing Using Transform Invariant Features". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610546/index.pdf.

Abstract:
3D object recognition is performed by using geometric hashing, where transformation and scale invariant 3D surface features are utilized. 3D features are extracted from object surfaces after a scale space search where the size of each feature is also estimated. The scale space is constructed based on orientation invariant surface curvature values which classify the shape of each surface point. Extracted features are grouped into triplets and orientation invariant descriptors are defined for each triplet. Each pose of each object is indexed in a hash table using these triplets. For scale invariant matching, cosine similarity is applied to the scale-variant triplet variables. Tests were performed on the Stuttgart database, where 66 poses of 42 objects were stored in the hash table during training and 258 poses of 42 objects were used during testing. A recognition rate of 90.97% is achieved.
8

Dellinger, Flora. "Descripteurs locaux pour l'imagerie radar et applications". Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0037/document.

Abstract:
Nous étudions ici l’intérêt des descripteurs locaux pour les images satellites optiques et radar. Ces descripteurs, par leurs invariances et leur représentation compacte, offrent un intérêt pour la comparaison d’images acquises dans des conditions différentes. Facilement applicables aux images optiques, ils offrent des performances limitées sur les images radar, en raison de leur fort bruit multiplicatif. Nous proposons ici un descripteur original pour la comparaison d’images radar. Cet algorithme, appelé SAR-SIFT, repose sur la même structure que l’algorithme SIFT (détection de points-clés et extraction de descripteurs) et offre des performances supérieures pour les images radar. Pour adapter ces étapes au bruit multiplicatif, nous avons développé un opérateur différentiel, le Gradient par Ratio, permettant de calculer une norme et une orientation du gradient robustes à ce type de bruit. Cet opérateur nous a permis de modifier les étapes de l’algorithme SIFT. Nous présentons aussi deux applications pour la télédétection basées sur les descripteurs. En premier, nous estimons une transformation globale entre deux images radar à l’aide de SAR-SIFT. L’estimation est réalisée à l’aide d’un algorithme RANSAC et en utilisant comme points homologues les points-clés mis en correspondance. Enfin nous avons mené une étude prospective sur l’utilisation des descripteurs pour la détection de changements en télédétection. La méthode proposée compare les densités de points-clés mis en correspondance aux densités de points-clés détectés pour mettre en évidence les zones de changement
We study here the interest of local features for optical and SAR images. These features, because of their invariances and their dense representation, offer a real interest for the comparison of satellite images acquired under different conditions. While it is easy to apply them to optical images, they offer limited performances on SAR images, because of their multiplicative noise. We propose here an original feature for the comparison of SAR images. This algorithm, called SAR-SIFT, relies on the same structure as the SIFT algorithm (detection of keypoints and extraction of features) and offers better performances for SAR images. To adapt these steps to multiplicative noise, we have developed a differential operator, the Gradient by Ratio, allowing to compute a magnitude and an orientation of the gradient robust to this type of noise. This operator allows us to modify the steps of the SIFT algorithm. We present also two applications for remote sensing based on local features. First, we estimate a global transformation between two SAR images with help of SAR-SIFT. The estimation is realized with help of a RANSAC algorithm and by using the matched keypoints as tie points. Finally, we have led a prospective study on the use of local features for change detection in remote sensing. The proposed method consists in comparing the densities of matched keypoints to the densities of detected keypoints, in order to point out changed areas
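The Gradient by Ratio idea (computing gradient magnitude and orientation from ratios of local means rather than differences, so that multiplicative speckle cancels out) can be sketched as follows. This is a simplified approximation using shifted box means instead of the exponentially weighted means of the original SAR-SIFT operator, and it ignores border wrap-around; NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_by_ratio(img, radius=4, eps=1e-6):
    """Speckle-robust gradient from log-ratios of opposite local means (simplified sketch)."""
    img = img.astype(np.float64) + eps
    mean = uniform_filter(img, 2 * radius + 1)
    # Means of the half-windows on either side of each pixel (borders wrap, ignored here)
    left  = np.roll(mean,  radius, axis=1)
    right = np.roll(mean, -radius, axis=1)
    up    = np.roll(mean,  radius, axis=0)
    down  = np.roll(mean, -radius, axis=0)
    gx = np.log(right / left)   # horizontal log-ratio: insensitive to multiplicative noise
    gy = np.log(down / up)      # vertical log-ratio
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return magnitude, orientation
```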
9

Hejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.

Abstract:
This thesis describes methods for reconstructing 3D scenes from photographs and videos using the Structure from Motion approach. A new piece of software capable of automatic reconstruction of point clouds and polygonal models from common images and videos was implemented based on these methods. The software uses a variety of existing and custom solutions and clearly links them into one easily executable application. The reconstruction consists of feature point detection, pairwise matching, bundle adjustment, stereoscopic algorithms and polygonal model creation from the point cloud using the PCL library. The program is based on Bundler and PMVS. The Poisson surface reconstruction algorithm, as well as simple triangulation and a custom reconstruction method based on plane segmentation, were used for polygonal model creation.
10

Saravi, Sara. "Use of Coherent Point Drift in computer vision applications". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.

Abstract:
This thesis presents the novel use of Coherent Point Drift in improving the robustness of a number of computer vision applications. CPD approach includes two methods for registering two images - rigid and non-rigid point set approaches which are based on the transformation model used. The key characteristic of a rigid transformation is that the distance between points is preserved, which means it can be used in the presence of translation, rotation, and scaling. Non-rigid transformations - or affine transforms - provide the opportunity of registering under non-uniform scaling and skew. The idea is to move one point set coherently to align with the second point set. The CPD method finds both the non-rigid transformation and the correspondence distance between two point sets at the same time without having to use a-priori declaration of the transformation model used. The first part of this thesis is focused on speaker identification in video conferencing. A real-time, audio-coupled video based approach is presented, which focuses more on the video analysis side, rather than the audio analysis that is known to be prone to errors. CPD is effectively utilised for lip movement detection and a temporal face detection approach is used to minimise false positives if face detection algorithm fails to perform. The second part of the thesis is focused on multi-exposure and multi-focus image fusion with compensation for camera shake. Scale Invariant Feature Transforms (SIFT) are first used to detect keypoints in images being fused. Subsequently this point set is reduced to remove outliers, using RANSAC (RANdom Sample Consensus) and finally the point sets are registered using CPD with non-rigid transformations. The registered images are then fused with a Contourlet based image fusion algorithm that makes use of a novel alpha blending and filtering technique to minimise artefacts. The thesis evaluates the performance of the algorithm in comparison to a number of state-of-the-art approaches, including the key commercial products available in the market at present, showing significantly improved subjective quality in the fused images. The final part of the thesis presents a novel approach to Vehicle Make & Model Recognition in CCTV video footage. CPD is used to effectively remove skew of vehicles detected as CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approaching angles. A LESH (Local Energy Shape Histogram) feature based approach is used for vehicle make and model recognition with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximise the reliability of the final outcome. Experimental results are provided to prove that the proposed system demonstrates an accuracy in excess of 95% when tested on real CCTV footage with no prior camera calibration.
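A minimal sketch of the keypoint-preparation step described above (SIFT detection, ratio-test matching, then RANSAC to discard outliers before the reduced point sets are handed to CPD) is given below. It assumes OpenCV >= 4.4; the CPD registration itself and the Contourlet-based fusion are not shown.

```python
import cv2
import numpy as np

def inlier_point_sets(img1, img2, ratio=0.75):
    """Matched keypoint coordinates of two grayscale images, RANSAC-filtered."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < ratio * n.distance]
    if len(good) < 4:
        raise ValueError("not enough tentative matches for RANSAC")
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    keep = mask.ravel().astype(bool)
    # These reduced point sets would then be registered non-rigidly with CPD
    return src[keep].reshape(-1, 2), dst[keep].reshape(-1, 2)
```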
11

Leoputra, Wilson Suryajaya. "Video foreground extraction for mobile camera platforms". Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/1384.

Abstract:
Foreground object detection is a fundamental task in computer vision with many applications in areas such as object tracking, event identification, and behavior analysis. Most conventional foreground object detection methods work only in stable illumination environments using fixed cameras. In real-world applications, however, it is often the case that the algorithm needs to operate under the following challenging conditions: drastic lighting changes, object shape complexity, moving cameras, low frame capture rates, and low resolution images. This thesis presents four novel approaches for foreground object detection on real-world datasets using cameras deployed on moving vehicles. The first problem addresses passenger detection and tracking tasks for public transport buses, investigating the problem of changing illumination conditions and low frame capture rates. Our approach integrates a stable SIFT (Scale Invariant Feature Transform) background seat modelling method with a human shape model into a weighted Bayesian framework to detect passengers. To deal with the problem of tracking multiple targets, we employ the Reversible Jump Monte Carlo Markov Chain tracking algorithm. Using the SVM classifier, the appearance transformation models capture changes in the appearance of the foreground objects across two consecutive frames under low frame rate conditions. In the second problem, we present a system for pedestrian detection involving scenes captured by a mobile bus surveillance system. It integrates scene localization, foreground-background separation, and pedestrian detection modules into a unified detection framework. The scene localization module performs a two stage clustering of the video data. In the first stage, SIFT Homography is applied to cluster frames in terms of their structural similarity, and the second stage further clusters these aligned frames according to consistency in illumination. This produces clusters of images that differ in viewpoint and lighting. A kernel density estimation (KDE) technique for colour and gradient is then used to construct background models for each image cluster, which are further used to detect candidate foreground pixels. Finally, using a hierarchical template matching approach, pedestrians can be detected. In addition to the second problem, we present three direct pedestrian detection methods that extend the HOG (Histogram of Oriented Gradient) techniques (Dalal and Triggs, 2005) and provide a comparative evaluation of these approaches. The three approaches include: a) a new histogram feature that is formed by the weighted sum of both the gradient magnitude and the filter responses from a set of elongated Gaussian filters (Leung and Malik, 2001) corresponding to the quantised orientation, which we refer to as the Histogram of Oriented Gradient Banks (HOGB) approach; b) the codebook based HOG feature with the branch-and-bound (efficient subwindow search) algorithm (Lampert et al., 2008); and c) the codebook based HOGB approach. In the third problem, a unified framework that combines 3D and 2D background modelling is proposed to detect scene changes using a camera mounted on a moving vehicle. The 3D scene is first reconstructed from a set of videos taken at different times. The 3D background modelling identifies inconsistent scene structures as foreground objects. For the 2D approach, foreground objects are detected using the spatio-temporal MRF algorithm. Finally, the 3D and 2D results are combined using morphological operations. The significance of this research is that it provides basic frameworks for automatic large-scale mobile surveillance applications and facilitates many higher-level applications such as object tracking and behaviour analysis.
12

Rahtu, E. (Esa). "A multiscale framework for affine invariant pattern recognition and registration". Doctoral thesis, University of Oulu, 2007. http://urn.fi/urn:isbn:9789514286018.

Abstract:
Abstract This thesis presents a multiscale framework for the construction of affine invariant pattern recognition and registration methods. The idea in the introduced approach is to extend the given pattern to a set of affine covariant versions, each carrying slightly different information, and then to apply known affine invariants to each of them separately. The key part of the framework is the construction of the affine covariant set, and this is done by combining several scaled representations of the original pattern. The advantages compared to previous approaches include the possibility of many variations and the inclusion of spatial information on the patterns in the features. The application of the multiscale framework is demonstrated by constructing several new affine invariant methods using different preprocessing techniques, combination schemes, and final recognition and registration approaches. The techniques introduced are briefly described from the perspective of the multiscale framework, and further treatment and properties are presented in the corresponding original publications. The theoretical discussion is supported by several experiments where the new methods are compared to existing approaches. In this thesis the patterns are assumed to be gray scale images, since this is the main application where affine relations arise. Nevertheless, multiscale methods can also be applied to other kinds of patterns where an affine relation is present. An additional application of one multiscale based technique in convexity measurements is introduced. The method, called multiscale autoconvolution, can be used to build a convexity measure which is a descriptor of object shape. The proposed measure has two special features compared to existing approaches. It can be applied directly to gray scale images approximating binary objects, and it can be easily modified to produce a number of measures. The new measure is shown to be straightforward to evaluate for a given shape, and it performs well in the applications, as demonstrated by the experiments in the original paper.
13

Kopečný, Josef. "Návrh nové metody pro stereovidění". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235903.

Abstract:
This thesis covers the problems of photogrammetry. It describes the instruments, theoretical background and procedures for acquiring, preprocessing and segmenting the input images and for calculating the depth map. The main content of this thesis is the description of a new method of stereovision: its algorithm, implementation and experimental evaluation. The method belongs to the correlation-based methods. The main emphasis lies on the segmentation, which supports the depth map calculation.
14

Barreiros, João Carlos da Costa. "Fast Scale-Invariant Feature Transform on GPU". Master's thesis, 2020. http://hdl.handle.net/10316/93988.

Abstract:
Integrated Master's dissertation in Electrical and Computer Engineering presented to the Faculty of Sciences and Technology.
Feature extraction of high-resolution images is a challenging procedure in low-power signal processing applications. This thesis describes how to optimize and efficiently parallelize the scale-invariant feature transform (SIFT) feature detection algorithm and maximize the use of bandwidth on the GPU subsystem. Together with the minimization of data communications between host and device, the successful parallelization of all the main kernels used in SIFT allowed a global speedup above 78x on high-resolution images while being more than an order of magnitude more energy efficient (FPS/W) than its serial counterpart. Of the 3 GPUs tested, the low-power GPU has shown superior energy efficiency -- 44 FPS/W.
15

Huang, Ling-Hsuan, i 黃齡萱. "CBIR System with Scale-Invariant Feature Transform". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/30841024099347199111.

Abstract:
Master's thesis, 國立宜蘭大學, 資訊工程研究所碩士班, 97.
In recent years, with the development of multimedia systems and computer networks, the number of digital images has grown rapidly. This thesis presents a CBIR (Content-Based Image Retrieval) system based on the Scale-Invariant Feature Transform, assisted by an artificial neural network for matching, in order to achieve accurate and efficient retrieval. To address the semantic gap of content-based image retrieval, the image feature analysis combines color and texture characteristics with local gray-level variation to obtain keypoints. These keypoints are invariant to scale and rotation, so they can be matched more easily across images that differ in scale or rotation, and they are used to narrow the semantic gap and improve the retrieval accuracy of the system.
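As a toy illustration of retrieval by local-feature matching (much simpler than the thesis system, which also uses color/texture features and a neural network), database images can be ranked by the number of ratio-test SIFT matches with the query. OpenCV >= 4.4 is assumed, and `database` is a hypothetical dict mapping image ids to grayscale images.

```python
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher(cv2.NORM_L2)

def good_match_count(des_query, des_db, ratio=0.7):
    """Number of SIFT matches that pass Lowe's ratio test."""
    if des_query is None or des_db is None:
        return 0
    pairs = bf.knnMatch(des_query, des_db, k=2)
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def retrieve(query_img, database, top=5):
    """Rank database images by the number of good matches with the query image."""
    _, des_query = sift.detectAndCompute(query_img, None)
    scores = {name: good_match_count(des_query, sift.detectAndCompute(img, None)[1])
              for name, img in database.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top]
```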
16

Tsai, Ruei-Jen, i 蔡睿烝. "Accelerating Scale-Invariant Feature Transform Using Graphic Processing Units". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/60537499609581683635.

Abstract:
Master's thesis, 國立臺灣師範大學, 科技應用與人力資源發展學系, 101.
Content-based image retrieval (CBIR) is the application of computer vision techniques to searching for digital images in large databases using the actual image contents, such as colors, shapes, and textures, rather than metadata such as keywords, tags, and/or descriptions associated with the image. Many techniques from image processing and computer vision are applied to capture the image contents. Among them, the scale invariant feature transform (SIFT) has been widely adopted in many applications, such as object recognition, image stitching, and stereo correspondence, to extract and describe local features in images. In certain applications such as CBIR, feature extraction is a preprocessing step and feature matching is the most computing-intensive process. Graphics Processing Units (GPUs) have attracted a lot of attention because of their dramatic power of parallel computing on massive data. In this thesis, we propose a GPU-based SIFT by accelerating linear search and K-Nearest Neighbor (KNN) matching on GPUs. The proposed approach is 22 times faster than the ordinary Nearest Neighbor (NN) search performed on CPUs, and 11 times faster than the ordinary linear search with KNN performed on CPUs.
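The linear-search KNN that the thesis moves onto the GPU is, at its core, a dense distance-matrix computation, which is why it parallelizes so well. A NumPy sketch of the same computation is shown below (CPU only; on a GPU the distance matrix would be evaluated by a CUDA kernel or a library such as CuPy, but the structure is identical).

```python
import numpy as np

def knn_bruteforce(query, db, k=2):
    """query: (m,128) descriptors, db: (n,128) descriptors; returns indices and distances."""
    # Squared Euclidean distances via the expansion |q-d|^2 = |q|^2 + |d|^2 - 2 q.d
    q2 = (query ** 2).sum(1)[:, None]
    d2 = (db ** 2).sum(1)[None, :]
    dist2 = q2 + d2 - 2.0 * query @ db.T            # (m, n) distance matrix
    idx = np.argpartition(dist2, k, axis=1)[:, :k]  # k smallest per row (unsorted)
    row = np.arange(len(query))[:, None]
    order = np.argsort(dist2[row, idx], axis=1)     # sort only those k candidates
    idx = idx[row, order]
    return idx, np.sqrt(np.maximum(dist2[row, idx], 0.0))
```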
17

Rajeev, Namburu. "Analysis of Palmprint and Palmvein Authentication Using Scale Invariant Feature Transform(SIFT) Features". Thesis, 2017. http://ethesis.nitrkl.ac.in/8803/1/2017_MT_N_Rajeev.pdf.

Abstract:
Securing information has become a major issue nowadays, and depending on the requirements and security needs most authentication systems have moved from passcodes and pass cards to biometric systems, where the metrics are derived from human features. Some of the most widely used biometrics are iris, fingerprint, voice recognition and face recognition. There are also other biometrics that can be used to increase the security level, such as the palm vein pattern. For this project, palmprint and palm vein patterns were selected because both metrics are extracted from the same region of the palm. By applying the Scale Invariant Feature Transform (SIFT) method to palmprint and palm vein patterns, we can analyze which metric is better and how efficient the authentication is under different matching techniques. The aim of the project was to analyze the performance of SIFT on palm vein patterns and palmprints to determine which is more secure, because even though both metrics are extracted from the same region, the palm vein pattern is more difficult to forge than the palmprint.
18

Hsieh, Chih-Hsiung, i 謝志雄. "Planer Object Detection Using Scale Invariant Feature Transform Accompanying with Generalized Hough Transform". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/y4u6bz.

Abstract:
Master's thesis, 國立臺北科技大學, 電資碩士在職專班研究所, 102.
In recent years we have seen a wide range of applications of the scale-invariant feature transform (SIFT) algorithm, such as object detection and recognition systems, security monitoring systems, factory automation and inspection systems, and video indexing systems. Without a doubt, SIFT feature points show significant invariance and superiority under conditions such as scaling, rotation, slight perspective change, and illumination changes in images. However, a certain degree of error is to be expected in feature point matching. SIFT is particularly less reliable in object detection when the textures or features of the test object are similar to or the same as those of other foreground objects. To address these matching errors, researchers have proposed methods involving the Nearest Neighbor (NN), the Hough transform (HT), and RANSAC. However, experiments demonstrate that the voting method of the Hough transform can only slightly reduce errors and fails to overcome the problems caused by multiple objects having the same features or textures. In this work, SIFT matches are combined with a model of reference points and edge points established with the Generalized Hough Transform (GHT). This allows for the detection of objects with unknown rotation changes, scale ratios, and irregular shapes. Our results show that the proposed method improves the precision of object detection in experiments, and saves over 50% in computation time compared with the original method. In addition, the method achieves good stability in the relevant experiments.
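The voting idea behind combining SIFT matches with a generalized Hough transform can be illustrated as follows: each match predicts where the model's reference point should fall in the scene (compensated for the relative scale and rotation of the matched keypoints) and votes in a coarse accumulator, and only well-supported clusters are kept. This is a generic sketch of the principle, not the thesis's GHT model with edge points; `matches` are assumed to come from matching model descriptors (query) to scene descriptors (train), and `model_center` is a chosen reference point in the model image.

```python
import numpy as np
from collections import defaultdict

def ght_pose_votes(matches, kp_model, kp_scene, model_center, loc_bin=30.0):
    """Each match predicts the model reference point in the scene and casts a vote."""
    acc = defaultdict(list)
    cx, cy = model_center
    for m in matches:
        a, b = kp_model[m.queryIdx], kp_scene[m.trainIdx]
        s = b.size / a.size                          # relative scale of the matched keypoints
        t = np.deg2rad(b.angle - a.angle)            # relative rotation
        vx, vy = cx - a.pt[0], cy - a.pt[1]          # keypoint -> reference-point vector
        px = b.pt[0] + s * (vx * np.cos(t) - vy * np.sin(t))
        py = b.pt[1] + s * (vx * np.sin(t) + vy * np.cos(t))
        acc[(round(px / loc_bin), round(py / loc_bin))].append(m)
    if not acc:
        return []
    best = max(acc.values(), key=len)                # strongest cluster of consistent votes
    return best if len(best) >= 4 else []
```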
19

Chang, Che-wei, i 張哲維. "A scale invariant feature transform based palm vein recognition system". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/76035923255925172095.

Abstract:
Master's thesis, 國立臺灣科技大學, 資訊工程系, 98.
Biometrics is playing an increasingly important role in modern society. From flash drives, notebooks and entrance guard systems to automatic teller machines, biometric applications can be found built into them. Palm vein recognition is arguably a burgeoning research area in biometrics. A palm vein image contains rich information for identification and authentication, and it provides an accurate recognition rate. With the advantage that it cannot be fabricated, it is becoming a new star of biometrics, and a fast-growing market share can be expected. However, in our country research on palm vein recognition is still rare. In this research, we focus on building a palm texture recognition system using the scale invariant feature transform. The scale invariant feature transform (SIFT) converts captured palm vein images into distinctive feature points that can be compared and used for identifying people. The experimental results show that the system is well suited as a biometric system, and its future is promising.
20

Chen, Pao-Feng, i 陳寶鳳. "Detection and Recognition of Road Signs Using Scale Invariant Feature Transform". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/99439764816390051879.

Abstract:
Master's thesis, 元智大學, 資訊管理研究所, 93.
This study describes an automatic road sign detection and recognition system by using scale invariant feature transform (SIFT). The method consists of two stages. In the detection stage, the relative position of road sign is located by using a priori knowledge, shape and specific color information. The shape feature is then used to reconstruct the road sign in the candidate region, and the road sign image is fully extracted from the original image for further recognition. In the recognition stage, distinctive invariant features are extracted from the road sign image by using SIFT to perform reliable matching. The recognition proceeds by matching individual features to a database of features from known road signs using the fast nearest-neighbor algorithm, a Hough transform for identifying clusters that agree on object pose, and finally performing verification through least-squares solution for consistent pose parameters. Experimental results demonstrate that most road signs can be correctly detected and recognized with an accuracy of 95.37%. Moreover, the extensive experiments have also shown that the proposed method is robust against the major difficulties of detecting and recognizing road signs such as image scaling and rotation, illumination change, partial occlusion, deformation, perspective distortion, and so on. The proposed approach can be very helpful for the development of Driver Support System and Intelligent Autonomous Vehicles to provide effective driving assistance.
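The final verification step mentioned in this abstract (a least-squares solution for consistent pose parameters) amounts to fitting an affine transform to the clustered matches and checking the residual. A small NumPy sketch of such a fit is given below; it is a generic illustration, not the thesis code.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) points onto dst (N,2) points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src          # x equations: a*sx + b*sy + tx = dx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src          # y equations: c*sx + d*sy + ty = dy
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    M = params.reshape(2, 3)                       # [[a, b, tx], [c, d, ty]]
    rms = np.sqrt(np.mean((A @ params - b) ** 2))  # how well the matches agree on one pose
    return M, rms
```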
21

IRMAWULANDARI i IRMAWULANDARI. "Image Fusion Using the Scale Invariant Feature TRansform as Image Registration". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/vk7fbc.

Abstract:
Master's thesis, 國立臺北科技大學, 資訊工程系研究所, 100.
Image fusion is the process of combining two or more images into a single image that retains important features from each. Image fusion is one way to resolve the problem of out-of-focus images produced by non-professional camera users, and it can also be used in remote sensing, robotics and medical applications. In this thesis, a new image fusion technique for multi-focus images based on SIFT (Scale Invariant Feature Transform) is proposed. The fusion procedure is performed by matching SIFT image features for registration and then fusing the two images by averaging their coefficients after they have first been decomposed using the Discrete Wavelet Transform. Conditional sharpening is applied to obtain images of better quality. Experimental results show that the method performs well in multi-focus image fusion.
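The fusion rule outlined above (register with SIFT, decompose with the Discrete Wavelet Transform, average the approximation band, reconstruct) can be illustrated with PyWavelets. The sketch below additionally keeps the stronger detail coefficient, a common variant; the exact rules and the conditional sharpening used in the thesis may differ.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2"):
    """Fuse two registered grayscale images of the same size with a single-level DWT."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(np.float64), wavelet)
    ca = 0.5 * (ca_a + ca_b)                                     # average approximation band
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # keep the stronger detail
    fused = pywt.idwt2((ca, (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b))), wavelet)
    return np.clip(fused, 0, 255).astype(np.uint8)
```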
22

Jian-Wen, Chen, i 陳建文. "Dynamic Visual Tracking Using Scale-Invariant Feature Transform and Particle Filter". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/22327718769209427122.

Abstract:
Master's thesis, 國立高雄應用科技大學, 電機工程系碩士班, 95.
We propose an estimation model simultaneously containing translation, scaling, and rotation for affine transformation with the scale-invariant feature transform (SIFT) technique in the dynamic recognition application. Under the model assumption, it can effectively draw the suitable shape of a distortion target in the cluttered environment. The SIFT is an algorithm which searches the invariant feature via recording the information of orientations around the keypoint, and this method is insensitive to the change of the illumination or occlusion momentarily. In the tracking applications, our proposed algorithm is based on extended particle filter (EPF) approach utilizing prior distributions and posterior ones to estimate parameters of highly nonlinear system. To improve the tracker performance, particle filter combines the foreground-background absolute difference (FBAD) and SIFT to achieve the real time tracking and reliable recognition. Each particle represents a possible state with the associated weight of a measurable likelihood distribution. The estimation results are robust against light and shade changes, and implementation in real-time is plausible.
23

Yang, Tzung-Da, i 楊宗達. "Scale-Invariant Feature Transform (SIFT) Based Iris Match Technology for Identity Identification". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/52714099795239015467.

Abstract:
Master's thesis, 國立中興大學, 電機工程學系所, 105.
Biometrics has been applied to the personal recognition popularly and it becomes more important. The iris recognition is one of the biometric identification methods, and the technology can provide the accurate personal recognition. As early as 2004, the German airport in Frankfurt began to use the iris identification system. By the iris scan identification, the iris information is linked to the passport data database, and the personal identity is functional. In recent years, the iris identification is used widely and increasingly in personal identifications. Even the mobile phone also begin to use the iris identification system, and the importance of biometrics gains more and more attention. The traditional iris recognition technology mainly transforms the iris feature region into a square matrix by using the polar coordinate method, and the square matrix is transformed to the feature codes, and then the signature is used to the feature match finally. The difference between the proposed and the traditional iris recognition systems is : to avoid the eyelid and eyelash interferences, the retrieved iris region in the proposed design only locates near the pupil around the ring area and the lower half of the iris area for recognitions. On the other side, the traditional iris identification uses the feature code matching technology; however, the proposed method uses the image feature matching technology, i.e. the scale-invariant feature transform (SIFT) method. The SIFT uses the local features of the image, and it keeps the feature invariance for the changes of rotation, scaling, and brightness. The SIFT also maintains a certain degree of stability for the change of the perspective affine transformation and noises. Therefore, it is very suitable that the SIFT technology is applied to iris feature matching. In the proposed design, the accuracy of the iris recognition is 95%. Compared with other methods by using the same database and the similar SIFT technology as the matching method, the recognition performance of the proposed design is suitable.
24

Chen, Yu-wei, i 陳昱維. "A Geometry-Distortion Resistant Image Detection System Based on Log-Polar Transform and Scale Invariant Feature Transform". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/38443904983925420897.

Abstract:
Master's thesis, 大同大學, 資訊工程學系(所), 98.
In many image detection systems, the detection results are robust against tampering distortions. However, geometric distortions rearrange the feature positions, and this often affects the results of feature comparison. The scheme presented in this thesis aims at resisting geometric distortions. The scheme contains a feature construction phase and a comparison phase. In the feature construction phase, the scheme extracts unique features from each protected image based on the Log-Polar Transform and the Scale Invariant Feature Transform. In the comparison phase, the scheme extracts features from the suspect image and compares them against each protected image. Furthermore, this thesis also focuses on similar image identification. There are two types of similar images that the scheme targets: the first type contains similar objects in two images, and the second type consists of images of the same scene taken from different views. Both types are a serious issue for feature comparison, and the presented scheme is designed to handle them.
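As a quick illustration of the log-polar mapping used in the feature-construction phase (rotation and scaling of the input become translations along the angular and log-radius axes), OpenCV's warpPolar can be used. This is a generic sketch, not the thesis implementation.

```python
import cv2

def log_polar(img):
    """Map an image to log-polar coordinates about its centre."""
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)  # largest radius fully contained in the image
    return cv2.warpPolar(img, (w, h), center, max_radius,
                         cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR)
```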
25

Lin, Chih-Chang, i 林志展. "Implementation of an Object Security System based on Scale Invariant Feature Transform Algorithm". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/27472606492265476126.

Abstract:
Master's thesis, 佛光大學, 資訊學系, 97.
There has been a significant increase in the use of surveillance cameras in the past few years. Ideally, the use of surveillance cameras and video monitoring systems can not only help alert their users before threatening situations get worse, but also provide them with vital recorded evidence of security/safety events. However, one common shortcoming of traditional video surveillance systems is that they still need human operators to monitor the cameras and to trace security/safety events after the fact from huge amounts of video records. As more and more surveillance cameras are mounted around our society to help stop crime and protect our property, there is an enormous need to develop software solutions and other technologies that make video surveillance systems smarter, in order to streamline and automate their on-line monitoring and evidence retrieval processes. Intelligent video analysis (also known as video analytics) is a well-known solution for making video surveillance systems smarter. Object recognition technologies in video analytics usually refer to image processing algorithms that detect and track objects of interest to look for possible security/safety threats or breaches. Recently, the Scale Invariant Feature Transform (SIFT) algorithm has been recognized as a very useful method for video analytics applications due to its effectiveness in dealing with scale, illumination or position changes of the object of interest. In this research, a SIFT-based intelligent video surveillance system is proposed to help monitor objects (valuable properties) displayed in open spaces. Once the proposed system detects abnormal or suspicious activities via video analytics, it will provide a precautionary warning or record video of the suspicious activity only. In this intelligent system, a Self-Adaptive SIFT (SA-SIFT) algorithm, an improved version of the original SIFT algorithm, is also proposed by adding mechanisms for continually updating the template of SIFT features and adjusting the region of interest. Such enhancements are designed to extend the capability of the intelligent system in object recognition under motion and scene changes. The efficiency and effectiveness of the proposed intelligent object security system are demonstrated experimentally. After benchmarking against the original SIFT algorithm in the same experiments, it is confirmed that the proposed SA-SIFT algorithm is a more suitable method to help surveillance operators monitor expensive or important objects via intelligent video surveillance.
26

Hsieh, Wan-Ching, i 謝皖青. "Using Scale Invariant Feature Transform for Target Identification in High Resolution Optical Image". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/59944080644154827021.

Abstract:
Master's thesis, 國立中央大學, 通訊工程研究所碩士在職專班, 98.
With the finer resolution of satellite imagery, people can extract abundant information and develop more applications from it. Nowadays, research institutes and commercial imagery companies all over the world work intensively to develop many image processing techniques. However, satellite imagery still requires correction and value-added processing for further utilization and applications. Because of differences in imagery collection time, angle and sensors, images of the same location still differ in scale, rotation and translation. In such cases, feature extraction is the key technique for target identification in different images. In this thesis, we try to use the Scale Invariant Feature Transform (SIFT) to extract features and match them in images acquired under different collection conditions. The result shows that SIFT is capable of extracting stable features, and many of them are matched even when the images have different scale and distortion.
27

Lin, Jia-Hong, i 林家弘. "Combining Scale Invariant Feature Transform with Principal Component Analysis in Face Recognition System". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/14470588190349908465.

Abstract:
Master's thesis, 國立東華大學, 資訊工程學系, 96.
Because individual identification, access control, and security appliance issues attract much attention, face recognition applications are becoming more and more popular. The challenge of face recognition is that performance is mainly affected by variations in illumination, expression, pose, and accessories, and most algorithms proposed in recent years focus on how to overcome these constraints. This thesis combines Principal Component Analysis (PCA) and the Scale Invariant Feature Transform (SIFT) for face recognition. Firstly, stable feature vectors that are invariant to image scaling and rotation are extracted with SIFT. Secondly, PCA is applied to project the feature vectors into a new feature space as PCA-SIFT local descriptors, greatly reducing their dimensionality. Lastly, the local descriptors are clustered with the K-means algorithm, and local and global information of the images is combined for face recognition. According to the simulation results, the PCA-SIFT local descriptor performs better than the other comparative methods and is robust to variations in accessories and expression. Another advantage of the PCA-SIFT local descriptor is its better computational efficiency, because PCA greatly reduces the descriptor dimensionality.
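The dimensionality-reduction step described above (projecting 128-D SIFT descriptors into a compact PCA space before clustering) can be sketched with scikit-learn. The choice of 36 output dimensions and the `training_descriptors` array (all SIFT descriptors pooled from the training faces) are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit the projection on descriptors pooled from the training images (hypothetical array)
pca = PCA(n_components=36).fit(training_descriptors)

def pca_sift(descriptors):
    """Project raw 128-D SIFT descriptors to compact PCA-SIFT local descriptors."""
    if descriptors is None or len(descriptors) == 0:
        return np.empty((0, pca.n_components_), np.float32)
    return pca.transform(descriptors).astype(np.float32)
```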
28

Pan, Wei-Zheng, i 潘偉正. "FPGA-Based Implementation for Scale Invariant Feature Transform (SIFT) of Image Recognition Algorithm". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/yjp76f.

Abstract:
Master's thesis, 國立臺灣師範大學, 電機工程學系, 104.
To solve the problem of image recognition, which requires plenty of computation time in software, we present a hardware implementation approach for the SIFT recognition algorithm to achieve real-time execution, through offline calculation of the Gaussian kernel in software, a mathematical derivation that computes the inverse matrix without using any divisors, realization of the image pyramid in parallel, etc. As a result, the system performs well in reducing the number of logic units required, and the system frequency is significantly increased. In addition, the CORDIC algorithm is employed to implement not only mathematical functions such as trigonometric functions and square root computation, but also the image gradient histogram in hardware. Consequently, the dominant orientation detection and keypoint descriptors can be implemented based on the image gradient histogram. To develop an applicable system, the first step is to apply a software and hardware co-design approach to accelerate functional modules and subsequently implement the entire system in pure hardware. In addition, the structure of all modules is based on a pipelined design. Experimental results demonstrate that the proposed approach significantly reduces the computation time required and efficiently increases the maximum system frequency. Most importantly, the execution speed achieves real-time computation for practical applications.
Style APA, Harvard, Vancouver, ISO itp.
29

Xhuan, Wen-Hua, i 宣文華. "Surveillance System Design for Vehicle Tracking and VLSI Architecture Design of Feature Detection in Scale Invariant Feature Transform". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/89117569060195361137.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
105
Automatic visual systems processing high-resolution video streams are increasingly common, and the rapid progress of computer and mobile platforms makes it possible to handle the massive computation that visual analysis requires. Among such systems, object tracking is one of the most basic yet complicated tasks: users always look for the right balance between computation and precision, and increasingly complex applications call for new algorithms to handle unexpected problems. In this architecture, to increase the accuracy of multi-object tracking, I use the scale invariant feature transform to establish an ID for each registered object. After matching all the features in the database against the search area, the main problem is to find robust pairs among the matched keypoints. From these pairs I estimate a precise transform matrix that locates the updated set of keypoints in the search area, and this procedure is repeated to relocate each object in every new input frame. Because of the heavy computation involved, part of the design must be accelerated to meet the real-time requirement, so I built a hardware version of the SIFT feature detection to replace the software one; thanks to the high parallelism of the detection algorithm itself, the hardware substantially reduces the computation and speeds up the original architecture.
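A minimal sketch of the relocation step is given below: from matched keypoint coordinates between an object's stored view and the current search window, a robust similarity transform is estimated with RANSAC and applied to the object's bounding box. The function name, array layout and the 3-pixel RANSAC threshold are illustrative assumptions, not the thesis design.

# Hedged sketch: robustly re-locate a tracked object's box from matched keypoints.
import cv2
import numpy as np

def relocate_box(pts_model, pts_frame, box_corners):
    """pts_model, pts_frame: Nx2 float32 arrays of matched keypoint coordinates.
    box_corners: 4x2 corners of the object's bounding box in the model view."""
    M, inliers = cv2.estimateAffinePartial2D(pts_model, pts_frame,
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    if M is None:
        return None
    # Apply the 2x3 similarity transform to the box corners.
    corners = np.hstack([box_corners.astype(np.float32), np.ones((4, 1), dtype=np.float32)])
    return corners @ M.T, int(inliers.sum())

The inlier count returned alongside the warped box is one simple way to decide whether the relocation is trustworthy before updating the object's keypoint set.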
Style APA, Harvard, Vancouver, ISO itp.
30

Wu, Jia-Shan, i 吳加山. "Real-time 3-D Object Recognition by Using Scale Invariant Feature Transform and Stereo Vision". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/64211211762544190810.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
96
3-D object recognition and stereo vision are important tasks in computer vision. In this thesis, we use the Scale Invariant Feature Transform (SIFT) to find 3-D object features and a GPU to achieve real-time performance. Because SIFT is rotation- and scale-invariant and copes with complex backgrounds, our detector can recognize objects of different sizes from their distinctive features. The corresponding homography is used to compute the out-of-plane orientations. We implement the SIFT algorithm to recognize 3-D objects and use stereo vision to determine the distance from the cameras to the object. A robot arm is then controlled to point at the object based on its orientation and depth information.
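The stereo-vision distance step reduces, for rectified cameras, to the standard pinhole relation Z = f * B / d. The tiny sketch below applies it to one matched feature; the focal length and baseline values are illustrative assumptions.

# Hedged sketch: metric depth from the disparity of one matched feature.
def depth_from_disparity(x_left, x_right, focal_px=700.0, baseline_m=0.12):
    """Return depth in metres for a feature seen at x_left / x_right (rectified cameras assumed)."""
    disparity = x_left - x_right            # in pixels; positive for a point in front of the rig
    if disparity <= 0:
        raise ValueError("non-positive disparity: match is invalid or at infinity")
    return focal_px * baseline_m / disparity

print(depth_from_disparity(412.0, 377.0))   # about 2.4 m for the assumed rig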
Style APA, Harvard, Vancouver, ISO itp.
31

Teng, Ching-Yuan, i 鄧景元. "A study of using Scale Invariant Feature Transform (SIFT) algorithm for radar satellite imagery coregistration". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/84739806805595993983.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan Ocean University
Department of Marine Environmental Informatics
98
Time-sequence radar images are collected from different orbits and incidence angles, so images of the same area differ considerably in scale, position and rotation. This makes it difficult to locate interest points in different images and match them. Moreover, radar reflectance depends strongly on the local incidence angle of the terrain and on object shape, which makes radar imagery even harder to match, so automatic registration of radar imagery has become a critical issue. In this thesis, we study radar imaging geometry, the characteristics of radar imagery, and the differences between images such as variations in scale and rotation. The Scale Invariant Feature Transform (SIFT) has been shown to match optical imagery under changes in scale, translation and rotation. After a thorough study, we apply SIFT to radar imagery to extract stable features automatically, without user intervention, so as to reduce the influence of image shift, scale change and speckle in time-sequence images. Tests of SIFT on several pairs of radar images with different resolutions and imaging angles show that it can locate interest points on roads and buildings and match them accurately; SIFT can therefore register radar imagery effectively and automatically.
Style APA, Harvard, Vancouver, ISO itp.
32

Lee-YungChen i 陳李永. "Age-Variant Face Recognition Scheme Using Scale Invariant Feature Transform and the Probabilistic Neural Network". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/83926691560305817266.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (in-service Master's program)
102
Improving the correct recognition rate of an automatic face recognition system in the presence of aging variation is an important issue, yet most face recognition studies focus only on aging simulation or age estimation. For face recognition under age variation, it is possible to design a suitable and efficient matching framework. This thesis addresses the differences caused by age using the Scale Invariant Feature Transform (SIFT) algorithm, which tolerates noise as well as changes in illumination and viewing angle, and which detects and describes local features of face images by densely sampling local descriptors. A Probabilistic Neural Network (PNN) then makes Bayesian classification decisions, with the smoothing parameter of its probability density function adjusted to improve the recognition rate. Finally, the proposed age-variant face recognition scheme is applied to the FG-NET (Face and Gesture Recognition Research Network) face database, and the simulation results confirm that the correct recognition rate is indeed improved.
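A PNN of this kind is, in essence, a Parzen-window classifier: each class score is an average of Gaussian kernels centred on that class's training descriptors, and the smoothing parameter sigma controls how sharply the density is peaked. The sketch below is a minimal model of that decision rule; the array shapes and sigma value are illustrative assumptions.

# Hedged sketch of a Probabilistic Neural Network (Parzen-window) decision rule.
import numpy as np

def pnn_predict(x, train_vectors, train_labels, sigma=0.5):
    """x: (d,) query descriptor; train_vectors: (n, d); train_labels: (n,) integer class ids."""
    scores = {}
    for c in np.unique(train_labels):
        diffs = train_vectors[train_labels == c] - x
        d2 = np.sum(diffs ** 2, axis=1)
        # Average Gaussian kernel response for class c; the common normalising constant
        # is dropped because only the arg-max over classes matters.
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)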
Style APA, Harvard, Vancouver, ISO itp.
33

FAN, SHU-DUAN, i 范恕端. "Automatic Cardiac Contour Tracking in Ultrasound Imaging Using Active Contour Model and Scale Invariant Feature Transform". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/fc878r.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chung Cheng University
Department of Information Management
102
In this study, we combine an active contour model with the scale invariant feature transform for contour tracking in cardiac ultrasound imaging. The conventional active contour model is ill-suited to cardiac tracking because the mitral and tricuspid valves rise and fall, which leads to poor tracking and excessive convergence of the overall contour during systole. To remedy this deficiency, we add the scale invariant feature transform to track the heart valve positions accurately, preventing the dynamic contour from collapsing below the two valves. The method yields accurate segmentation and tracking results; experiments present the segmentation results, the receiver operating characteristic curve is used to analyse the data, and a comparison with two other methods shows that the proposed method is accurate and effective for cardiac image tracking.
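For the active-contour part alone, a minimal scikit-image sketch is given below; it evolves a rough circular initialization toward edges in one smoothed ultrasound frame. The file name, the initial ellipse and the snake parameters are illustrative assumptions (the parameters follow a published scikit-image example), and the SIFT-based valve anchoring described above is not reproduced here.

# Hedged sketch: evolve an active contour on one (assumed) ultrasound frame.
import numpy as np
from skimage import io, filters, segmentation

frame = io.imread("echo_frame.png", as_gray=True)             # assumed input frame
frame = filters.gaussian(frame, sigma=2)                       # smooth speckle before evolving

theta = np.linspace(0, 2 * np.pi, 200)
# Rough cavity guess as (row, col) points; centre and radii are assumptions.
init = np.column_stack([120 + 60 * np.sin(theta), 150 + 80 * np.cos(theta)])

snake = segmentation.active_contour(frame, init, alpha=0.015, beta=10.0, gamma=0.001)
print("contour points:", snake.shape)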
Style APA, Harvard, Vancouver, ISO itp.
34

Li, Jung-Lin, i 李忠霖. "Stereo Visual Navigation Based on Local Scale-Invariant Feature Transform and Its Nao Embedded System Implementation". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/17978402803821569526.

Pełny tekst źródła
Streszczenie:
Master's thesis
Yunlin University of Science and Technology
Department of Electrical Engineering (Master's program)
98
Stereo vision navigation is a fundamental capability of an intelligent robot, enabling obstacle avoidance, path planning, map building, and localization in the environment. Conventional feature detection methods, however, cannot provide enough evenly distributed feature points to support stereo vision navigation, so intelligent robots often require additional ultrasonic or infrared sensors for assistance. In this thesis, a Local Scale-Invariant Feature Transform (Local SIFT) method is proposed to obtain more, and more evenly distributed, feature points, so that accurate 3D environment modelling and detailed stereo maps can be built easily. Experimental results verify that the proposed Local SIFT detects more reliable feature points. In addition, the thesis implements a simplified stereo vision navigation scheme based on grayscale histogram segmentation on the Nao embedded robot; implementation results show that this simplified navigation is simple and efficient.
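One plausible reading of the histogram-based navigation cue is a global grayscale threshold separating floor from obstacles. The sketch below uses Otsu's histogram threshold for that purpose; the "floor is darker" assumption, the file name and the lower-third heuristic are illustrative assumptions, not the thesis implementation.

# Hedged sketch: grayscale-histogram (Otsu) segmentation as a crude free-space cue.
import cv2

frame = cv2.imread("nao_camera.png", cv2.IMREAD_GRAYSCALE)    # assumed camera frame
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Fraction of presumed free space (darker pixels) in the lower third of the image.
lower = mask[2 * mask.shape[0] // 3:, :]
free_ratio = (lower == 0).mean()
print(f"free-space ratio in lower third: {free_ratio:.2f}")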
Style APA, Harvard, Vancouver, ISO itp.
35

PRAKASH, VED. "AN ANALYTICAL APPROACH TOWARDS CONVERSION OF HUMAN SIGNED LANGUAGE TO TEXT USING MODIFIED SCALE INVARIANT FEATURE TRANSFORM (SIFT)". Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14739.

Pełny tekst źródła
Streszczenie:
Sign language is the communication medium through which deaf and mute people convey messages to each other. A hearing person cannot communicate with a deaf and mute person unless he or she knows sign language, and the same holds when a deaf and mute person wants to communicate with a hearing or blind person. To bridge this communication gap, researchers are working to convert hand signs to voice and vice versa, and a large body of work uses image processing and pattern recognition to automate sign language interpretation. The approaches can be broadly classified as "data-glove based" and "vision based"; the latter track the bare hand and detect it in image frames. The main drawback of such methods is their large computational complexity, which is commonly addressed with the integral image: its use for hand detection in the Viola-Jones method reduces the cost but performs satisfactorily only in a controlled environment. To detect hands against cluttered backgrounds, many researchers use colour information and histogram distribution models, and local orientation histograms have been applied to static gesture recognition; these algorithms perform well under controlled lighting but fail under illumination change, scaling and rotation. To resist illumination changes, elastic graphs with local jets of Gabor filters have been used to represent different hand gestures. AdaBoost approaches for wearable computing are insensitive to camera movement and user variance; their hand tracking is promising but their segmentation is unreliable. Fourier descriptors of binary hand blobs have been fed to a Radial Basis Function (RBF) classifier for pose classification and combined with HMM classifiers for gesture classification; although such systems achieve good performance, they are not robust to the many variations that occur during hand movement. To handle variations such as rotation, scaling and translation, popular techniques including SIFT, Haar-like features with AdaBoost classifiers, active learning and appearance-based approaches have been used, but all of them suffer from high time complexity. To increase the accuracy of hand gesture recognition, a combined feature selection approach is adopted. This thesis proposes a new approach that recognizes sign language gestures in a real-time environment: a hybrid feature set that combines the advantages of SIFT, Principal Component Analysis and histograms to achieve a good recognition rate. Principal component analysis is introduced to raise the recognition rate and make the system resilient to viewpoint variations, and K-Nearest Neighbours (KNN) is used for the hybrid classification of single signed letters. In addition, the integration of a colour detection method is in progress to increase the accuracy further. The performance analysis of the proposed approach is presented together with experimental results, and a comparative study with other popular techniques shows better real-time efficiency and robustness.
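The final classification stage described above can be illustrated very simply: hybrid feature vectors (SIFT-derived, PCA-projected and histogram blocks concatenated) are classified with k-nearest neighbours. The sketch below uses placeholder random vectors in place of real gesture features, so the reported accuracy is meaningless; the shapes, split and k value are illustrative assumptions.

# Hedged sketch: KNN over concatenated (placeholder) hybrid gesture descriptors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((260, 96))                 # placeholder hybrid descriptors for 260 gesture images
y = rng.integers(0, 26, size=260)         # placeholder letter labels, one class per letter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy on placeholder data:", knn.score(X_te, y_te))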
Style APA, Harvard, Vancouver, ISO itp.
36

Lin, Hsin-Ping, i 林鑫平. "Detection of early-stage gastric cancer in endoscopy NBI images by using scale-invariant feature transform and support vector machine". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ky7476.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Yunlin University of Science and Technology
Department of Electrical Engineering
107
In this work, we use magnified narrow-band imaging (NBI) endoscopic images of the stomach as the data set; the training and test sets contain 66 and 60 images, respectively. We extract scale-invariant feature transform (SIFT) features to find abnormal regions of early gastric cancer. First, the region of interest is extracted from each image and overly bright or dark blocks are filtered out. The images are then segmented into partially overlapping blocks of different sizes, such as 40×40, 50×50, and 60×60 pixels. For each block, we compute SIFT features and cluster the feature vectors into a bag of visual words (BOVW), so that each image can be represented as a histogram of visual words and used as input for classifier training. In our experiments, the highest average precision and recall reached 85% and 81%, respectively.
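A generic bag-of-visual-words pipeline of the kind described above can be sketched as follows: cluster SIFT descriptors from the training blocks into a vocabulary, turn each block into a normalized word histogram, and train an SVM on the histograms. The descriptor arrays below are random placeholders, and the 50-word vocabulary and RBF kernel are illustrative assumptions, not the thesis settings.

# Hedged sketch: bag-of-visual-words histograms over SIFT descriptors, classified with an SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histogram(descriptors, vocabulary):
    words = vocabulary.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# descriptor_sets: one (n_i, 128) SIFT array per training image block (placeholders here).
rng = np.random.default_rng(1)
descriptor_sets = [rng.random((40, 128)) for _ in range(66)]
labels = rng.integers(0, 2, size=66)      # placeholder labels: 0 = normal, 1 = lesion

vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(np.vstack(descriptor_sets))
X = np.array([bovw_histogram(d, vocab) for d in descriptor_sets])
clf = SVC(kernel="rbf").fit(X, labels)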
Style APA, Harvard, Vancouver, ISO itp.
37

Werkhoven, Shaun. "Improving interest point object recognition". Thesis, 2010. http://hdl.handle.net/1959.13/804109.

Pełny tekst źródła
Streszczenie:
Research Doctorate - Doctor of Philosophy (PhD)
Vision is a fundamental ability for humans, essential to a wide range of activities: the ability to see underpins almost all tasks of our day-to-day life, and it is exercised by people almost effortlessly. In spite of this, it is an ability that is still poorly understood and that machines have been able to reproduce only to a very limited degree. This work grows out of a belief that substantial progress is currently being made in understanding visual recognition processes; advances in algorithms and computing power have recently produced clear and measurable gains in recognition performance. Many of the key advances in recognizing objects relate to the recognition of key points or interest points, and such image primitives now underpin a wide array of computer vision tasks such as object recognition, structure from motion, and navigation. The object of this thesis is to find ways to improve the performance of such interest point methods. The most popular interest point methods, such as SIFT (Scale Invariant Feature Transform), consist of a descriptor, a feature detector and a standard distance metric; this thesis outlines methods whereby all of these elements can be varied to deliver higher performance in some situations, with SIFT as the performance standard to which we often refer. Typically, the standard Euclidean distance metric is used with interest points, but it fails to take account of the specific geometric nature of the information in the descriptor vector. By varying this distance measure in a way that accounts for its geometry, we show that performance improvements can be obtained, and we investigate whether this can be done in an effective and computationally efficient way. The use of sparse detectors or feature points is a mainstay of current interest point methods, yet such an approach is questionable for class recognition, since the most discriminative points may not be selected by the detector. We therefore develop a dense interest point method, whereby interest points are calculated at every point; this requires a low-dimensional descriptor to be computationally feasible, and we also use aggressive approximate nearest-neighbour methods. These dense features can be used for both point matching and class recognition, and the experimental results for each show that the method is competitive with, and in some cases superior to, traditional interest point methods. Having formed dense descriptors, we obtain a multi-dimensional quantity at every point; each of these can be regarded as a new image to which descriptors can be applied again, giving higher-level descriptors, 'descriptors upon descriptors'. Experimental results demonstrate that this improves matching performance. Standard image databases are used for the experiments, and the application of these methods to tasks such as navigation (structure from motion) and object class recognition is discussed.
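Approximate nearest-neighbour matching of the kind mentioned above is commonly done with a FLANN kd-tree index, trading a little accuracy for large speed gains on dense, low-dimensional descriptors. The sketch below shows this trade-off with OpenCV's FLANN matcher; the descriptor arrays are random placeholders, and the tree count, check count and ratio threshold are illustrative assumptions rather than the thesis's settings.

# Hedged sketch: approximate nearest-neighbour matching of dense descriptors with FLANN.
import cv2
import numpy as np

rng = np.random.default_rng(0)
des_query = rng.random((5000, 32)).astype(np.float32)    # placeholder low-dimensional descriptors
des_train = rng.random((5000, 32)).astype(np.float32)

index_params = dict(algorithm=1, trees=4)                # FLANN_INDEX_KDTREE = 1
search_params = dict(checks=64)                          # fewer checks: faster, more approximate
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des_query, des_train, k=2)
good = [m for m, n in matches if m.distance < 0.8 * n.distance]
print(f"{len(good)} matches pass the ratio test")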
Style APA, Harvard, Vancouver, ISO itp.
38

Γράψα, Ιωάννα. "Ανάπτυξη τεχνικών αντιστοίχισης εικόνων με χρήση σημείων κλειδιών". Thesis, 2012. http://hdl.handle.net/10889/5500.

Pełny tekst źródła
Streszczenie:
Stitching multiple images together to create high-resolution panoramas is one of the most popular consumer applications of image registration and blending, and this work uses feature-based registration algorithms for the task. The first step is to extract distinctive keypoints from every image that are invariant to image scale and rotation, using the SIFT (Scale Invariant Feature Transform) algorithm. We then look for the first pair of images to stitch: to decide whether two images can be stitched, their SIFT keypoints are matched. Once an initial set of feature correspondences has been computed, we need the subset that produces a high-accuracy alignment; RANdom SAmple Consensus (RANSAC) provides it by estimating the motion model between the two images, a homography in our case. If there are enough corresponding points, the images are joined. Simply pasting them together would leave clearly visible seams, so the Laplacian pyramid method is used for blending. The procedure is repeated, each time taking the panorama built so far as the initial image, until the final panorama is produced.
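The seam-hiding step can be sketched as a standard Laplacian-pyramid blend of two already aligned images: build Gaussian pyramids of both images and of a blend mask, blend each band-pass level separately, and collapse the pyramid. The function below is a minimal single-channel version; the pyramid depth and the choice of mask are illustrative assumptions.

# Hedged sketch: Laplacian-pyramid blending of two aligned single-channel float32 images.
import cv2
import numpy as np

def laplacian_blend(img_a, img_b, mask, levels=5):
    """img_a, img_b: aligned float32 images; mask: float32 in [0, 1], 1 where img_a is kept."""
    ga, gb, gm = [img_a], [img_b], [mask]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    blended = None
    for i in range(levels, -1, -1):
        if i == levels:                           # coarsest level: blend the Gaussian residue
            blended = gm[i] * ga[i] + (1 - gm[i]) * gb[i]
        else:                                     # band-pass (Laplacian) levels
            size = (ga[i].shape[1], ga[i].shape[0])
            la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)
            lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
            band = gm[i] * la + (1 - gm[i]) * lb
            blended = cv2.pyrUp(blended, dstsize=size) + band
    return blended

A mask that is 1 on one side of the seam and 0 on the other, smoothed implicitly by its own pyramid, is what hides the exposure difference between the two source images.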
Style APA, Harvard, Vancouver, ISO itp.
39

Rosner, Jakub. "Methods of parallelizing selected computer vision algorithms for multi-core graphics processors". Rozprawa doktorska, 2015. https://repolis.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=28390.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Rosner, Jakub. "Methods of parallelizing selected computer vision algorithms for multi-core graphics processors". Rozprawa doktorska, 2015. https://delibra.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=28390.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Prasad, S. "Signal Processing Algorithms For Digital Image Forensics". Thesis, 2008. http://hdl.handle.net/2005/655.

Pełny tekst źródła
Streszczenie:
Availability of digital cameras in various forms and user-friendly image editing software has enabled people to create and manipulate digital images easily. While image editing can be used to enhance image quality, it can also be used to tamper with images for malicious purposes, so it is important to question the originality of digital images. Digital image forensics deals with the development of algorithms and systems to detect tampering in digital images. This thesis presents some simple algorithms that can be used to detect such tampering; of the various kinds of image forgery possible, the discussion is restricted to photo compositing (photomontage) and copy-paste forgeries. When creating a photomontage it is very likely that one of the images has to be resampled, introducing an inconsistency in some of its underlying characteristics, so detecting resampling in an image gives a clue as to whether it has been tampered with. Two pixel-domain techniques to detect resampling are presented. The first exploits the periodic zeros that occur in the second differences due to interpolation during resampling; it requires a special condition on the resampling factor to be met. The second is based on the periodic zero-crossings that occur in the second differences and does not require any special condition on the resampling factor. Since this is an important property of resampling, the robustness of the technique against mild counter-attacks such as JPEG compression and additive noise is studied, and the property is used repeatedly throughout the thesis. It is well known that interpolation is essentially low-pass filtering, so a photomontage consisting of resampled and non-resampled portions shows an inconsistency in high-frequency content that can be revealed by simple high-pass filtering. This fact is also exploited to detect photomontaging: one approach performs block-wise DCT and reconstructs the image using only a few high-frequency coefficients, while another decomposes the image using wavelets and reconstructs it using only the diagonal detail coefficients; in both cases mere visual inspection reveals the forgery. The second part of the thesis concerns tamper detection in colour filter array (CFA) interpolated images. Digital cameras employ Bayer filters to capture the RGB components efficiently; the outputs are sub-sampled versions of the R, G and B components, which are completed using demosaicing algorithms. It is shown that demosaicing of the colour components is equivalent to resampling the image by a factor of two, so CFA-interpolated images contain periodic zero-crossings in their second differences. The presence of these zero-crossings is demonstrated experimentally on images captured with four digital cameras of different brands. When such an image is tampered with, the periodic zero-crossings are destroyed and the tampering can be detected; the utility of zero-crossings in detecting various kinds of forgery on CFA-interpolated images is discussed. The final part of the thesis is a technique to detect copy-paste forgery in images.
Generally, when an object or a portion of an image has to be erased, the easiest way is to copy a portion of the background from the same image and paste it over the object. In such a case there are two pixel-wise identical regions in the same image, which, when detected, serve as a clue of tampering. The use of the Scale Invariant Feature Transform (SIFT) in detecting this kind of forgery is studied, and certain modifications that need to be made to the image for SIFT to work effectively are proposed. Throughout the thesis, the importance of human intervention in making the final decision about the authenticity of an image is highlighted, and it is concluded that the presented techniques can effectively assist the decision-making process.
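The copy-move check can be sketched as matching an image's SIFT descriptors against themselves, discarding the trivial self-match and spatially close pairs, and flagging images with many strong duplicate regions. The file name and the distance and ratio thresholds below are illustrative assumptions, not values from the thesis.

# Hedged sketch: copy-move (copy-paste) detection by self-matching SIFT descriptors.
import cv2
import numpy as np

img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)        # assumed input image
kp, des = cv2.SIFT_create().detectAndCompute(img, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des, des, k=3)                    # first hit is the keypoint itself

suspicious = []
for m in matches:
    if len(m) < 3:
        continue
    best, second = m[1], m[2]                                # skip the zero-distance self-match
    p1, p2 = np.array(kp[best.queryIdx].pt), np.array(kp[best.trainIdx].pt)
    # Strong, distinctive match far from its own location suggests a duplicated region.
    if best.distance < 0.5 * second.distance and np.linalg.norm(p1 - p2) > 30:
        suspicious.append((best.queryIdx, best.trainIdx))
print(f"{len(suspicious)} keypoint pairs look like copied regions")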
Style APA, Harvard, Vancouver, ISO itp.
42

Prasad, S. "Signal Processing Algorithms For Digital Image Forensics". Thesis, 2007. https://etd.iisc.ac.in/handle/2005/655.

Pełny tekst źródła
Streszczenie:
Availability of digital cameras in various forms and user-friendly image editing software has enabled people to create and manipulate digital images easily. While image editing can be used to enhance image quality, it can also be used to tamper with images for malicious purposes, so it is important to question the originality of digital images. Digital image forensics deals with the development of algorithms and systems to detect tampering in digital images. This thesis presents some simple algorithms that can be used to detect such tampering; of the various kinds of image forgery possible, the discussion is restricted to photo compositing (photomontage) and copy-paste forgeries. When creating a photomontage it is very likely that one of the images has to be resampled, introducing an inconsistency in some of its underlying characteristics, so detecting resampling in an image gives a clue as to whether it has been tampered with. Two pixel-domain techniques to detect resampling are presented. The first exploits the periodic zeros that occur in the second differences due to interpolation during resampling; it requires a special condition on the resampling factor to be met. The second is based on the periodic zero-crossings that occur in the second differences and does not require any special condition on the resampling factor. Since this is an important property of resampling, the robustness of the technique against mild counter-attacks such as JPEG compression and additive noise is studied, and the property is used repeatedly throughout the thesis. It is well known that interpolation is essentially low-pass filtering, so a photomontage consisting of resampled and non-resampled portions shows an inconsistency in high-frequency content that can be revealed by simple high-pass filtering. This fact is also exploited to detect photomontaging: one approach performs block-wise DCT and reconstructs the image using only a few high-frequency coefficients, while another decomposes the image using wavelets and reconstructs it using only the diagonal detail coefficients; in both cases mere visual inspection reveals the forgery. The second part of the thesis concerns tamper detection in colour filter array (CFA) interpolated images. Digital cameras employ Bayer filters to capture the RGB components efficiently; the outputs are sub-sampled versions of the R, G and B components, which are completed using demosaicing algorithms. It is shown that demosaicing of the colour components is equivalent to resampling the image by a factor of two, so CFA-interpolated images contain periodic zero-crossings in their second differences. The presence of these zero-crossings is demonstrated experimentally on images captured with four digital cameras of different brands. When such an image is tampered with, the periodic zero-crossings are destroyed and the tampering can be detected; the utility of zero-crossings in detecting various kinds of forgery on CFA-interpolated images is discussed. The final part of the thesis is a technique to detect copy-paste forgery in images.
Generally, when an object or a portion of an image has to be erased, the easiest way is to copy a portion of the background from the same image and paste it over the object. In such a case there are two pixel-wise identical regions in the same image, which, when detected, serve as a clue of tampering. The use of the Scale Invariant Feature Transform (SIFT) in detecting this kind of forgery is studied, and certain modifications that need to be made to the image for SIFT to work effectively are proposed. Throughout the thesis, the importance of human intervention in making the final decision about the authenticity of an image is highlighted, and it is concluded that the presented techniques can effectively assist the decision-making process.
Style APA, Harvard, Vancouver, ISO itp.
43

Lourenço, António Miguel. "Techniques for keypoint detection and matching between endoscopic images". Master's thesis, 2009. http://hdl.handle.net/10316/11318.

Pełny tekst źródła
Streszczenie:
The detection and description of local image features is fundamental for different computer vision applications, such as object recognition, image content retrieval, and structure from motion. In the last few years the topic has attracted the attention of many authors, and several methods and techniques are currently available in the literature. The SIFT algorithm, proposed in [2], gained particular prominence because of its simplicity and its invariance to common image transformations such as scaling and rotation. Unfortunately, the approach is not able to cope with the non-linear image deformations caused by radial lens distortion. Invariance to radial distortion is highly relevant for applications that either require a wide field of view (e.g. panoramic vision) or employ cameras with specific optical arrangements enabling the visualization of small spaces and cavities (e.g. medical endoscopy). One of the objectives of this thesis is to understand how radial distortion affects the detection and description of keypoints by the SIFT algorithm. We perform a set of experiments that clearly show that distortion affects both the repeatability of detection and the invariance of the SIFT description; these results are analysed in detail and explained from a theoretical viewpoint. In addition, we propose a novel approach for the detection and description of stable local features in images with radial distortion. Detection is carried out in a scale-space image representation built with an adaptive Gaussian filter that takes distortion into account, and feature description is performed after implicit gradient correction using the derivative chain rule. Our approach only requires a rough model of the radial distortion function and, for moderate levels of distortion, it outperforms applying the SIFT algorithm after explicit image correction.
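As an illustration of what a "rough model of the radial distortion function" can look like, the sketch below applies a first-order division model to pixel coordinates; this is only one common choice of model, and the distortion coefficient, image centre and sample points are illustrative assumptions rather than values from the thesis.

# Hedged sketch: first-order division model mapping distorted points toward undistorted ones.
import numpy as np

def undistort_points(points_d, centre, xi=-1e-6):
    """Division model: a distorted radius r_d maps to r_u = r_d / (1 + xi * r_d**2)."""
    v = points_d - centre
    r2 = np.sum(v ** 2, axis=1, keepdims=True)
    return centre + v / (1.0 + xi * r2)

centre = np.array([320.0, 240.0])                          # assumed distortion centre
pts = np.array([[600.0, 400.0], [330.0, 245.0]])           # assumed sample points
print(undistort_points(pts, centre))                       # peripheral point moves outward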
Style APA, Harvard, Vancouver, ISO itp.
