
Theses on the topic "Keypoint-based"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles


Consult the top 15 theses for your research on the topic "Keypoint-based".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Dragon, Ralf [Verfasser]. "Keypoint-Based Object Segmentation / Ralf Dragon". Aachen: Shaker, 2013. http://d-nb.info/1051573521/34.

Full text
2

Zhao, Mingchang. "Keypoint-Based Binocular Distance Measurement for Pedestrian Detection System on Vehicle". Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31693.

Full text
Abstract:
The Pedestrian Detection System (PDS) has become a significant area of research aimed at protecting pedestrians. Despite the large body of research, most current PDSs are designed to detect pedestrians without knowing their distance from the car. Yet a priori knowledge of the distance between a car and a pedestrian allows such a system to make the appropriate decision to avoid a collision. Typical methods of distance measurement require additional equipment (e.g., radar) which, unfortunately, cannot identify objects, and traditional stereo-vision methods have poor precision at long range. In this thesis, we use keypoint-based feature extraction to generate the parallax of a detected object in a binocular vision system, instead of computing a disparity map. Our method improves tolerance to the instability of a moving vehicle, and it also enables binocular measurement systems to be equipped with zoom lenses and to use a greater distance between the cameras. In addition, we design a crossover re-detection and tracking method to reinforce the robustness of the system (one camera helps the other reduce detection errors). Our system measures the distance between cars and pedestrians, and it can also be used to measure the distance between cars and other objects such as traffic signs or animals. In a real-world experiment, the system shows a 7.5% margin of error in outdoor, long-range conditions.
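A minimal Python sketch of the parallax idea summarized above, assuming OpenCV, an already rectified stereo pair, and a known focal length (pixels) and baseline (metres); the SIFT matcher, ratio threshold, and median aggregation are illustrative assumptions, not the thesis' implementation.

import cv2
import numpy as np

def keypoint_distance(left_img, right_img, f_px, baseline_m, ratio=0.75):
    # Detect and match SIFT keypoints between the two rectified views.
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left_img, None)
    kp_r, des_r = sift.detectAndCompute(right_img, None)
    if des_l is None or des_r is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_l, des_r, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    # Parallax = horizontal displacement of a matched keypoint between views.
    disp = np.array([kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in good])
    disp = disp[disp > 0]
    if disp.size == 0:
        return None
    return f_px * baseline_m / np.median(disp)  # depth from the median parallax

Restricting the matched keypoints to the bounding box of a detected pedestrian before taking the median would turn this into a per-object distance estimate.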
3

Марченко, Ігор Олександрович, Игорь Александрович Марченко, Ihor Oleksandrovych Marchenko, Сергій Олександрович Петров, Сергей Александрович Петров, Serhii Oleksandrovych Petrov and A. A. Pidkuiko. "Usage of keypoint descriptors based algorithms for real-time objects localization". Thesis, Центральноукраїнський національний технічний університет, 2018. http://essuir.sumdu.edu.ua/handle/123456789/68603.

Full text
Abstract:
In order to achieve a high level of security in our everyday life we produce a huge amount of data. A significant part of this information is presented as videos, sounds, or images. A computer is used to extract useful information from the raw data [1]. Pattern recognition is a branch of computer vision that allows us to extract information from images [2] and videos. Information extraction is a crucial problem of pattern recognition. This problem is divided into the following branches: object presence, object localization, and object classification.
4

MAZZINI, DAVIDE. "Local Detectors and Descriptors for Object and Scene Recognition". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199003.

Full text
Abstract:
The aim of this thesis is to study two main categories of algorithms for object detection and their use in particular applications. The first category investigated concerns keypoint-based approaches. Several comparative experiments are performed within the standard testing pipeline of the MPEG CDVS Test Model, and an extended pipeline that makes use of color information is proposed. The second category of object detectors investigated is based on Convolutional Neural Networks. Two applications of Convolutional Neural Networks for object recognition are addressed in particular. The first concerns logo recognition. Two classification pipelines are designed and tested on a real-world dataset of images collected from Flickr. The first architecture makes use of a pre-trained network as feature extractor and achieves results comparable to keypoint-based approaches. The second architecture makes use of a tiny end-to-end trained Neural Network that outperforms state-of-the-art keypoint-based methods. The other application addressed is Painting Categorization. It consists of associating the author, assigning a painting to the school or art movement it belongs to, and categorizing the genre of the painting, e.g. landscape, portrait, illustration, etc. To tackle this problem, a novel multibranch and multitask Neural Network structure is proposed which benefits from the joint use of keypoint-based approaches and neural features. In both applications the use of data augmentation techniques to enlarge the training set is also investigated. In particular, for paintings, a neural style transfer algorithm is exploited to generate synthetic paintings to be used in training.
5

Bendale, Pashmina Ziparu. "Development and evaluation of a multiscale keypoint detector based on complex wavelets". Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/252226.

Full text
6

Kemp, Neal. "Content-Based Image Retrieval for Tattoos: An Analysis and Comparison of Keypoint Detection Algorithms". Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/784.

Full text
Abstract:
The field of biometrics has grown significantly in the past decade due to an increase in interest from law enforcement. Law enforcement officials are interested in adding tattoos, alongside irises and fingerprints, to their toolbox of biometrics. They often use these biometrics to aid in the identification of victims and suspects. Like facial recognition, tattoos have seen a spike in attention over the past few years. Tattoos, however, have not received as much attention from researchers. This lack of attention stems from the difficulties inherent in matching tattoos, which include image quality, affine transformation, warping of tattoos around the body, and, in some cases, excessive body hair covering the tattoo. We will utilize content-based image retrieval to find a tattoo in a database, which means using one image to query against the database in order to find similar tattoos. We will focus specifically on the keypoint detection process in computer vision. In addition, we are interested in finding not just exact matches but also similar tattoos. We will conclude that the ORB detector extracts the most relevant features and thus offers the best chance of yielding an accurate result from content-based image retrieval for tattoos. However, we will also show that even ORB will not work on its own in a content-based image retrieval system; other processes will have to be involved in order to return accurate matches. We will give recommendations on next steps to create a better tattoo retrieval system.
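A minimal sketch of the kind of ORB-based comparison discussed above, assuming OpenCV; ranking database images by the raw count of ratio-test matches is an illustrative simplification of a content-based retrieval pipeline, not the thesis' system.

import cv2

def rank_by_orb_matches(query_img, db_images, ratio=0.8):
    # db_images: dict mapping an image name to a grayscale tattoo image.
    orb = cv2.ORB_create(nfeatures=1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    _, q_des = orb.detectAndCompute(query_img, None)
    scores = []
    for name, img in db_images.items():
        _, des = orb.detectAndCompute(img, None)
        if q_des is None or des is None:
            scores.append((name, 0))
            continue
        matches = bf.knnMatch(q_des, des, k=2)
        good = [m for m in matches
                if len(m) == 2 and m[0].distance < ratio * m[1].distance]
        scores.append((name, len(good)))
    return sorted(scores, key=lambda s: s[1], reverse=True)  # best match first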
7

Hansen, Peter Ian. "Wide-baseline keypoint detection and matching with wide-angle images for vision based localisation". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/37667/1/Peter_Hansen_Thesis.pdf.

Full text
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. When compared to traditional narrow field of view perspective cameras, more accurate estimates of camera egomotion can be found using the images obtained with wide-angle cameras. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increased popularity in the use of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions in a scene taken at very different viewpoints, and this makes them suited for visual place recognition. However, the ability to estimate the camera egomotion and recognise the same scene in two different images is dependent on the ability to reliably detect and describe the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images. Applying algorithms designed for perspective images directly to wide-angle images is problematic as no account is made for the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each of the variants is required to find the scale-space representation of an image on the sphere, and they differ in the approaches they used to do this. Extensive experiments using real and synthetically generated wide-angle images are used to validate the two new keypoint detectors and the method of keypoint description. The best of these two new keypoint detectors is applied to vision-based localisation tasks including visual odometry and visual place recognition using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed which attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
8

Buck, Robert. "Cluster-Based Salient Object Detection Using K-Means Merging and Keypoint Separation with Rectangular Centers". DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4631.

Full text
Abstract:
The explosion of internet traffic, the advent of social media sites such as Facebook and Twitter, and the increased availability of digital cameras have saturated life with images and videos. Never before has it been so important to sift quickly through large amounts of digital information. Salient Object Detection (SOD) is a computer vision topic that develops methods to locate important objects in pictures. SOD has proven helpful in numerous applications such as image forgery detection and traffic sign recognition. In this thesis, I outline a novel SOD technique that automatically isolates important objects from the background in images.
9

Liu, Wen-Pin, and 劉文彬. "A face recognition system based on keypoint exclusion and dual keypoint detection". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/02572728630414645978.

Full text
Abstract:
Master's thesis
Ming Chuan University
Department of Computer and Communication Engineering
103
This thesis presents a face recognition system based on keypoint exclusion and dual keypoint detection. There are three major problems with conventional SIFT (Scale Invariant Feature Transform). (1) It uses a single type of keypoint detector; for small images the number of detected keypoints may be too small, which causes difficulties in image matching. (2) Each keypoint of the test image is matched independently against all keypoints of the training images, which is very time consuming. (3) Only similarities between descriptors are compared, which may still cause some false matches. To increase the number of keypoints, SIFT and FAST (Features from Accelerated Segment Test) keypoints are combined for face image matching. Since the FAST detector has no corresponding descriptor, the LoG (Laplacian of Gaussian) function with automatic scale selection is applied at each FAST keypoint to find a proper scale and the corresponding SIFT descriptor. In addition, based on the similarity between the locations of features on human faces, three keypoint exclusion methods (relative location, orientation, and scale) are proposed to eliminate impossible keypoints before descriptor matching. In this way, the number of false matches can be reduced, higher recognition rates can be obtained, and the matching time is also reduced. The proposed algorithms are evaluated on the ORL and Yale face databases; from each database, 10 persons are selected, with 10 images per person. The proposed method shows significant improvements in recognition rates over conventional methods.
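A minimal sketch of the dual-detector idea summarized above, assuming OpenCV and SciPy: FAST corners are appended to the SIFT keypoint set, each FAST corner receives a characteristic scale from a scale-normalized Laplacian-of-Gaussian (LoG) search, and SIFT descriptors are then computed at all keypoints. The scale range and FAST threshold are illustrative assumptions.

import cv2
import numpy as np
from scipy.ndimage import gaussian_laplace

def dual_keypoints_with_descriptors(gray, sigmas=(1.2, 1.6, 2.2, 3.0, 4.0)):
    sift = cv2.SIFT_create()
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps = list(sift.detect(gray, None))
    # Scale-normalised LoG stack: response(sigma) = sigma^2 * |LoG(sigma)|.
    img = gray.astype(np.float32) / 255.0
    stack = np.stack([(s ** 2) * np.abs(gaussian_laplace(img, s)) for s in sigmas])
    for kp in fast.detect(gray, None):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        best = int(np.argmax(stack[:, y, x]))  # automatic scale selection
        kp.size = 2.0 * sigmas[best]           # diameter handed to the SIFT descriptor
        kps.append(kp)
    kps, descs = sift.compute(gray, kps)       # SIFT descriptors at all keypoints
    return kps, descs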
10

Chen, Yi-An, and 陳翊安. "CREAK: Color-based REtinA Keypoint descriptor". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/96ke4e.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Multimedia Engineering
104
Feature matching between images is key to many computer vision applications, and effective feature matching requires effective feature description. Recently, binary descriptors used to describe feature points have been attracting increasing attention because of their low computational complexity and small memory requirements. However, most binary descriptors are based on intensity comparisons of grayscale images and do not consider color information. In this work, a novel binary descriptor inspired by the human retina is proposed, which considers not only the gray values of pixels but also color information. Experimental results show that the proposed feature descriptor requires less storage space while achieving a better precision level than other popular binary descriptors. In addition, the proposed feature descriptor has the fastest matching speed among all the descriptors under comparison, which makes it suitable for real-time applications.
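A toy sketch of a color-aware binary descriptor in the spirit of the abstract above; it is not the CREAK design. Point pairs sampled on concentric rings around a keypoint are compared per color channel, and descriptors are matched by Hamming distance. The sampling pattern, bit count, and pair selection are all assumptions.

import numpy as np

def ring_pattern(radii=(3, 6, 12, 20), per_ring=8):
    # A crude retina-like layout: one centre point plus points on concentric rings.
    pts = [(0.0, 0.0)]
    for r in radii:
        for a in np.linspace(0, 2 * np.pi, per_ring, endpoint=False):
            pts.append((r * np.cos(a), r * np.sin(a)))
    return np.array(pts)

def color_binary_descriptor(img_bgr, xy, pattern, pairs):
    h, w = img_bgr.shape[:2]
    x, y = xy
    sample = lambda p: img_bgr[int(np.clip(y + p[1], 0, h - 1)),
                               int(np.clip(x + p[0], 0, w - 1))].astype(int)
    vals = np.array([sample(p) for p in pattern])          # one (B, G, R) triple per point
    bits = [int(vals[i, c] > vals[j, c]) for i, j, c in pairs]
    return np.packbits(bits)                               # compact binary string

def hamming(d1, d2):
    return int(np.unpackbits(d1 ^ d2).sum())

# Example pair list: 256 random point pairs, each compared on a random colour channel.
rng = np.random.default_rng(0)
n_pts = len(ring_pattern())
pairs = list(zip(rng.integers(0, n_pts, 256),
                 rng.integers(0, n_pts, 256),
                 rng.integers(0, 3, 256)))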
11

Tvoroshenko, I., and O. Babochkin. "Object identification method based on image keypoint descriptors". Thesis, 2021. https://openarchive.nure.ua/handle/document/16193.

Full text
12

Syu, Jhih-Wei, and 許智維. "A Keypoint Detector Based on Local Contrast Intensity Images". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/24007133216259018431.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Communications Engineering
98
Corners, junctions, and terminals are prominent local features in images; they are called keypoints. Keypoint detection is a vital step in many applications such as pattern recognition and image registration. The purpose of this thesis is to develop a keypoint detector based on local contrast intensity. Initially, an input image is enhanced by a compressive mapping curve and then transformed into a line-type image by computing the absolute local contrast. Subsequently, the local contrast intensity image is filtered with multi-scale, multi-orientation Gaussian second-order derivative filters, and the filter outputs are used to detect high-curvature points. False keypoints that occur on linear edges or in noisy texture areas are eliminated by an automatic thresholding scheme. Finally, the performance of the proposed method is evaluated with both the receiver operating characteristic curve and the recall-precision curve, and it is compared with other methods.
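A rough sketch of the general pipeline described above, using NumPy and SciPy; the compressive mapping, window sizes, scales, orientation handling, and threshold are assumptions rather than the thesis' exact detector.

import numpy as np
from scipy import ndimage

def contrast_keypoints(gray, sigmas=(1.5, 3.0, 6.0), n_orient=4, thresh=0.2):
    img = np.log1p(gray.astype(np.float64))        # compressive mapping curve
    img /= img.max() + 1e-9
    contrast = np.abs(img - ndimage.uniform_filter(img, size=5))  # absolute local contrast
    response = np.zeros_like(contrast)
    for s in sigmas:
        for k in range(n_orient):
            ang = np.degrees(k * np.pi / n_orient)
            rot = ndimage.rotate(contrast, ang, reshape=False, order=1)
            d2 = ndimage.gaussian_filter(rot, s, order=(2, 0))    # 2nd derivative along one axis
            back = ndimage.rotate(d2, -ang, reshape=False, order=1)
            response = np.maximum(response, np.abs(back))
    # Keep local maxima of the filter response above a global threshold.
    maxima = response == ndimage.maximum_filter(response, size=7)
    ys, xs = np.nonzero(maxima & (response > thresh * response.max()))
    return list(zip(xs, ys))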
13

Chen, Kuan-Yu, and 陳冠宇. "Keypoint Selection for Bag-of-Words Based Image Classification". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/72145970597038218165.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Information Management
100
To search images in large image databases, image retrieval is the major technique for retrieving similar images based on users' queries. In order to allow users to provide keyword-based queries, automatically annotating images with keywords has been extensively studied. In particular, BOW (Bag of Words) and SPM (Spatial Pyramid Matching) are two well-known methods for representing image content as feature descriptors. To extract the BOW or SPM features, keypoints must first be detected in each image. However, the number of detected keypoints is usually very large, and some of them are unhelpful for describing the image content, such as background keypoints and keypoints that look similar across different classes. In addition, the computational cost of the vector quantization step depends heavily on the number of detected keypoints. Therefore, this thesis introduces a new algorithm called IKS (Iterative Keypoint Selection), whose aim is to select representative keypoints for generating the BOW and SPM features. The main concept of IKS is to identify representative keypoints and use a distance criterion to select useful keypoints around them. IKS has two variants, IKS1 and IKS2, which differ in how the representative keypoints are identified: IKS1 randomly selects a keypoint from an image as the representative keypoint, whereas IKS2 uses k-means to generate cluster centroids and takes the keypoints closest to them as representatives. Experimental results on the Caltech101 and Caltech256 datasets demonstrate that keypoint selection by IKS1 and IKS2 allows the SVM classifier to provide better classification accuracy than the baseline BOW and SPM without keypoint selection. More specifically, IKS2 is more appropriate than IKS1 for image annotation, since it performs better than IKS1 on the larger dataset, Caltech256.
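A minimal sketch in the spirit of the IKS2 step described above, assuming scikit-learn and SIFT descriptors as input. The actual algorithm selects keypoints iteratively with a distance criterion; this sketch only shows how the keypoints closest to the k-means centroids can be kept as representatives.

import numpy as np
from sklearn.cluster import KMeans

def select_representative_keypoints(keypoints, descriptors, k=50):
    # keypoints: list of cv2.KeyPoint; descriptors: (N, 128) SIFT descriptor array.
    k = min(k, len(keypoints))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
    selected = sorted({int(np.argmin(np.linalg.norm(descriptors - c, axis=1)))
                       for c in km.cluster_centers_})
    return [keypoints[i] for i in selected], descriptors[selected]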
14

Chen, Ting-Kai, and 陳定楷. "Laser-Based SLAM Using Segmenting Keypoint Detection and B-SHOT Feature". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/53hp8q.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
106
Simultaneous localization and mapping (SLAM) is a basic and essential part of autonomous driving research. Environment information gathered from sensors is processed to derive a consistent estimate of the state of both the self-driving car and the environment. Many types of sensors have been used in SLAM research, including cameras and LiDAR. LiDAR provides precise depth information but suffers from sparsity compared to camera images. Two main approaches have been used in LiDAR-based SLAM: the direct method and modeling after segmentation. The direct method first extracts interesting points, such as edge or corner points, to reduce the size of the point cloud; ICP or Kalman-based filters are then applied to estimate the frame-to-frame transformation. Although this method can be adopted in every scenario, the quality of the estimation is hard to evaluate. Instead of directly using the original point cloud, the model-based method first segments the point cloud into subsets and then fits each subset with a defined model; the frame-to-frame transformation is estimated from the models. However, the model-based method struggles in environments that contain few well-defined models. In this thesis, a feature-based SLAM algorithm inspired by ORB-SLAM is proposed using only LiDAR data. In the proposed algorithm, unnecessary points, such as ground points and occluded edge points, are removed by a point cloud preprocessing module. Next, keypoints are selected according to their segment ratio and encoded with the B-SHOT feature descriptor. The frame-to-local-map transformation is then estimated from the B-SHOT features and refined by the iterative closest point algorithm. The experimental results show that the estimates of the proposed algorithm are consistent in the structured scenarios of the ITRI dataset.
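A minimal sketch of two of the steps mentioned above, assuming the Open3D library (the thesis describes its own preprocessing and B-SHOT matching): ground points are dropped with a RANSAC plane fit, and a frame-to-local-map transform is refined with point-to-point ICP.

import numpy as np
import open3d as o3d

def remove_ground(pcd, dist=0.2):
    # RANSAC plane fit; the dominant plane in a road scene is assumed to be the ground.
    _, inliers = pcd.segment_plane(distance_threshold=dist, ransac_n=3,
                                   num_iterations=200)
    return pcd.select_by_index(inliers, invert=True)   # keep only non-ground points

def refine_with_icp(frame, local_map, init=np.eye(4), max_corr=0.5):
    result = o3d.pipelines.registration.registration_icp(
        frame, local_map, max_corr, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation                        # refined 4x4 frame-to-map pose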
15

Chang, Keng-Hao, and 張耿豪. "6-DoF Object Pose Estimation Using Keypoint-based Segmentation in Occluded Environment". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/7rd758.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
107
Object pose estimation has been an important research topic due to its various applications, such as robot manipulation and augmented reality. In real-world applications, speed requirements and occlusion handling are two frequently addressed difficulties. Because of the high price of depth cameras and the limited research on them, traditional pose estimation methods emphasize using color images without depth information. With the development of depth cameras, this thesis proposes a 6-DoF object pose estimation method using RGB-D images. Inspired by bottom-up human pose estimation approaches, the proposed method treats an object as a collection of components in order to deal with occlusion. In addition to occlusion handling, a real-time 2D object recognition method is applied to achieve fast pose estimation. Unlike methods that use voting schemes in Hough space, the pose is estimated from the 3D information directly, which avoids the multiscale problem in pose estimation. Experiments are used to test the performance and runtime of the proposed method in occluded environments.
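A minimal sketch of one building block implied by estimating pose from 3D information directly, offered as an assumption rather than the thesis' method: recovering a rigid 6-DoF transform from matched 3D keypoints on the object model and in the observed RGB-D point cloud with the Kabsch algorithm.

import numpy as np

def rigid_transform_3d(model_pts, observed_pts):
    # model_pts, observed_pts: (N, 3) arrays of corresponding 3D points, N >= 3.
    mu_m, mu_o = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_o - R @ mu_m
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                        # maps model coordinates into the camera frame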
