Theses on the topic "Keypoint-based"
Listed below are 15 selected theses on the topic "Keypoint-based".
Dragon, Ralf. "Keypoint-Based Object Segmentation." Aachen: Shaker, 2013. http://d-nb.info/1051573521/34.
Zhao, Mingchang. "Keypoint-Based Binocular Distance Measurement for Pedestrian Detection System on Vehicle." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31693.
Marchenko, Ihor Oleksandrovych, Serhii Oleksandrovych Petrov, and A. A. Pidkuiko. "Usage of keypoint descriptors based algorithms for real-time objects localization." Thesis, Central Ukrainian National Technical University, 2018. http://essuir.sumdu.edu.ua/handle/123456789/68603.
Mazzini, Davide. "Local Detectors and Descriptors for Object and Scene Recognition." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199003.
Texte intégralThe aim of this thesis is to study two main categories of algorithms for object detection and their use in particular applications. The first category that is investigated concerns Keypoint-based approaches. Several comparative experiments are performed within the standard testing pipeline of the MPEG CDVS Test Model and an extended pipeline which make use of color information is proposed. The second category of object detectors that is investigated is based on Convolutional Neural Networks. Two applications of Convolutional Neural Networks for object recognition are in particular addressed. The first concerns logo recognition. Two classification pipelines are designed and tested on a real-world dataset of images collected from Flickr. The first architecture makes use of a pre-trained network as feature extractor and it achieves comparable results keypoint based approaches. The second architecture makes use of a tiny end-to-end trained Neural Network that outperformed state-of-the-art keypoint based methods. The other application addressed is Painting Categorization. It consists in associating the author, assigning a painting to the school or art movement it belongs to, and categorizing the genre of the painting, e.g. landscape, portrait, illustration etc. To tackle this problem, a novel multibranch and multitask Neural Network structure is proposed which benefit from joint use of keypoint-based approaches and neural features. In both applications the use of data augmentation techniques to enlarge the training set is also investigated. In particular for paintings, a neural style transfer algorithm is exploited for generating synthetic paintings to be used in training.
Bendale, Pashmina Ziparu. "Development and evaluation of a multiscale keypoint detector based on complex wavelets." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/252226.
Kemp, Neal. "Content-Based Image Retrieval for Tattoos: An Analysis and Comparison of Keypoint Detection Algorithms." Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/784.
Hansen, Peter Ian. "Wide-baseline keypoint detection and matching with wide-angle images for vision based localisation." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/37667/1/Peter_Hansen_Thesis.pdf.
Buck, Robert. "Cluster-Based Salient Object Detection Using K-Means Merging and Keypoint Separation with Rectangular Centers." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4631.
Texte intégralLiu, Wen-Pin, et 劉文彬. « A face recognition system based on keypoint exclusion and dual keypoint detection ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/02572728630414645978.
Texte intégral銘傳大學
電腦與通訊工程學系碩士班
103
This thesis presents a face recognition system based on keypoint exclusion and dual keypoint detection. There are three major problems with conventional SIFT (Scale Invariant Feature Transform). (1) It uses a single type of keypoint detector; for small images the number of detected keypoints may be too small, which causes difficulties in image matching. (2) Each keypoint of the test image is matched independently against all keypoints of the training images, which is very time consuming. (3) Only similarities between descriptors are compared, which may still cause false matches. To increase the number of keypoints, SIFT and FAST (Features from Accelerated Segment Test) keypoints are combined for face image matching. Since the FAST detector has no corresponding descriptor, the LoG (Laplacian of Gaussian) function with automatic scale selection is applied at each FAST keypoint to find a proper scale and the corresponding SIFT descriptor. In addition, based on the similarities between the locations of facial features, three keypoint exclusion methods (relative location, orientation, and scale) are proposed to eliminate impossible keypoints before descriptor matching. In this way the number of false matches is reduced, higher recognition rates are obtained, and matching time is also reduced. The proposed algorithms are evaluated on the ORL and Yale face databases; from each database, 10 persons are selected with 10 images per person. The proposed method shows significant improvements in recognition rates over conventional methods.
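A minimal OpenCV sketch of the dual-detector idea in this abstract is given below: SIFT and FAST keypoints are pooled and SIFT descriptors are computed at the FAST locations. The fixed keypoint size assigned to FAST points stands in for the LoG automatic scale selection, and the exclusion rules are reduced to a simple relative-location test; both are illustrative assumptions rather than the thesis's procedure.

    # Illustrative sketch only; parameters and the simplified exclusion rule are assumptions.
    import cv2

    img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    fast = cv2.FastFeatureDetector_create(threshold=20)

    kp_sift = sift.detect(img, None)
    kp_fast = fast.detect(img, None)
    for kp in kp_fast:
        kp.size = 7.0          # placeholder scale; the thesis derives it from a LoG response

    keypoints = list(kp_sift) + list(kp_fast)

    # Toy "relative location" exclusion: keep only keypoints inside a central face region.
    h, w = img.shape
    keypoints = [kp for kp in keypoints
                 if 0.1 * w < kp.pt[0] < 0.9 * w and 0.1 * h < kp.pt[1] < 0.9 * h]

    keypoints, descriptors = sift.compute(img, keypoints)
    print(len(keypoints), "keypoints described")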
Chen, Yi-An. "CREAK: Color-based REtinA Keypoint descriptor." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/96ke4e.
National Chiao Tung University, Institute of Multimedia Engineering, academic year 104 (2015).
Feature matching between images is key to many computer vision applications, and effective feature matching requires effective feature description. Recently, binary descriptors used to describe feature points have attracted increasing attention for their low computational complexity and small memory requirements. However, most binary descriptors are based on intensity comparisons on grayscale images and do not consider color information. In this thesis, a novel binary descriptor inspired by the human retina is proposed, which considers not only the gray values of pixels but also color information. Experimental results show that the proposed feature descriptor requires less storage space while achieving better precision than other popular binary descriptors. In addition, the proposed descriptor has the fastest matching speed among all descriptors under comparison, which makes it suitable for real-time applications.
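The sketch below is a toy stand-in for the descriptor described above, not the CREAK implementation: it builds a 256-bit descriptor from pairwise comparisons over both grayscale and color channels of a fixed-size patch and matches descriptors with the Hamming distance. The random sampling pattern is an assumption that replaces CREAK's retina-inspired pattern.

    # Toy color-aware binary descriptor; the sampling pattern is an assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    PAIRS = rng.integers(0, 32, size=(256, 2, 2))    # 256 point pairs inside a 32x32 patch
    CHANNELS = rng.integers(0, 3, size=128)          # fixed color channel for each color test

    def describe(patch_bgr):
        """32x32x3 uint8 patch around a keypoint -> packed 256-bit descriptor."""
        gray = patch_bgr.mean(axis=2)
        bits = []
        for (y1, x1), (y2, x2) in PAIRS[:128]:                        # intensity tests
            bits.append(gray[y1, x1] > gray[y2, x2])
        for k, ((y1, x1), (y2, x2)) in enumerate(PAIRS[128:]):        # color tests
            c = CHANNELS[k]
            bits.append(patch_bgr[y1, x1, c] > patch_bgr[y2, x2, c])
        return np.packbits(np.array(bits, dtype=np.uint8))

    def hamming(d1, d2):
        """Hamming distance between two packed descriptors."""
        return int(np.unpackbits(d1 ^ d2).sum())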
Tvoroshenko, I., and O. Babochkin. "Object identification method based on image keypoint descriptors." Thesis, 2021. https://openarchive.nure.ua/handle/document/16193.
Syu, Jhih-Wei. "A Keypoint Detector Based on Local Contrast Intensity Images." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/24007133216259018431.
Feng Chia University, Graduate Institute of Communications Engineering, academic year 98 (2009).
Corners, junctions, and line terminals represent prominent local features in images; they are called keypoints. Keypoint detection is a vital step in many applications such as pattern recognition and image registration. The purpose of this thesis is to develop a keypoint detector based on local contrast intensity. First, an input image is enhanced by a compressive mapping curve and then transformed into a line-type image by computing the absolute local contrast. Subsequently, the local contrast intensity image is filtered with multi-scale, multi-orientation Gaussian second-order derivative filters, and the filter outputs are used to detect high-curvature points. False keypoints occurring along straight edges or in noisy texture areas are eliminated by an automatic thresholding scheme. Finally, the performance of the proposed method is evaluated with both the receiver operating characteristic curve and the recall-precision curve, and compared with other methods.
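A rough sketch of the detector's core idea, under stated assumptions, follows: a local-contrast image is computed and filtered with Gaussian second-order derivatives at several scales, and the maximum response is kept. The compressive mapping, the multi-orientation filter bank, and the automatic thresholding step are omitted, and the filter and scale choices are illustrative.

    # Sketch of local-contrast + second-order Gaussian derivative filtering.
    import numpy as np
    from scipy import ndimage

    def local_contrast(img, size=3):
        """Absolute difference between each pixel and its local mean."""
        mean = ndimage.uniform_filter(img.astype(float), size)
        return np.abs(img - mean)

    def keypoint_response(img, sigmas=(1.0, 2.0, 4.0)):
        """Maximum over scales of axis-aligned Gaussian second-derivative responses."""
        contrast = local_contrast(img)
        responses = []
        for s in sigmas:
            dyy = ndimage.gaussian_filter(contrast, s, order=(2, 0))
            dxx = ndimage.gaussian_filter(contrast, s, order=(0, 2))
            responses.append(np.abs(dxx) + np.abs(dyy))
        return np.max(responses, axis=0)   # candidate keypoints are local maxima of this map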
Chen, Kuan-Yu. "Keypoint Selection for Bag-of-Words Based Image Classification." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/72145970597038218165.
National Central University, Graduate Institute of Information Management, academic year 100 (2011).
To search large image databases, image retrieval is the major technique for retrieving images similar to a user's query. To allow users to issue keyword-based queries, automatic annotation of images with keywords has been extensively studied. In particular, BOW (Bag of Words) and SPM (Spatial Pyramid Matching) are two well-known methods for representing image content as feature descriptors. To extract BOW or SPM features, keypoints must first be detected in each image. However, the number of detected keypoints is usually very large, and some of them, such as background keypoints or keypoints that look alike across different classes, are unhelpful for describing the image content. In addition, the computational cost of the vector quantization step depends heavily on the number of detected keypoints. This thesis therefore introduces a new algorithm called IKS (Iterative Keypoint Selection), whose aim is to select representative keypoints for generating BOW and SPM features. The main idea of IKS is to identify representative keypoints and then use a distance criterion to select useful keypoints. IKS has two variants according to how the representative keypoints are identified: IKS1 randomly selects a keypoint from an image as the representative keypoint, while IKS2 uses k-means to generate cluster centroids and takes the keypoints closest to them as representatives. Experimental results on the Caltech101 and Caltech256 datasets demonstrate that keypoint selection by IKS1 and IKS2 allows the SVM classifier to achieve better classification accuracy than the baseline BOW and SPM features without keypoint selection. Moreover, IKS2 is more suitable than IKS1 for image annotation, since it performs better on the larger dataset, Caltech256.
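A hedged sketch of an IKS2-style selection step follows; the function name, the distance rule, and the keep ratio are illustrative assumptions rather than the algorithm's exact formulation. Descriptors are clustered with k-means and the ones closest to their centroids are kept as representative keypoints.

    # IKS2-style keypoint selection sketch; parameters are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    def select_keypoints(descriptors, n_clusters=50, keep_ratio=0.3):
        """descriptors: (N, D) array of SIFT-like descriptors from one image."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(descriptors)
        # Distance of every descriptor to its own cluster centroid.
        dist = np.linalg.norm(descriptors - km.cluster_centers_[km.labels_], axis=1)
        keep = max(1, int(len(descriptors) * keep_ratio))
        return np.argsort(dist)[:keep]   # indices of the most representative keypoints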
Chen, Ting-Kai. "Laser-Based SLAM Using Segmenting Keypoint Detection and B-SHOT Feature." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/53hp8q.
National Taiwan University, Graduate Institute of Electrical Engineering, academic year 106 (2017).
Simultaneous localization and mapping (SLAM) is a basic and essential part of autonomous driving research. Environment information gathered from sensors is processed to derive a consistent estimate of the state of both the self-driving car and its environment. Many types of sensors have been used in SLAM research, including cameras and LiDAR. LiDAR provides precise depth information but suffers from sparsity compared to camera images. Two main methods are used in LiDAR-based SLAM: the direct method and modeling after segmentation. The direct method first extracts interesting points, such as edge or corner points, to reduce the point cloud size; ICP or Kalman-based filtering is then applied to estimate the frame-to-frame transformation. Although this method can be adopted in any scenario, the quality of the estimate is hard to evaluate. Instead of using the original point cloud directly, the model-based method first segments the point cloud into subsets, models each subset with a predefined model, and finally estimates the frame-to-frame transformation from the models. However, the model-based method performs poorly in environments that contain few well-defined models. In this thesis, a feature-based SLAM algorithm inspired by ORB-SLAM is proposed that uses only LiDAR data. In the proposed algorithm, unnecessary points, such as ground points and occluded edge points, are removed by a point cloud preprocessing module. Next, keypoints are selected according to their segment ratio and encoded with the B-SHOT feature descriptor. The frame-to-local-map transformation is then estimated from the B-SHOT features and refined by the iterative closest point algorithm. Experimental results show that the estimates of the proposed algorithm are consistent in the structured scenarios of the ITRI dataset.
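The sketch below illustrates only the final refinement step mentioned in the abstract, frame-to-local-map alignment with ICP via Open3D; the preprocessing, segment-ratio keypoint selection, and B-SHOT matching that produce the initial guess are not reproduced, and the distance threshold is an assumption.

    # ICP refinement of a frame-to-local-map pose; illustrative only.
    import numpy as np
    import open3d as o3d

    def refine_pose(frame_pts, map_pts, init_T, max_dist=0.5):
        """frame_pts, map_pts: (N, 3) arrays; init_T: 4x4 initial transform guess."""
        src = o3d.geometry.PointCloud()
        src.points = o3d.utility.Vector3dVector(np.asarray(frame_pts))
        dst = o3d.geometry.PointCloud()
        dst.points = o3d.utility.Vector3dVector(np.asarray(map_pts))
        result = o3d.pipelines.registration.registration_icp(
            src, dst, max_dist, init_T,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation   # refined 4x4 pose of the frame in the local map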
Chang, Keng-Hao. "6-DoF Object Pose Estimation Using Keypoint-based Segmentation in Occluded Environment." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/7rd758.
National Taiwan University, Graduate Institute of Electrical Engineering, academic year 107 (2018).
Object pose estimation has been an important research topic due to its various applications, such as robot manipulation and augmented reality. In real-world applications, speed requirements and occlusion handling are two difficulties that must often be addressed. Because of the high price of depth cameras and the comparatively limited research on them, traditional pose estimation methods have emphasized color images without depth information. With the development of depth cameras, this thesis proposes a 6-DoF object pose estimation method using RGB-D images. Inspired by bottom-up human pose estimation approaches, the proposed method treats an object as a collection of components in order to deal with occlusion. In addition to occlusion handling, a real-time 2D object recognition method is applied to achieve fast pose estimation. Unlike methods that use voting schemes in Hough space, the pose is estimated directly from 3D information, which avoids the multi-scale problem in pose estimation. Experiments evaluate the performance and runtime of the proposed method in occluded environments.
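As a minimal illustration of estimating a pose directly from 3D information, the sketch below recovers a rigid transform from matched 3D keypoints with the standard Kabsch/SVD least-squares alignment; it assumes the keypoint-based segmentation has already produced the correspondences and is not necessarily the estimator used in the thesis.

    # Rigid transform from matched 3D keypoints (Kabsch/SVD); illustrative only.
    import numpy as np

    def rigid_transform_3d(model_pts, scene_pts):
        """model_pts, scene_pts: (N, 3) matched keypoints -> (R, t) with scene ~ R @ model + t."""
        mu_m, mu_s = model_pts.mean(axis=0), scene_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (scene_pts - mu_s)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # correct a possible reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_s - R @ mu_m
        return R, t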