Journal articles on the topic 'Keypoint-based'

Consult the top 50 journal articles for your research on the topic 'Keypoint-based.'

1

Guan, Genliang, Zhiyong Wang, Shiyang Lu, Jeremiah Da Deng, and David Dagan Feng. "Keypoint-Based Keyframe Selection." IEEE Transactions on Circuits and Systems for Video Technology 23, no. 4 (April 2013): 729–34. http://dx.doi.org/10.1109/tcsvt.2012.2214871.

2

Tavakol, Ali, and Mohammad Soltanian. "Fast Feature-Based Template Matching, Based on Efficient Keypoint Extraction." Advanced Materials Research 341-342 (September 2011): 798–802. http://dx.doi.org/10.4028/www.scientific.net/amr.341-342.798.

Abstract:
In order to improve the performance of feature-based template matching techniques, several research papers have been published. Real-time applications require the computational complexity of keypoint matching algorithms to be as low as possible. In this paper, we propose a method to improve the keypoint detection stage of feature-based template matching algorithms. Our experimental results show that the proposed method outperforms existing keypoint matching techniques in terms of speed, keypoint stability and repeatability.
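For readers who want a concrete picture of the kind of keypoint-based template matching this abstract discusses, here is a minimal sketch using OpenCV's ORB detector and a cross-checked brute-force matcher. It is a generic illustration, not the authors' proposed method; the file names are placeholders.

```python
# Generic keypoint-based template matching with ORB (illustration only).
import cv2

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)           # fast binary keypoint detector/descriptor
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps mutual matches only.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches")
```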
3

Gu, Mingfei, Yinghua Wang, Hongwei Liu, and Penghui Wang. "PolSAR Ship Detection Based on a SIFT-like PolSAR Keypoint Detector." Remote Sensing 14, no. 12 (June 17, 2022): 2900. http://dx.doi.org/10.3390/rs14122900.

Abstract:
The detection of ships on the open sea is an important issue for both military and civilian fields. As an active microwave imaging sensor, synthetic aperture radar (SAR) is a useful device in marine supervision. To extract small and weak ships precisely in the marine areas, polarimetric synthetic aperture radar (PolSAR) data have been used more and more widely. We propose a new PolSAR ship detection method which is based on a keypoint detector, referred to as a PolSAR-SIFT keypoint detector, and a patch variation indicator in this paper. The PolSAR-SIFT keypoint detector proposed in this paper is inspired by the SAR-SIFT keypoint detector. We improve the gradient definition in the SAR-SIFT keypoint detector to adapt to the properties of PolSAR data by defining a new gradient based on the distance measurement of polarimetric covariance matrices. We present the application of PolSAR-SIFT keypoint detector to the detection of ship targets in PolSAR data by combining the PolSAR-SIFT keypoint detector with the patch variation indicator we proposed before. The keypoints extracted by the PolSAR-SIFT keypoint detector are usually located in regions with corner structures, which are likely to be ship regions. Then, the patch variation indicator is used to characterize the context information of the extracted keypoints, and the keypoints located on the sea area are filtered out by setting a constant false alarm rate threshold for the patch variation indicator. Finally, a patch centered on each filtered keypoint is selected. Then, the detection statistics in the patch are calculated. The detection statistics are binarized according to the local threshold set by the detection statistic value of the keypoint to complete the ship detection. Experiments on three data sets obtained from the RADARSAT-2 and AIRSAR quad-polarization data demonstrate that the proposed detector is effective for ship detection.
4

Boonsivanon, Krittachai, and Worawat Sa-Ngiamvibool. "A SIFT Description Approach for Non-Uniform Illumination and Other Invariants." Ingénierie des systèmes d'information 26, no. 6 (December 27, 2021): 533–39. http://dx.doi.org/10.18280/isi.260603.

Abstract:
A new, improved keypoint description technique for image-based recognition under rotation, viewpoint and non-uniform illumination changes is presented. The technique is relatively simple and rests on two procedures: keypoint detection and keypoint description. The keypoint detection procedure is based on the SIFT approach combined with Top-Hat filtering, morphological operations and average filtering, which allows it to segment targets from unevenly illuminated particle images. The keypoint description procedure is implemented using the Hu moment invariants, whose central moments are unchanged under image translations. The sensitivity, accuracy and precision rate were evaluated and compared on data sets drawn from a color image database with uniform and non-uniform illumination, viewpoint and rotation changes. The evaluation shows that the approach is superior to other SIFT variants under uniform illumination, non-uniform illumination and other situations, achieving a sensitivity of 100%, an accuracy of 83.33% and a precision rate of 80.00%. Comparisons to other SIFT approaches are also included.
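As a rough illustration of two generic building blocks named in this abstract, white Top-Hat filtering to suppress uneven illumination and Hu moment invariants as a translation-, scale- and rotation-invariant description, here is a short OpenCV sketch. The kernel size and input file name are assumptions, not values from the paper.

```python
import cv2
import numpy as np

img = cv2.imread("particles.png", cv2.IMREAD_GRAYSCALE)   # placeholder input

# White top-hat: original minus morphological opening, emphasising small bright targets.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

# Hu moments of the filtered image are invariant to translation, scale and rotation.
hu = cv2.HuMoments(cv2.moments(tophat)).flatten()
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)      # common log-scaling for readability
print(hu_log)
```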
5

Wu, Zhonghua, Guosheng Lin, and Jianfei Cai. "Keypoint based weakly supervised human parsing." Image and Vision Computing 91 (November 2019): 103801. http://dx.doi.org/10.1016/j.imavis.2019.08.005.

6

Ding, Xintao, Qingde Li, Yongqiang Cheng, Jinbao Wang, Weixin Bian, and Biao Jie. "Local keypoint-based Faster R-CNN." Applied Intelligence 50, no. 10 (April 28, 2020): 3007–22. http://dx.doi.org/10.1007/s10489-020-01665-9.

7

Cevahir, Ali, and Junji Torii. "High Performance Online Image Search with GPUs on Large Image Databases." International Journal of Multimedia Data Engineering and Management 4, no. 3 (July 2013): 24–41. http://dx.doi.org/10.4018/jmdem.2013070102.

Abstract:
The authors propose an online image search engine based on local image keypoint matching with GPU support. State-of-the-art models are based on bag-of-visual-words, which applies the textual-search analogy to visual search. In this work, thanks to the vector computation power of the GPU, the authors utilize the real-valued keypoint descriptors directly and realize real-time search at the keypoint level. By keeping the identity of each keypoint, the closest keypoints are accurately retrieved. Because image search has different characteristics than textual search, the authors implement one-to-one keypoint matching, which is more natural for images, and utilize GPUs for every basic step. To demonstrate the practicality of GPU-extended image search, the authors also present a simple bag-of-visual-words search technique built on full-text search engines and explain how to implement one-to-one keypoint matching with a text search engine. The proposed methods lead to drastic performance and precision improvements, demonstrated on datasets of different sizes.
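The "one-to-one keypoint matching" mentioned here can be read as mutual nearest-neighbour matching. The sketch below shows that idea on the CPU with NumPy; it is not the authors' GPU implementation, and des_a / des_b stand for arbitrary float descriptor matrices.

```python
import numpy as np

def mutual_matches(des_a: np.ndarray, des_b: np.ndarray):
    """Return index pairs (i, j) where i and j are each other's nearest neighbour."""
    d = np.linalg.norm(des_a[:, None, :] - des_b[None, :, :], axis=2)  # pairwise distances
    nn_ab = d.argmin(axis=1)   # best match in B for every descriptor in A
    nn_ba = d.argmin(axis=0)   # best match in A for every descriptor in B
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

rng = np.random.default_rng(0)
des_a = rng.random((100, 128)).astype(np.float32)
des_b = rng.random((120, 128)).astype(np.float32)
print(len(mutual_matches(des_a, des_b)), "mutual matches")
```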
8

Feng, Lu, Quan Fu, Xiang Long, and Zhuang Zhi Wu. "Keypoint Recognition for 3D Head Model Using Geometry Image." Applied Mechanics and Materials 654 (October 2014): 287–90. http://dx.doi.org/10.4028/www.scientific.net/amm.654.287.

Abstract:
This paper presents a novel and efficient keypoint recognition framework for 3D head models based on the geometry image. Building on conformal mapping and a diffusion scale space, our method utilizes the SIFT method to extract and describe keypoints of a 3D head model. We use this framework to identify keypoints of the human head. The experiments show the robustness and efficiency of our method.
9

Paek, Kangho, Min Yao, Zhongwei Liu, and Hun Kim. "Log-Spiral Keypoint: A Robust Approach toward Image Patch Matching." Computational Intelligence and Neuroscience 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/457495.

Abstract:
Matching of keypoints across image patches forms the basis of computer vision applications such as object detection, recognition, and tracking in real-world images. Most keypoint methods are designed to match high-resolution images and typically rely on an image pyramid for multiscale keypoint detection. In this paper, we propose a novel keypoint method to improve the matching performance of low-resolution, small image patches. The location, scale, and orientation of keypoints are estimated directly from the original image patch using a Log-Spiral sampling pattern for keypoint detection, without building an image pyramid. A second Log-Spiral sampling pattern for keypoint description and two bit-generation functions are designed to produce a binary descriptor. Extensive experiments show that the proposed method is more effective and robust than existing binary-based methods for image patch matching.
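A log-spiral sampling pattern of the kind the abstract describes can be generated with a few lines of NumPy; the number of samples, growth rate and radius below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def log_spiral_pattern(n_samples=32, turns=2.0, growth=0.15, max_radius=16.0):
    """Return (x, y) offsets of sample points lying on a logarithmic spiral."""
    theta = np.linspace(0.0, 2.0 * np.pi * turns, n_samples)
    r = np.exp(growth * theta)
    r = r / r.max() * max_radius          # normalise so the spiral fits within max_radius
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

offsets = log_spiral_pattern()
print(offsets.round(1))
```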
10

Morgacheva, A. I., V. A. Kulikov, and V. P. Kosykh. "DYNAMIC KEYPOINT-BASED ALGORITHM OF OBJECT TRACKING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W4 (May 10, 2017): 79–82. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w4-79-2017.

Abstract:
The model of the observed object plays the key role in object tracking. A model built as a set of image parts, in particular keypoints, is more resistant to changes in shape, texture, and viewing angle, because local changes affect only specific parts of the object. On the other hand, any model requires updating as the appearance of the object relative to the camera changes. In this paper, we propose a dynamic (time-varying) model based on a set of keypoints. To update its data, the model uses a keypoint-rating algorithm and a decision rule based on the Function of Rival Similarity (FRiS). As a result, on a test set of image sequences the improvement over the original algorithm was 9.3% on average, and on some sequences it reached 16%.
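The Function of Rival Similarity (FRiS) referenced in this abstract is commonly written as a normalized difference of distances to a rival reference and an "own" reference, ranging from -1 to +1. A minimal sketch with illustrative points follows.

```python
import numpy as np

def fris(x, own, rival):
    """FRiS value of x: +1 means x coincides with 'own', -1 means it coincides with 'rival'."""
    d_own = np.linalg.norm(x - own)
    d_rival = np.linalg.norm(x - rival)
    return (d_rival - d_own) / (d_rival + d_own + 1e-12)

x = np.array([1.0, 2.0])
print(fris(x, own=np.array([1.2, 2.1]), rival=np.array([5.0, 5.0])))   # close to +1
```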
11

ATİK, Muhammed Enes, Abdullah Harun İNCEKARA, Batuhan SARITÜRK, Ozan ÖZTÜRK, Zaide DURAN, and Dursun Zafer ŞEKER. "3D Object Recognition with Keypoint Based Algorithms." International Journal of Environment and Geoinformatics 6, no. 1 (April 12, 2019): 139–42. http://dx.doi.org/10.30897/ijegeo.551747.

12

Gong, Caili, Yong Zhang, Yongfeng Wei, Xinyu Du, Lide Su, and Zhi Weng. "Multicow pose estimation based on keypoint extraction." PLOS ONE 17, no. 6 (June 3, 2022): e0269259. http://dx.doi.org/10.1371/journal.pone.0269259.

Abstract:
Automatic estimation of the poses of dairy cows over a long period can provide relevant information regarding their status and well-being in precision farming. Due to appearance similarity, cow pose estimation is challenging. To monitor the health of dairy cows in actual farm environments, a multicow pose estimation algorithm was proposed in this study. First, a monitoring system was established at a dairy cow breeding site, and 175 surveillance videos of 10 different cows were used as raw data to construct object detection and pose estimation data sets. To achieve the detection of multiple cows, the You Only Look Once (YOLO)v4 model based on CSPDarkNet53 was built and fine-tuned to output the bounding box for further pose estimation. On the test set of 400 images including single and multiple cows throughout the whole day, the average precision (AP) reached 94.58%. Second, the keypoint heatmaps and part affinity field (PAF) were extracted to match the keypoints of the same cow based on the real-time multiperson 2D pose detection model. To verify the performance of the algorithm, 200 single-object images and 200 dual-object images with occlusions were tested under different light conditions. The test results showed that the AP of leg keypoints was the highest, reaching 91.6%, regardless of day or night and single cows or double cows. This was followed by the AP values of the back, neck and head, sequentially. The AP of single cow pose estimation was 85% during the day and 78.1% at night, compared to double cows with occlusion, for which the values were 74.3% and 71.6%, respectively. The keypoint detection rate decreased when the occlusion was severe. However, in actual cow breeding sites, cows are seldom strongly occluded. Finally, a pose classification network was built to estimate the three typical poses (standing, walking and lying) of cows based on the extracted cow skeleton in the bounding box, achieving precision of 91.67%, 92.97% and 99.23%, respectively. The results showed that the algorithm proposed in this study exhibited a relatively high detection rate. Therefore, the proposed method can provide a theoretical reference for animal pose estimation in large-scale precision livestock farming.
13

Pieropan, Alessandro, Niklas Bergström, Masatoshi Ishikawa, and Hedvig Kjellström. "Robust and adaptive keypoint-based object tracking." Advanced Robotics 30, no. 4 (February 16, 2016): 258–69. http://dx.doi.org/10.1080/01691864.2015.1129360.

14

Jia Yongjie, Xiong Fengguang, Han Xie, and Kuang Liqun. "Multi-Scale Keypoint Detection Based on SHOT." Laser &amp; Optoelectronics Progress 55, no. 7 (2018): 071013. http://dx.doi.org/10.3788/lop55.071013.

15

Agier, R., S. Valette, R. Kéchichian, L. Fanton, and R. Prost. "Hubless keypoint-based 3D deformable groupwise registration." Medical Image Analysis 59 (January 2020): 101564. http://dx.doi.org/10.1016/j.media.2019.101564.

16

Jiao, Runzhi, Qingsong Wang, Tao Lai, and Haifeng Huang. "Multi-Hypothesis Topological Isomorphism Matching Method for Synthetic Aperture Radar Images with Large Geometric Distortion." Remote Sensing 13, no. 22 (November 17, 2021): 4637. http://dx.doi.org/10.3390/rs13224637.

Abstract:
The dramatic undulations of a mountainous terrain will introduce large geometric distortions in each Synthetic Aperture Radar (SAR) image with different look angles, resulting in poor registration performance. To this end, this paper proposes a multi-hypothesis topological isomorphism matching method for SAR images with large geometric distortions. The method includes Ridge-Line Keypoint Detection (RLKD) and Multi-Hypothesis Topological Isomorphism Matching (MHTIM). Firstly, based on the analysis of the ridge structure, a ridge keypoint detection module and a keypoint similarity description method are designed, which aim to quickly produce a small number of stable matching keypoint pairs under large look angle differences and large terrain undulations. The keypoint pairs are further fed into the MHTIM module. Subsequently, the MHTIM method is proposed, which uses the stability and isomorphism of the topological structure of the keypoint set under different perspectives to generate a variety of matching hypotheses, and iteratively achieves the keypoint matching. This method uses both local and global geometric relationships between two keypoints, hence achieving better performance compared with traditional methods. We tested our approach on both simulated and real mountain SAR images with different look angles and different elevation ranges. The experimental results demonstrate the effectiveness and stable matching performance of our approach.
17

Xu, Shaoyan, Tao Wang, Congyan Lang, Songhe Feng, and Yi Jin. "Graph-based visual odometry for VSLAM." Industrial Robot: An International Journal 45, no. 5 (August 20, 2018): 679–87. http://dx.doi.org/10.1108/ir-04-2018-0061.

Abstract:
Purpose: Typical feature-matching algorithms use only unary constraints on appearances to build correspondences where little structure information is used. Ignoring structure information makes them sensitive to various environmental perturbations. The purpose of this paper is to propose a novel graph-based method that aims to improve matching accuracy by fully exploiting the structure information. Design/methodology/approach: Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, where structure information is integrated in edges between vertices. Subsequently, the matching process of finding keypoint correspondence is formulated in a graph matching manner. Findings: The authors compare it with several state-of-the-art visual simultaneous localization and mapping algorithms on three datasets. Experimental results reveal that the ORB-G algorithm provides more accurate and robust trajectories in general. Originality/value: Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, where structure information is integrated in edges between vertices. Subsequently, the matching process of finding keypoint correspondence is formulated in a graph matching manner.
18

Huang, Chen-Wei, and Jian-Jiun Ding. "Adaptive Superpixel-Based Disparity Estimation Algorithm Using Plane Information and Disparity Refining Mechanism in Stereo Matching." Symmetry 14, no. 5 (May 15, 2022): 1005. http://dx.doi.org/10.3390/sym14051005.

Abstract:
The motivation of this paper is to address the limitations of the conventional keypoint-based disparity estimation methods. Conventionally, disparity estimation is usually based on the local information of keypoints. However, keypoints may distribute sparsely in the smooth region, and keypoints with the same descriptors may appear in a symmetric pattern. Therefore, conventional keypoint-based disparity estimation methods may have limited performance in smooth and symmetric regions. The proposed algorithm is superpixel-based. Instead of performing keypoint matching, both keypoint and semiglobal information are applied to determine the disparity in the proposed algorithm. Since the local information of keypoints and the semi-global information of the superpixel are both applied, the accuracy of disparity estimation can be improved, especially for smooth and symmetric regions. Moreover, to address the non-uniform distribution problem of keypoints, a disparity refining mechanism based on the similarity and the distance of neighboring superpixels is applied to correct the disparity of the superpixel with no or few keypoints. The experiments show that the disparity map generated by the proposed algorithm has a lower matching error rate than that generated by other methods.
19

Daneshvar, M. B. "SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 27–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-27-2017.

Abstract:
This paper presents an enhanced method for extracting invariant features from images based on the Scale Invariant Feature Transform (SIFT). Although SIFT features are invariant to image scale, rotation, additive noise, and changes in illumination, we think the algorithm suffers from excess keypoints. Besides, by adding a hue feature, extracted from the combination of hue and illumination values in the HSI colour-space version of the target image, the proposed algorithm can speed up the matching phase. We therefore propose the Scale Invariant Feature Transform plus Hue (SIFTH), which removes excess keypoints based on their Euclidean distances and adds hue to the feature vector to speed up the matching process, which is the aim of feature extraction. In this paper we use the difference of hue features and the Mean Square Error (MSE) of orientation histograms to find the keypoint most similar to the keypoint under processing. The keypoint matching method can identify the correct keypoint among clutter and occlusion robustly while achieving real-time performance, and it yields a similarity factor for two keypoints. Moreover, removing excess keypoints with the SIFTH algorithm helps the matching algorithm achieve this goal.
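Two of the ideas in this abstract, pruning keypoints that lie too close to each other and attaching a hue value to each surviving descriptor, can be sketched generically with OpenCV. The 5-pixel spacing threshold and file name are assumptions; this is not the SIFTH implementation itself.

```python
import cv2
import numpy as np

img = cv2.imread("target.png")                      # placeholder input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)

kept, kept_pts = [], []
# Visit keypoints from strongest to weakest response.
for kp, des in sorted(zip(keypoints, descriptors), key=lambda t: -t[0].response):
    pt = np.array(kp.pt)
    # Drop the keypoint if a stronger one was already kept within 5 pixels.
    if all(np.linalg.norm(pt - q) >= 5.0 for q in kept_pts):
        hue = float(hsv[int(kp.pt[1]), int(kp.pt[0]), 0])   # hue channel at the keypoint
        kept.append(np.append(des, hue))                     # descriptor extended with hue
        kept_pts.append(pt)

print(f"kept {len(kept)} of {len(keypoints)} keypoints")
```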
20

Yang, Lian, and Zhangping Lu. "A New Scheme for Keypoint Detection and Description." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/310704.

Abstract:
Keypoint detection and description are two critical aspects of local keypoint matching, which is vital in some computer vision and pattern recognition applications. This paper presents a new scale-invariant and rotation-invariant detector and descriptor, coined DDoG and FBRK, respectively. First, Hilbert curve scanning is applied to convert a two-dimensional (2D) digital image into a one-dimensional (1D) gray-level sequence. Then, based on the 1D image sequence, an approximation of the DoG detector using a second-order difference-of-Gaussian function is proposed. Finally, a new fast binary ratio-based keypoint descriptor is proposed. It is built from the ratio relationships between the keypoint pixel value and the values of other pixels around the keypoint in scale space. Experimental results show that the proposed methods can be computed much faster than, and approximate or even outperform, the existing methods.
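To make the 1D difference-of-Gaussian idea concrete, the sketch below unrolls an image into a sequence and takes local extrema of a DoG response along it. It uses a plain raster scan instead of the paper's Hilbert-curve scan, and arbitrary sigmas, so it only approximates the concept.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
image = rng.random((64, 64))
seq = image.reshape(-1)                      # 1D gray-level sequence (raster order here)

# Difference of two 1D Gaussian smoothings of the sequence.
dog = gaussian_filter1d(seq, sigma=2.0) - gaussian_filter1d(seq, sigma=4.0)

# Local maxima of the DoG response are taken as candidate keypoint positions.
is_peak = (dog[1:-1] > dog[:-2]) & (dog[1:-1] > dog[2:])
candidates = np.nonzero(is_peak)[0] + 1
print(f"{candidates.size} candidate keypoints along the sequence")
```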
21

Liu, Ruiqing, Juncai Zhu, and Xiaoping Rao. "Murine Motion Behavior Recognition Based on DeepLabCut and Convolutional Long Short-Term Memory Network." Symmetry 14, no. 7 (June 29, 2022): 1340. http://dx.doi.org/10.3390/sym14071340.

Abstract:
Murine behavior recognition is widely used in biology, neuroscience, pharmacology, and other aspects of research, and provides a basis for judging the psychological and physiological state of mice. To solve the problem whereby traditional behavior recognition methods only model behavioral changes in mice over time or space, we propose a symmetrical algorithm that can capture spatiotemporal information based on behavioral changes. The algorithm first uses the improved DeepLabCut keypoint detection algorithm to locate the nose, left ear, right ear, and tail root of the mouse, and then uses the ConvLSTM network to extract spatiotemporal information from the keypoint feature map sequence to classify five behaviors of mice: walking straight, resting, grooming, standing upright, and turning. We developed a murine keypoint detection and behavior recognition dataset, and experiments showed that the method achieved a percentage of correct keypoints (PCK) of 87±1% at three scales and against four backgrounds, while the classification accuracy for the five kinds of behaviors reached 93±1%. The proposed method is thus accurate for keypoint detection and behavior recognition, and is a useful tool for murine motion behavior recognition.
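The percentage of correct keypoints (PCK) metric quoted in this abstract is straightforward to compute: a predicted keypoint counts as correct when it falls within a threshold distance of the ground truth. The coordinates and threshold below are synthetic examples.

```python
import numpy as np

def pck(pred: np.ndarray, gt: np.ndarray, threshold: float) -> float:
    """pred, gt: (N, 2) arrays of keypoint coordinates; returns the fraction within threshold."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float((dists <= threshold).mean())

gt = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0], [70.0, 80.0]])
pred = gt + np.array([[1.0, -1.0], [0.5, 0.5], [8.0, 0.0], [0.0, 2.0]])
print(pck(pred, gt, threshold=5.0))   # 0.75: three of the four keypoints lie within 5 px
```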
22

Qian, Shenhan, Dongze Lian, Binqiang Zhao, Tong Liu, Bohui Zhu, Hai Li, and Shenghua Gao. "KGDet: Keypoint-Guided Fashion Detection." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2449–57. http://dx.doi.org/10.1609/aaai.v35i3.16346.

Abstract:
Locating and classifying clothes, usually referred to as clothing detection, is a fundamental task in fashion analysis. Motivated by the strong structural characteristics of clothes, we pursue a detection method enhanced by clothing keypoints, which is a compact and effective representation of structures. To incorporate the keypoint cues into clothing detection, we design a simple yet effective Keypoint-Guided clothing Detector, named KGDet. Such a detector can fully utilize information provided by keypoints with the following two aspects: i) integrating local features around keypoints to benefit both classification and regression; ii) generating accurate bounding boxes from keypoints. To effectively incorporate local features, two alternative modules are proposed. One is a multi-column keypoint-encoding-based feature aggregation module; the other is a keypoint-selection-based feature aggregation module. With either of the above modules as a bridge, a cascade strategy is introduced to refine detection performance progressively. Thanks to the keypoints, our KGDet obtains superior performance on the DeepFashion2 dataset and the FLD dataset with high efficiency.
23

Kim, Sejun, Sungjae Kang, Hyomin Choi, Seong-Soo Kim, and Kisung Seo. "Valid Keypoint Augmentation based Occluded Person Re-Identification." Transactions of The Korean Institute of Electrical Engineers 71, no. 7 (July 31, 2022): 1002–7. http://dx.doi.org/10.5370/kiee.2022.71.7.1002.

24

Đăng Khuyên, Phan, Nguyễn Phi Bằng, and Đặng Thành Trung. "Enhance robustness for watermarking based on keypoint features." Journal of Science, Educational Science 60, no. 7A (2015): 169–79. http://dx.doi.org/10.18173/2354-1075.2015-0064.

25

Gao, Hong-bo, Hong-yu Wang, and Xiao-kai Liu. "A Keypoint Matching Method Based on Hierarchical Learning." Journal of Electronics & Information Technology 35, no. 11 (February 21, 2014): 2751–57. http://dx.doi.org/10.3724/sp.j.1146.2013.00347.

26

EMAM, Mahmoud, Qi HAN, Liyang YU, and Hongli ZHANG. "A Keypoint-Based Region Duplication Forgery Detection Algorithm." IEICE Transactions on Information and Systems E99.D, no. 9 (2016): 2413–16. http://dx.doi.org/10.1587/transinf.2016edl8024.

27

Yang, Lian. "New Keypoint Detector and Descriptor Based on SIFT." Journal of Information and Computational Science 12, no. 14 (September 20, 2015): 5279–90. http://dx.doi.org/10.12733/jics20106500.

28

Lan Jianxia, Wang Zeyong, Li Jinlong, Yuan Meng, and Gao Xiaorong. "Keypoint Extraction Algorithm Based on Normal Shape Index." Laser &amp; Optoelectronics Progress 57, no. 16 (2020): 161016. http://dx.doi.org/10.3788/lop57.161016.

29

Bellavia, F., D. Tegolo, and C. Valenti. "Keypoint descriptor matching with context-based orientation estimation." Image and Vision Computing 32, no. 9 (September 2014): 559–67. http://dx.doi.org/10.1016/j.imavis.2014.05.002.

30

Prakash, Choudhary Shyam, Hari Om, Sushila Maheshkar, Vikas Maheshkar, and Tao Song. "Keypoint-based passive method for image manipulation detection." Cogent Engineering 5, no. 1 (January 1, 2018): 1523346. http://dx.doi.org/10.1080/23311916.2018.1523346.

31

Yang, Erbing, Fei Chen, Meiqing Wang, Hang Cheng, and Rong Liu. "Local Property of Depth Information in 3D Images and Its Application in Feature Matching." Mathematics 11, no. 5 (February 26, 2023): 1154. http://dx.doi.org/10.3390/math11051154.

Abstract:
In image registration or image matching, features extracted by traditional methods do not include depth information, which may lead to keypoint mismatches. In this paper, we prove that when the camera moves, the ratio of the depth difference between a keypoint and its neighbor pixel before and after the camera movement approximates a constant. That means the depth difference between a keypoint and its neighbor pixel, after normalization, is invariant to the camera movement. Based on this property, all the depth differences between a keypoint and its neighbor pixels constitute a local depth-based feature, which can be used as a supplement to the traditional feature. We combine the local depth-based feature with the SIFT feature descriptor to form a new feature descriptor, and the experimental results show the feasibility and effectiveness of the new descriptor.
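A local depth-based feature of the kind described, the depth differences between a keypoint and its neighbouring pixels normalised to unit length, might be sketched as follows; the window size and the synthetic depth map are assumptions for illustration only.

```python
import numpy as np

def local_depth_feature(depth: np.ndarray, x: int, y: int, radius: int = 3) -> np.ndarray:
    """Normalised depth differences between pixel (y, x) and its square neighbourhood."""
    patch = depth[y - radius:y + radius + 1, x - radius:x + radius + 1]
    diffs = (patch - depth[y, x]).ravel()
    norm = np.linalg.norm(diffs)
    return diffs / norm if norm > 0 else diffs

rng = np.random.default_rng(1)
depth = rng.random((100, 100)) * 5.0 + 1.0     # synthetic depth map (metres)
feat = local_depth_feature(depth, x=50, y=50)
print(feat.shape, round(float(np.linalg.norm(feat)), 3))
```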
32

Li, L., L. Han, H. Cao, and M. Liu. "A SELF-SUPERVISED KEYPOINT DETECTION NETWORK FOR MULTIMODAL REMOTE SENSING IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 609–15. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-609-2022.

Abstract:
Currently, multimodal remote sensing images have complex geometric and radiometric distortions, which are beyond the reach of classical hand-crafted feature-based matching. Although keypoint matching methods have been developed in recent decades, most manual and deep learning-based techniques cannot effectively extract highly repeatable keypoints. To address that, we design a Siamese network with self-supervised training to generate similar keypoint feature maps between multimodal images, and detect highly repeatable keypoints by computing local spatial- and channel-domain peaks of the feature maps. We exploit the confidence level of keypoints to enable the detection network to evaluate potential keypoints with end-to-end trainability. Unlike most trainable detectors, it does not require the generation of pseudo-ground-truth points. In the experiments, the proposed method is evaluated using various SAR and optical images covering different scenes. The results prove its superior keypoint detection performance compared with current state-of-the-art keypoint-based matching methods.
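Detecting keypoints as local peaks of a score map, which this abstract describes at a high level, can be illustrated with a maximum-filter comparison; the random score map, 5x5 window and 0.9 threshold below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(0)
score_map = rng.random((128, 128)).astype(np.float32)   # stand-in for a network's output

# A pixel is a candidate keypoint if it is the maximum of its 5x5 neighbourhood
# and its score exceeds a confidence threshold.
local_max = score_map == maximum_filter(score_map, size=5)
keypoints = np.argwhere(local_max & (score_map > 0.9))
print(f"{len(keypoints)} candidate keypoints")
```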
33

MAKAR, MINA, SAM S. TSAI, VIJAY CHANDRASEKHAR, DAVID CHEN, and BERND GIROD. "INTERFRAME CODING OF CANONICAL PATCHES FOR LOW BIT-RATE MOBILE AUGMENTED REALITY." International Journal of Semantic Computing 07, no. 01 (March 2013): 5–24. http://dx.doi.org/10.1142/s1793351x13400011.

Abstract:
Local features are widely used for content-based image retrieval and augmented reality applications. Typically, feature descriptors are calculated from the gradients of a canonical patch around a repeatable keypoint in the image. In this paper, we propose a temporally coherent keypoint detector and design efficient interframe predictive coding techniques for canonical patches and keypoint locations. In the proposed system, we strive to transmit each patch with as few bits as possible by simply modifying a previously transmitted patch. This enables server-based mobile augmented reality where a continuous stream of salient information, sufficient for image-based retrieval and localization, can be sent over a wireless link at a low bit-rate. Experimental results show that our technique achieves a similar image matching performance at 1/15 of the bit-rate when compared to detecting keypoints independently frame-by-frame and allows performing streaming mobile augmented reality at low bit-rates of about 20–50 kbps, practical for today's wireless links.
34

LI, JING, TAO YANG, QUAN PAN, YONG-MEI CHENG, and JUN HOU. "A NOVEL ALGORITHM FOR SPEEDING UP KEYPOINT DETECTION AND MATCHING." International Journal of Image and Graphics 08, no. 04 (October 2008): 643–61. http://dx.doi.org/10.1142/s0219467808003283.

Abstract:
This work proposes a novel keypoint detector called QSIF (Quality and Spatial based Invariant Feature Detector). The primary contributions include: (1) a multilevel box filter is used to build the image scales efficiently and precisely; (2) by examining pixels in quality and spatial space simultaneously, QSIF can directly locate keypoints without scale-space extrema detection over the entire image spatial space; (3) QSIF can precisely control the number of output keypoints while maintaining almost the same repeatability of keypoint detection, a characteristic that is essential in many real-time applications. Extensive experimental results with images under scale, rotation, viewpoint and illumination changes demonstrate that the proposed QSIF has stable and satisfactory repeatability and can greatly speed up keypoint detection and matching.
35

Mundt, Marion, Zachery Born, Molly Goldacre, and Jacqueline Alderson. "Estimating Ground Reaction Forces from Two-Dimensional Pose Data: A Biomechanics-Based Comparison of AlphaPose, BlazePose, and OpenPose." Sensors 23, no. 1 (December 21, 2022): 78. http://dx.doi.org/10.3390/s23010078.

Abstract:
The adoption of computer vision pose estimation approaches, used to identify keypoint locations which are intended to reflect the necessary anatomical landmarks relied upon by biomechanists for musculoskeletal modelling, has gained increasing traction in recent years. This uptake has been further accelerated by keypoint use as inputs into machine learning models used to estimate biomechanical parameters such as ground reaction forces (GRFs) in the absence of instrumentation required for direct measurement. This study first aimed to investigate the keypoint detection rate of three open-source pose estimation models (AlphaPose, BlazePose, and OpenPose) across varying movements, camera views, and trial lengths. Second, this study aimed to assess the suitability and interchangeability of keypoints detected by each pose estimation model when used as inputs into machine learning models for the estimation of GRFs. The keypoint detection rate of BlazePose was distinctly lower than that of AlphaPose and OpenPose. All pose estimation models achieved a high keypoint detection rate at the centre of an image frame and a lower detection rate in the true sagittal plane camera field of view, compared with slightly anteriorly or posteriorly located quasi-sagittal plane camera views. The three-dimensional ground reaction force, instantaneous loading rate, and peak force for running could be estimated using the keypoints of all three pose estimation models. However, only AlphaPose and OpenPose keypoints could be used interchangeably with a machine learning model trained to estimate GRFs based on AlphaPose keypoints resulting in a high estimation accuracy when OpenPose keypoints were used as inputs and vice versa. The findings of this study highlight the need for further evaluation of computer vision-based pose estimation models for application in biomechanical human modelling, and the limitations of machine learning-based GRF estimation models that rely on 2D keypoints. This is of particular relevance given that machine learning models informing athlete monitoring guidelines are being developed for application related to athlete well-being.
36

Chen, Di, Andreas Doering, Shanshan Zhang, Jian Yang, Juergen Gall, and Bernt Schiele. "Keypoint Message Passing for Video-Based Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 239–47. http://dx.doi.org/10.1609/aaai.v36i1.19899.

Abstract:
Video-based person re-identification (re-ID) is an important technique in visual surveillance systems which aims to match video snippets of people captured by different cameras. Existing methods are mostly based on convolutional neural networks (CNNs), whose building blocks either process local neighbor pixels at a time, or, when 3D convolutions are used to model temporal information, suffer from the misalignment problem caused by person movement. In this paper, we propose to overcome the limitations of normal convolutions with a human-oriented graph method. Specifically, features located at person joint keypoints are extracted and connected as a spatial-temporal graph. These keypoint features are then updated by message passing from their connected nodes with a graph convolutional network (GCN). During training, the GCN can be attached to any CNN-based person re-ID model to assist representation learning on feature maps, whilst it can be dropped after training for better inference speed. Our method brings significant improvements over the CNN-based baseline model on the MARS dataset with generated person keypoints and a newly annotated dataset: PoseTrackReID. It also defines a new state-of-the-art method in terms of top-1 accuracy and mean average precision in comparison to prior works.
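One message-passing (graph convolution) step over keypoint features, the core operation this abstract relies on, can be sketched in NumPy as ReLU(Â X W) with a symmetrically normalised adjacency matrix. The chain graph, feature size and random weights below are synthetic stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
num_kp, feat_dim = 17, 64                     # e.g. 17 body-joint keypoints
features = rng.random((num_kp, feat_dim))     # per-keypoint features (X)

# Adjacency with self-loops; a simple chain of keypoints for illustration.
adj = np.eye(num_kp)
for i in range(num_kp - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

# Symmetric normalisation D^-1/2 A D^-1/2, as in standard GCN layers.
d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
adj_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

weights = rng.standard_normal((feat_dim, feat_dim)) * 0.1
updated = np.maximum(adj_norm @ features @ weights, 0.0)   # ReLU(A_hat X W)
print(updated.shape)
```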
37

Schmeckpeper, Karl, Philip Osteen, Yufu Wang, Georgios Pavlakos, Kenneth Chaney, Wyatt Jordan, Xiaowei Zhou, Konstantinos Derpanis, and Kostas Daniilidis. "Semantic keypoint-based pose estimation from single RGB frames." Field Robotics 2, no. 1 (March 10, 2022): 147–71. http://dx.doi.org/10.55417/fr.2022006.

Abstract:
This paper presents an approach to estimating the continuous 6-DoF pose of an object from a single RGB image. The approach combines semantic keypoints predicted by a convolutional network (convnet) with a deformable shape model. Unlike prior investigators, we are agnostic to whether the object is textured or textureless, as the convnet learns the optimal representation from the available training-image data. Furthermore, the approach can be applied to instance- and class-based pose recovery. Additionally, we accompany our main pipeline with a technique for semi-automatic data generation from unlabeled videos. This procedure allows us to train the learnable components of our method with minimal manual intervention in the labeling process. Empirically, we show that our approach can accurately recover the 6-DoF object pose for both instance- and class-based scenarios, even against a cluttered background. We apply our approach both to several existing large-scale datasets - including PASCAL3D+, LineMOD-Occluded, YCB-Video, and TUD-Light - and, using our labeling pipeline, to a new dataset with novel object classes that we introduce here. Extensive empirical evaluations show that our approach is able to provide pose estimation results comparable to the state of the art.
38

Barnea, Shahar, and Sagi Filin. "Keypoint based autonomous registration of terrestrial laser point-clouds." ISPRS Journal of Photogrammetry and Remote Sensing 63, no. 1 (January 2008): 19–35. http://dx.doi.org/10.1016/j.isprsjprs.2007.05.005.

39

Ko, Sang-Ki, Chang Jo Kim, Hyedong Jung, and Choongsang Cho. "Neural Sign Language Translation Based on Human Keypoint Estimation." Applied Sciences 9, no. 13 (July 1, 2019): 2683. http://dx.doi.org/10.3390/app9132683.

Abstract:
We propose a sign language translation system based on human keypoint estimation. It is well-known that many problems in the field of computer vision require a massive dataset to train deep neural network models. The situation is even worse when it comes to the sign language translation problem as it is far more difficult to collect high-quality training data. In this paper, we introduce the KETI (Korea Electronics Technology Institute) sign language dataset, which consists of 14,672 videos of high resolution and quality. Considering the fact that each country has a different and unique sign language, the KETI sign language dataset can be the starting point for further research on the Korean sign language translation. Using the KETI sign language dataset, we develop a neural network model for translating sign videos into natural language sentences by utilizing the human keypoints extracted from the face, hands, and body parts. The obtained human keypoint vector is normalized by the mean and standard deviation of the keypoints and used as input to our translation model based on the sequence-to-sequence architecture. As a result, we show that our approach is robust even when the size of the training data is not sufficient. Our translation model achieved 93.28% (55.28%, respectively) translation accuracy on the validation set (test set, respectively) for 105 sentences that can be used in emergency situations. We compared several types of our neural sign translation models based on different attention mechanisms in terms of classical metrics for measuring the translation performance.
40

Gwo, Chih-Ying, Chia-Hung Wei, Yue Li, and Nan-Hsing Chiu. "Reconstruction of Banknote Fragments Based on Keypoint Matching Method." Journal of Forensic Sciences 60, no. 4 (April 21, 2015): 906–13. http://dx.doi.org/10.1111/1556-4029.12777.

41

DeCost, Brian L., and Elizabeth A. Holm. "Characterizing powder materials using keypoint-based computer vision methods." Computational Materials Science 126 (January 2017): 438–45. http://dx.doi.org/10.1016/j.commatsci.2016.08.038.

42

SUZUKI, Takahiro, and Takeshi IKENAGA. "Full-HD 60fps FPGA Implementation of Spatio-Temporal Keypoint Extraction Based on Gradient Histogram and Parallelization of Keypoint Connectivity." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E99.A, no. 11 (2016): 1937–46. http://dx.doi.org/10.1587/transfun.e99.a.1937.

43

Gonçalves, Tiago, Wilson Silva, Maria J. Cardoso, and Jaime S. Cardoso. "Deep Image Segmentation for Breast Keypoint Detection." Proceedings 54, no. 1 (August 21, 2020): 35. http://dx.doi.org/10.3390/proceedings2020054035.

Abstract:
The main aim of breast cancer conservative treatment is the optimisation of the aesthetic outcome and, implicitly, women’s quality of life, without jeopardising local cancer control and overall survival. Moreover, there has been an effort to try to define an optimal tool capable of performing the aesthetic evaluation of breast cancer conservative treatment outcomes. Recently, a deep learning algorithm that integrates the learning of keypoints’ probability maps in the loss function as a regularisation term for the robust learning of the keypoint localisation has been proposed. However, it achieves the best results when used in cooperation with a shortest-path algorithm that models images as graphs. In this work, we analysed a novel algorithm based on the interaction of deep image segmentation and deep keypoint detection models capable of improving both state-of-the-art performance and execution-time on the breast keypoint detection task.
44

Mousavi, Vahid, Masood Varshosaz, and Fabio Remondino. "Using Information Content to Select Keypoints for UAV Image Matching." Remote Sensing 13, no. 7 (March 29, 2021): 1302. http://dx.doi.org/10.3390/rs13071302.

Abstract:
Image matching is one of the most important tasks in Unmanned Aerial Vehicle (UAV) photogrammetry applications. The number and distribution of extracted keypoints play an essential role in the reliability and accuracy of image matching and orientation results. Conventional detectors generally produce too many redundant keypoints. In this paper, we study the effect of applying various information content criteria to keypoint selection tasks. For this reason, the quality measures of entropy, spatial saliency and texture coefficient are used to select keypoints extracted using the SIFT, SURF, MSER and BRISK operators. Experiments are conducted using several synthetic and real UAV image pairs. Results show that the keypoint selection methods perform differently based on the applied detector and scene type, but in most cases, the precision of the matching results is improved by an average of 15%. In general, it can be said that applying proper keypoint selection techniques can improve the accuracy and efficiency of UAV image matching and orientation results. In addition to this evaluation, a new hybrid keypoint selection method is proposed that combines all of the information content criteria discussed in this paper. This new screening method was also compared with SIFT alone, showing a 22% to 40% improvement for the bundle adjustment of UAV images.
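Entropy is one of the information content criteria the abstract lists. A hedged sketch of entropy-based keypoint screening with OpenCV and NumPy follows; the ORB detector, patch size and kept fraction are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def patch_entropy(gray: np.ndarray, x: float, y: float, half: int = 8) -> float:
    """Shannon entropy of the gray-level histogram in a square patch around (x, y)."""
    x, y = int(round(x)), int(round(y))
    if x < half or y < half or x + half >= gray.shape[1] or y + half >= gray.shape[0]:
        return 0.0                                   # skip keypoints too close to the border
    patch = gray[y - half:y + half, x - half:x + half]
    hist, _ = np.histogram(patch, bins=32, range=(0, 256))
    p = hist[hist > 0] / patch.size
    return float(-(p * np.log2(p)).sum())

gray = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder UAV frame
keypoints = cv2.ORB_create(nfeatures=2000).detect(gray, None)

scored = sorted(keypoints, key=lambda kp: patch_entropy(gray, *kp.pt), reverse=True)
selected = scored[: len(scored) // 4]          # keep the most informative quarter
print(f"selected {len(selected)} of {len(keypoints)} keypoints")
```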
45

Wang, Xianghan, Jie Jiang, Yanming Guo, Lai Kang, Yingmei Wei, and Dan Li. "CFAM: Estimating 3D Hand Poses from a Single RGB Image with Attention." Applied Sciences 10, no. 2 (January 15, 2020): 618. http://dx.doi.org/10.3390/app10020618.

Abstract:
Precise 3D hand pose estimation can be used to improve the performance of human–computer interaction (HCI). Specifically, computer-vision-based hand pose estimation can make this process more natural. Most traditional computer-vision-based hand pose estimation methods use depth images as the input, which requires complicated and expensive acquisition equipment. Estimation through a single RGB image is more convenient and less expensive. Previous methods based on RGB images utilize only 2D keypoint score maps to recover 3D hand poses but ignore the hand texture features and the underlying spatial information in the RGB image, which leads to a relatively low accuracy. To address this issue, we propose a channel fusion attention mechanism that combines 2D keypoint features and RGB image features at the channel level. In particular, the proposed method replans weights by using cascading RGB images and 2D keypoint features, which enables rational planning and the utilization of various features. Moreover, our method improves the fusion performance of different types of feature maps. Multiple contrast experiments on public datasets demonstrate that the accuracy of our proposed method is comparable to the state-of-the-art accuracy.
46

Kim, Namuk, Sung-Chang Lim, Hyunsuk Ko, and Byeungwoo Jeon. "Keypoint-based Fast CU Depth Decision for HEVC Intra Coding." Journal of the Institute of Electronics and Information Engineers 53, no. 2 (February 25, 2016): 89–96. http://dx.doi.org/10.5573/ieie.2016.53.2.089.

47

Li, Tong, Fei Wang, Changlei Ru, Yong Jiang, and Jinghong Li. "Keypoint-Based Robotic Grasp Detection Scheme in Multi-Object Scenes." Sensors 21, no. 6 (March 18, 2021): 2132. http://dx.doi.org/10.3390/s21062132.

Abstract:
Robot grasping is an important direction in intelligent robots. However, how to help robots grasp specific objects in multi-object scenes is still a challenging problem. In recent years, due to the powerful feature extraction capabilities of convolutional neural networks (CNN), various algorithms based on convolutional neural networks have been proposed to solve the problem of grasp detection. Different from anchor-based grasp detection algorithms, in this paper, we propose a keypoint-based scheme to solve this problem. We model an object or a grasp as a single point—the center point of its bounding box. The detector uses keypoint estimation to find the center point and regress to all other object attributes such as size, direction, etc. Experimental results demonstrate that the accuracy of this method is 74.3% in the multi-object grasp dataset VMRD, and the performance on the single-object scene Cornell dataset is competitive with the current state-of-the-art grasp detection algorithm. Robot experiments demonstrate that this method can help robots grasp the target in single-object and multi-object scenes with overall success rates of 94% and 87%, respectively.
48

Kong, Zelong, Nian Zhang, Xinping Guan, and Xinyi Le. "Detecting slender objects with uncertainty based on keypoint-displacement representation." Neural Networks 139 (July 2021): 246–54. http://dx.doi.org/10.1016/j.neunet.2021.03.024.

49

Li, Jun, Xiang Li, Yifei Wei, Mei Song, and Xiaojun Wang. "Multi-Level Feature Aggregation-Based Joint Keypoint Detection and Description." Computers, Materials & Continua 73, no. 2 (2022): 2529–40. http://dx.doi.org/10.32604/cmc.2022.029542.

50

Ye, Chao, Huihui Pan, and Huijun Gao. "Keypoint-Based LiDAR-Camera Online Calibration With Robust Geometric Network." IEEE Transactions on Instrumentation and Measurement 71 (2022): 1–11. http://dx.doi.org/10.1109/tim.2021.3129882.
