Journal articles on the topic "Scale Invariant Feature Descriptor"

To see other types of publications on this topic, follow the link: Scale Invariant Feature Descriptor.

Listed below are the top 50 journal articles for your research on the topic "Scale Invariant Feature Descriptor".

You can also download the full text of each scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Browse journal articles from a wide variety of disciplines and organize your bibliography correctly.

1

Sahan, Ali Sahan, Nisreen Jabr, Ahmed Bahaaulddin, and Ali Al-Itb. "Human identification using finger knuckle features." International Journal of Advances in Soft Computing and its Applications 14, no. 1 (March 28, 2022): 88–101. http://dx.doi.org/10.15849/ijasca.220328.07.

Full text
Abstract:
Many studies report that the finger knuckle comprises unique features; it can therefore be utilized in a biometric system to distinguish between people. In this paper, a combined global and local features technique is proposed based on two descriptors: Chebyshev Fourier moments (CHFMs) and the Scale Invariant Feature Transform (SIFT). The CHFMs descriptor is used to obtain global features, while the SIFT descriptor is used to extract local features. Each of these descriptors has its own advantages, so combining them produces distinctive features. Many experiments were carried out using the IIT-Delhi knuckle database to assess the accuracy of the proposed approach. Analysis of these extensive experiments shows that the suggested technique achieves a 98% accuracy rate. Furthermore, robustness against noise was evaluated; the results lead to the conclusion that the proposed technique is robust to noise variation. Keywords: finger knuckle, biometric system, Chebyshev Fourier moments, scale invariant feature transform, IIT-Delhi knuckle database.
2

Xu, Xinggui, Ping Yang, Bing Ran, Hao Xian, and Yong Liu. "Long-distance deformation object recognition by integrating contour structure and scale-invariant heat kernel signature." Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 3241–57. http://dx.doi.org/10.3233/jifs-191649.

Full text
Abstract:
Object recognition in long-distance scenes poses the tough challenge of constructing contour-shape features that are invariant to deformation. In this work, an effective contour shape descriptor integrating critical-point structure and the Scale-invariant Heat Kernel Signature (SI-HKS) is proposed for long-distance object recognition. We first propose a general feature fusion model. We then capture the object's contour structure with the Critical-points Inner-distance Shape Context (CP-IDSC), while bringing in SI-HKS to capture the local deformation-invariant properties of the 2D shape. The fused descriptor is compacted by mapping it into a low-dimensional subspace using bags-of-features, allowing for efficient Bayesian classification. Extensive experiments on synthetic turbulence-degraded shapes and real-life infrared images show that the proposed method outperforms the compared approaches in terms of recognition precision and robustness.
3

Diaz-Escobar, Julia, Vitaly Kober, and Jose A. Gonzalez-Fraga. "LUIFT: LUminance Invariant Feature Transform." Mathematical Problems in Engineering 2018 (October 28, 2018): 1–17. http://dx.doi.org/10.1155/2018/3758102.

Full text
Abstract:
An illumination-invariant method for computing local feature points and descriptors, referred to as the LUminance Invariant Feature Transform (LUIFT), is proposed. The method extracts the most significant local features in images degraded by nonuniform illumination, geometric distortions, and heavy scene noise. The proposed method utilizes image phase information rather than intensity variations, as most state-of-the-art descriptors do; it is thus robust to nonuniform illumination and noise degradation. In this work, we first use the monogenic scale-space framework to compute the local phase, orientation, energy, and phase congruency of the image at different scales. Then, a modified Harris corner detector is applied to compute the feature points of the image using the monogenic signal components. The final descriptor is created from histograms of oriented gradients of phase congruency. Computer simulation results show that the proposed method yields superior feature detection and matching performance under illumination change, noise degradation, and slight geometric distortions compared with state-of-the-art descriptors.
4

El Chakik, Abdallah, Abdul Rahman El Sayed, Hassan Alabboud, and Amer Bakkach. "An invariant descriptor map for 3D objects matching." International Journal of Engineering & Technology 9, no. 1 (January 23, 2020): 59. http://dx.doi.org/10.14419/ijet.v9i1.29918.

Full text
Abstract:
Meshes and point clouds are traditionally used to represent and match 3D shapes. The matching problem can be formulated as finding the best one-to-one correspondence between featured regions of two shapes. This paper presents an efficient and robust 3D matching method using vertex descriptor detection to define feature regions and an optimization approach for region matching. To do so, we compute an invariant shape descriptor map based on 3D surface patches calculated using Zernike coefficients. We then propose a multi-scale descriptor map to improve the quality of the measured descriptor map and to deal with noise. In addition, we introduce a linear algorithm for feature region segmentation according to the descriptor map. The matching problem is then modelled as a sub-graph isomorphism problem, a combinatorial optimization problem that matches feature regions while preserving the geometry. Finally, we show the robustness and stability of our method through many experimental results with respect to scaling, noise, rotation, and translation.
5

Ajayi, O. G. "PERFORMANCE ANALYSIS OF SELECTED FEATURE DESCRIPTORS USED FOR AUTOMATIC IMAGE REGISTRATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 21, 2020): 559–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-559-2020.

Full text
Abstract:
Automatic detection and extraction of corresponding features is crucial in the development of an automatic image registration algorithm. Different feature descriptors have been developed and implemented in image registration and other disciplines. These descriptors affect the speed of feature extraction and the number of extracted conjugate features, which in turn affect the processing speed and overall accuracy of the registration scheme. This article reviews the performance of the most widely implemented feature descriptors in an automatic image registration scheme. Ten (10) descriptors were selected and analysed under seven (7) conditions: invariance to rotation, scale and zoom, robustness, repeatability, localization and efficiency, using UAV-acquired images. The analysis shows that though four (4) descriptors performed better than the other six (6), no single feature descriptor can be affirmed to be the best, as different descriptors perform differently under different conditions. The Modified Harris and Stephen Corner Detector (MHCD) proved to be invariant to scale and zoom and excellent in robustness, repeatability, localization and efficiency, but variant to rotation. Also, the Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Maximally Stable Extremal Region (MSER) algorithms proved to be invariant to scale, zoom and rotation, and very good in terms of repeatability, localization and efficiency, though MSER proved not to be as robust as SIFT and SURF. The implication of these findings is that the choice of feature descriptor must be informed by the imaging conditions of the image registration analysts.
6

Tikhomirova, T. A., G. T. Fedorenko, K. M. Nazarenko, and E. S. Nazarenko. "LEFT: LOCAL EDGE FEATURES TRANSFORM." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 189 (March 2020): 11–18. http://dx.doi.org/10.14489/vkit.2020.03.pp.011-018.

Full text
Abstract:
To detect point correspondences between images or 3D scenes, local texture descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-Up Robust Features), BRIEF (Binary Robust Independent Elementary Features), and others are usually used. Formally they provide invariance to image rotation and scale, but these properties are achieved only approximately, due to the discrete number of evaluable orientations and scales stored in the descriptor. Feature points preferred by such descriptors usually do not belong to actual object boundaries in 3D scenes, and so are hard to use in epipolar relationships. At the same time, linking feature points to large-scale lines and edges is preferable for SLAM (Simultaneous Localization And Mapping) tasks, because their appearance is the most resistant to daily, seasonal and weather variations. In this paper, an original feature point descriptor for edge images, LEFT (Local Edge Features Transform), is proposed. LEFT accumulates the directions and contrasts of alternative straight segments tangent to lines and edges in the vicinity of feature points. Due to this structure, the mutual orientation of LEFT descriptors is evaluated and taken into account directly at the stage of their comparison. LEFT descriptors adapt to the shape of contours in the vicinity of feature points, so they can be used to analyze local and global geometric distortions of various natures. The article presents the results of comparative testing of LEFT against common texture-based descriptors and considers alternative ways of representing them in a computer vision system.
7

Zhang, Wanyuan, Tian Zhou, Chao Xu, and Meiqin Liu. "A SIFT-Like Feature Detector and Descriptor for Multibeam Sonar Imaging." Journal of Sensors 2021 (July 15, 2021): 1–14. http://dx.doi.org/10.1155/2021/8845814.

Full text
Abstract:
Multibeam imaging sonar has become an increasingly important tool in the field of underwater object detection and description. In recent years, the scale-invariant feature transform (SIFT) algorithm has been widely adopted to obtain stable features of objects in sonar images, but it does not perform well on multibeam sonar images due to its sensitivity to speckle noise. In this paper, we introduce MBS-SIFT, a SIFT-like feature detector and descriptor for multibeam sonar images. This algorithm contains a feature detector followed by a local feature descriptor. A new gradient definition robust to speckle noise is presented to detect extrema in scale space, and interest points are then filtered and located. It is also used to assign orientation and generate descriptors of interest points. Simulations and experiments demonstrate that the proposed method can capture features of underwater objects more accurately than existing approaches.
8

Gao, Junchai, and Zhen Sun. "An Improved ASIFT Image Feature Matching Algorithm Based on POS Information." Sensors 22, no. 20 (October 12, 2022): 7749. http://dx.doi.org/10.3390/s22207749.

Full text
Abstract:
The affine scale-invariant feature transform (ASIFT) algorithm is a feature extraction algorithm with affine and scale invariance, which is suitable for image feature matching using unmanned aerial vehicles (UAVs). However, the matching process suffers from problems such as low efficiency and mismatches. In order to improve matching efficiency, the proposed algorithm first simulates image distortion based on the position and orientation system (POS) information from real-time UAV measurements, to reduce the number of simulated images. Then, the scale-invariant feature transform (SIFT) algorithm is used for feature point detection, and the extracted feature points are combined with the binary robust invariant scalable keypoints (BRISK) descriptor to generate a binary feature descriptor, which is matched using the Hamming distance. Finally, in order to improve the matching accuracy of the UAV images, a false-match elimination algorithm based on random sample consensus (RANSAC) is proposed. Through four groups of experiments, the proposed algorithm is compared with SIFT and ASIFT. The results show that the algorithm optimizes the matching effect and improves the matching speed.
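The Hamming-distance matching of binary descriptors described in the abstract above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: real BRISK descriptors are 512-bit, and the toy 8-bit descriptors below are assumptions.

```python
# Sketch: brute-force matching of binary descriptors with the Hamming
# distance. Descriptors are packed into Python ints; the distance is the
# popcount of their XOR.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two packed binary descriptors."""
    return bin(a ^ b).count("1")

def match(query, train, max_dist=64):
    """For each query descriptor, keep its nearest train descriptor
    if the distance is within max_dist."""
    matches = []
    for qi, q in enumerate(query):
        ti, d = min(((i, hamming(q, t)) for i, t in enumerate(train)),
                    key=lambda x: x[1])
        if d <= max_dist:
            matches.append((qi, ti, d))
    return matches

# toy 8-bit "descriptors"
q = [0b10110010, 0b00001111]
t = [0b10110011, 0b11110000]
print(match(q, t, max_dist=2))  # → [(0, 0, 1)]
```

The XOR/popcount trick is the reason binary descriptors such as BRISK match much faster than floating-point descriptors such as SIFT.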
9

Barajas-García, Carolina, Selene Solorza-Calderón, and Everardo Gutiérrez-López. "Scale, translation and rotation invariant Wavelet Local Feature Descriptor." Applied Mathematics and Computation 363 (December 2019): 124594. http://dx.doi.org/10.1016/j.amc.2019.124594.

Full text
10

FERRAZ, CAROLINA TOLEDO, OSMANDO PEREIRA, MARCOS VERDINI ROSA, and ADILSON GONZAGA. "OBJECT RECOGNITION BASED ON BAG OF FEATURES AND A NEW LOCAL PATTERN DESCRIPTOR." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 08 (December 2014): 1455010. http://dx.doi.org/10.1142/s0218001414550106.

Full text
Abstract:
Bag of Features (BoF) has gained a lot of interest in computer vision. A visual codebook based on robust appearance descriptors extracted from local image patches is an effective means of texture analysis and scene classification. This paper presents a new method for local feature description based on gray-level difference mapping called the Mean Local Mapped Pattern (M-LMP). The proposed descriptor is robust to image scaling, rotation, illumination and partial viewpoint changes. The training set is composed of rotated and scaled images with changes in illumination and viewpoint; the test set is composed of rotated and scaled images. The proposed descriptor captures smaller differences between image pixels more effectively than similar descriptors. In our experiments, we implemented an object recognition system based on M-LMP and compared our results to the Center-Symmetric Local Binary Pattern (CS-LBP) and the Scale-Invariant Feature Transform (SIFT). The results for object classification were analyzed in a BoF methodology and show that our descriptor performs better than these two previously published methods.
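For reference, the CS-LBP baseline mentioned above builds, at each pixel, a 4-bit code from the four centre-symmetric pairs of its 8-neighbourhood. A minimal sketch under assumed conventions (clockwise neighbour order starting top-left, threshold `t`); the toy patch is illustrative:

```python
# Sketch of Center-Symmetric LBP (CS-LBP): each bit compares a pixel
# with its centre-symmetric opposite in the 8-neighbourhood.

def cs_lbp(patch, y, x, t=0.0):
    """4-bit CS-LBP code at (y, x) of a 2D intensity list."""
    n = [patch[y-1][x-1], patch[y-1][x], patch[y-1][x+1], patch[y][x+1],
         patch[y+1][x+1], patch[y+1][x], patch[y+1][x-1], patch[y][x-1]]
    code = 0
    for i in range(4):                       # 4 centre-symmetric pairs
        code |= (1 << i) if n[i] - n[i + 4] > t else 0
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [0, 0, 0]]
print(cs_lbp(patch, 1, 1))  # → 7
```

Because only pair differences are encoded, CS-LBP halves the code length of plain LBP (4 bits per pixel instead of 8), which is why it is a popular compact baseline.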
11

Liu, Yazhou, Pongsak Lasang, Mel Siegel, and Quansen Sun. "Multi-sparse descriptor: A scale invariant feature for pedestrian detection." Neurocomputing 184 (April 2016): 55–65. http://dx.doi.org/10.1016/j.neucom.2015.07.143.

Full text
12

Qu, Xiujie, Fei Zhao, Mengzhe Zhou, and Haili Huo. "A Novel Fast and Robust Binary Affine Invariant Descriptor for Image Matching." Mathematical Problems in Engineering 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/129230.

Full text
Abstract:
As current binary descriptors suffer from high computational complexity, lack of affine invariance, and high false matching rates under viewpoint changes, a new binary affine invariant descriptor, called BAND, is proposed. Unlike other descriptors, BAND has an irregular pattern, based on the local affine invariant region surrounding a feature point, and five orientations, obtained efficiently by LBP. Ultimately, a 256-bit binary string is computed by a simple random sampling pattern. Experimental results demonstrate that BAND matches well under rotation, image zooming, noise, lighting changes, and small-scale perspective transformation. It has better matching performance than current mainstream descriptors, while costing less time.
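The idea of a binary string computed from a sampling pattern can be illustrated with a BRIEF-like sketch. The uniform random pattern below is an assumption for illustration only; BAND's actual pattern is irregular and affine-adapted, which is not shown here.

```python
# Sketch: a bit string built from pairwise intensity comparisons at
# sampled locations around a keypoint (BRIEF-style).
import random

def binary_descriptor(patch, n_bits=256, seed=0):
    """patch: 2D list of intensities; returns an int encoding n_bits tests."""
    rng = random.Random(seed)      # fixed seed → same pattern for every patch
    h, w = len(patch), len(patch[0])
    desc = 0
    for _ in range(n_bits):
        y1, x1 = rng.randrange(h), rng.randrange(w)
        y2, x2 = rng.randrange(h), rng.randrange(w)
        desc = (desc << 1) | (1 if patch[y1][x1] < patch[y2][x2] else 0)
    return desc

patch = [[(x + y) % 7 for x in range(16)] for y in range(16)]
d = binary_descriptor(patch)
print(format(d, "x"))
```

The fixed seed is essential: every patch must be sampled with the same pattern, otherwise descriptors are not comparable.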
13

Wang, Sheng Ke, Lili Liu, and Xiaowei Xu. "Vehicle Logo Recognition Based on Local Feature Descriptor." Applied Mechanics and Materials 263-266 (December 2012): 2418–21. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2418.

Full text
Abstract:
In this paper, we present a comparison of the scale-invariant feature transform (SIFT)-based feature-matching scheme and the speeded-up robust features (SURF)-based feature-matching scheme in the field of vehicle logo recognition. We captured a set of logo images that vary in illumination, blur, scale, and rotation. Training sets for six kinds of vehicle logo were formed using 25 images each on average, and the remaining images were used to form the testing set. The logo recognition system that we programmed achieves a high recognition rate on query images of the same kind when different parameters are adjusted.
14

Thirthe Gowda, M. T., and J. Chandrika. "Optimized Scale-Invariant Hog Descriptors for Tobacco Plant Detection." WSEAS TRANSACTIONS ON ENVIRONMENT AND DEVELOPMENT 17 (July 23, 2021): 787–94. http://dx.doi.org/10.37394/232015.2021.17.74.

Full text
Abstract:
The histogram of oriented gradients (HOG) descriptor is employed in this research work to identify plants in surveillance videos. In some scenarios, discrepancies in HOG descriptors caused by scale and illumination variation are a major hindrance. This research work introduces a unique SIO-HOG descriptor that is approximately scale-invariant. With the help of footage captured during the tobacco plant identification process, the system can integrate adaptive bin selection as well as sample resizing. Further, this research work explores the impact of a PCA transform based on feature selection on overall recognition performance; a finite scale range and adaptive orientation binning in non-overlapping descriptors are both essential for a high detection rate. Computing the HOG feature vector over a complete search window is computationally intensive; however, suitable classification frameworks can be developed by maintaining a precise range of attributes with finite Euclidean distance. Experimental results prove that the proposed approach for distinguishing tobacco from other weeds achieves an improved detection rate. Finally, the robustness of the complete plant detection system was evaluated on a video sequence with different non-linearities that are quite common in a real-world environment, and its performance metrics were evaluated.
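The orientation binning at the core of any HOG-style descriptor (including the SIO-HOG variant described above) can be sketched as follows; the cell size and 9-bin unsigned-orientation layout are common illustrative choices, not the paper's exact configuration:

```python
# Sketch of HOG-style orientation binning: each pixel's gradient
# magnitude votes into the histogram bin of its (unsigned) orientation.
import math

def hog_cell(gx, gy, n_bins=9):
    """Orientation histogram for one cell of gradient fields gx, gy."""
    hist = [0.0] * n_bins
    for row_x, row_y in zip(gx, gy):
        for dx, dy in zip(row_x, row_y):
            mag = math.hypot(dx, dy)
            ang = math.degrees(math.atan2(dy, dx)) % 180.0  # unsigned: 0..180
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist

# uniform horizontal gradient → all mass in the 0° bin
gx = [[1.0] * 8 for _ in range(8)]
gy = [[0.0] * 8 for _ in range(8)]
print(hog_cell(gx, gy))
```

Full HOG then normalizes these per-cell histograms over overlapping blocks and concatenates them into the final feature vector.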
15

Sicong, Yue, Wang Qing, and Zhao Rongchun. "Robust Wide Baseline Point Matching Based on Scale Invariant Feature Descriptor." Chinese Journal of Aeronautics 22, no. 1 (February 2009): 70–74. http://dx.doi.org/10.1016/s1000-9361(08)60070-9.

Full text
16

Cui, Song, Miaozhong Xu, Ailong Ma, and Yanfei Zhong. "Modality-Free Feature Detector and Descriptor for Multimodal Remote Sensing Image Registration." Remote Sensing 12, no. 18 (September 10, 2020): 2937. http://dx.doi.org/10.3390/rs12182937.

Full text
Abstract:
The nonlinear radiation distortions (NRD) among multimodal remote sensing images bring enormous challenges to image registration. Traditional feature-based registration methods commonly use image intensity or gradient information to detect and describe features, which are sensitive to NRD. However, the nonlinear mapping of the corresponding features of the multimodal images often results in failure of the feature matching, as well as of the image registration. In this paper, a modality-free multimodal remote sensing image registration method (SRIFT) is proposed for the registration of multimodal remote sensing images, which is invariant to scale, radiation, and rotation. In SRIFT, a nonlinear diffusion scale (NDS) space is first established to construct a multi-scale space. A local orientation and scale phase congruency (LOSPC) algorithm is then used to map the features of images with NRD into a one-to-one correspondence, to obtain sufficiently stable key points. In the feature description stage, a rotation-invariant coordinate (RIC) system is adopted to build the descriptor, without requiring estimation of a main direction. The experiments undertaken in this study included one set of simulated data experiments and nine groups of experiments with different types of real multimodal remote sensing images with rotation and scale differences (including synthetic aperture radar (SAR)/optical, digital surface model (DSM)/optical, light detection and ranging (LiDAR) intensity/optical, near-infrared (NIR)/optical, short-wave infrared (SWIR)/optical, classification/optical, and map/optical image pairs), to test the proposed algorithm from both quantitative and qualitative aspects. The experimental results showed that the proposed method has strong robustness to NRD, being invariant to scale, radiation, and rotation, and the achieved registration precision was better than that of the state-of-the-art methods.
17

Dong, Chuan-Zhi, and F. Necati Catbas. "A non-target structural displacement measurement method using advanced feature matching strategy." Advances in Structural Engineering 22, no. 16 (June 11, 2019): 3461–72. http://dx.doi.org/10.1177/1369433219856171.

Full text
Abstract:
Most existing vision-based displacement measurement methods require manual speckles or targets to improve measurement performance in non-stationary imagery environments. To minimize the use of manual speckles and targets, feature points regarded as virtual markers can be utilized for non-target measurement. In this study, an advanced feature matching strategy is presented, which replaces handcrafted descriptors with learned descriptors, the Visual Geometry Group (VGG) descriptors from the University of Oxford, to achieve better performance. The feasibility and performance of the proposed method are verified by comparative studies with a laboratory experiment on a two-span bridge model and then with a field application on a railway bridge. The proposed approach of integrated use of the Scale Invariant Feature Transform and the VGG descriptors improved measurement accuracy by about 24% compared with the commonly used feature matching-based displacement measurement method using the SIFT feature and descriptor.
18

Yu, Meng, Dong Zhang, Dah-Jye Lee, and Alok Desai. "SR-SYBA: A Scale and Rotation Invariant Synthetic Basis Feature Descriptor with Low Memory Usage." Electronics 9, no. 5 (May 15, 2020): 810. http://dx.doi.org/10.3390/electronics9050810.

Full text
Abstract:
Feature description plays an important role in image matching and is widely used in a variety of computer vision applications. As an efficient synthetic basis feature descriptor, SYnthetic BAsis (SYBA) requires low computational complexity and provides accurate matching results. However, the number of matched feature points generated by SYBA suffers under large image scaling and rotation variations. In this paper, we improve SYBA's scale and rotation invariance by adding an efficient pre-processing operation. The proposed algorithm, SR-SYBA, represents the scale of the feature region with the location of maximum gradient response along the radial direction in a Log-polar coordinate system. Based on this scale representation, it normalizes all feature regions to the same reference scale to provide scale invariance. The orientation of the feature region is represented as the orientation of the vector from the center of the feature region to its intensity centroid. Based on this orientation representation, all feature regions are rotated to the same reference orientation to provide rotation invariance. The original SYBA descriptor is then applied to the scale- and orientation-normalized feature regions for description and matching. Experimental results show that SR-SYBA greatly improves SYBA for image matching applications with scaling and rotation variations. SR-SYBA obtains comparable or better matching rates than the mainstream algorithms while still maintaining its advantages of much lower storage and simpler computations. SR-SYBA is applied to a vision-based measurement application to demonstrate its performance for image matching.
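The intensity-centroid orientation assignment described in this abstract (the same idea ORB uses) can be sketched directly from image moments; the toy 3x3 patch below is an illustrative assumption:

```python
# Sketch: orientation of a feature region as the angle of the vector
# from the patch centre to its intensity centroid, via raw moments.
import math

def centroid_orientation(patch):
    """patch: square 2D list of intensities; returns orientation in degrees."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            m00 += v
            m10 += (x - cx) * v      # first-order moments about the centre
            m01 += (y - cy) * v
    return math.degrees(math.atan2(m01 / m00, m10 / m00))

# mass concentrated to the right of centre → orientation ≈ 0°
patch = [[0, 0, 1], [0, 0, 2], [0, 0, 1]]
print(centroid_orientation(patch))  # → 0.0
```

Rotating every feature region so this angle maps to a common reference is what gives descriptors built on the normalized patch their rotation invariance.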
19

KITAGAWA, Masamichi, and Ikuko SHIMIZU. "Memory Saving Feature Descriptor Using Scale and Rotation Invariant Patches around the Feature Points." IEICE Transactions on Information and Systems E102.D, no. 5 (May 1, 2019): 1106–10. http://dx.doi.org/10.1587/transinf.2018edl8176.

Full text
20

GENG, LICHUAN, SONGZHI SU, DONGLIN CAO, and SHAOZI LI. "PERSPECTIVE-INVARIANT IMAGE MATCHING FRAMEWORK WITH BINARY FEATURE DESCRIPTOR AND APSO." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 08 (December 2014): 1455011. http://dx.doi.org/10.1142/s0218001414550118.

Full text
Abstract:
A novel perspective-invariant image matching framework, denoted Perspective-Invariant Binary Robust Independent Elementary Features (PBRIEF), is proposed in this paper. First, we use a homographic transformation to simulate the distortion between two corresponding patches around the feature points. Then, binary descriptors are constructed by comparing the intensities of sample points surrounding the feature location. We transform the locations of the sample points with the simulated homographic matrices, ensuring that the compared intensities come from genuinely corresponding pixels in the two image patches. Since the exact perspective transform matrix is unknown, an iterative procedure based on an Adaptive Particle Swarm Optimization (APSO) algorithm is proposed to estimate the real transformation angles. Experimental results obtained on five different datasets show that PBRIEF significantly outperforms existing methods on images with large viewpoint differences. Moreover, the efficiency of our framework is also improved compared with the Affine Scale Invariant Feature Transform (ASIFT).
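The core PBRIEF step as the abstract describes it, mapping a binary test's sample points through a homography before comparing intensities, reduces to projective point transformation. A sketch with a hypothetical matrix `H` (a pure translation, chosen so the result is easy to check):

```python
# Sketch: map 2D sample points through a 3x3 homography H
# (homogeneous coordinates, then divide by the third component).

def apply_homography(H, pts):
    """H: 3x3 nested list; pts: iterable of (x, y); returns mapped points."""
    out = []
    for x, y in pts:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))   # perspective division
    return out

# pure translation by (5, -2) expressed as a trivial homography
H = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]
print(apply_homography(H, [(0, 0), (1, 1)]))  # → [(5.0, -2.0), (6.0, -1.0)]
```

With a general `H` (non-zero bottom row entries), the perspective division is what warps the sampling pattern the way a viewpoint change warps the patch.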
21

Zhang, Hua-Zhen, Dong-Won Kim, Tae-Koo Kang, and Myo-Taeg Lim. "MIFT: A Moment-Based Local Feature Extraction Algorithm." Applied Sciences 9, no. 7 (April 11, 2019): 1503. http://dx.doi.org/10.3390/app9071503.

Full text
Abstract:
We propose a local feature descriptor based on moments. Although conventional scale-invariant feature transform (SIFT)-based algorithms generally use the difference of Gaussians (DoG) for feature extraction, they remain sensitive to more complicated deformations. To solve this problem, we propose MIFT, an invariant feature transform algorithm based on the modified discrete Gaussian-Hermite moment (MDGHM). Taking advantage of the MDGHM's high performance in representing image information, MIFT uses an MDGHM-based pyramid for feature extraction, which can extract more distinctive extrema than the DoG, and MDGHM-based magnitude and orientation for feature description. We compared the performance of the proposed MIFT method with current best-practice methods for six image deformation types, and confirmed that MIFT's matching accuracy is superior to that of other SIFT-based methods.
22

Anbarasu, B., and G. Anitha. "Indoor Scene recognition for Micro Aerial Vehicles Navigation using Enhanced SIFT-ScSPM Descriptors." Journal of Navigation 73, no. 1 (July 5, 2019): 37–55. http://dx.doi.org/10.1017/s0373463319000420.

Full text
Abstract:
In this paper, a new scene recognition visual descriptor called the Enhanced Scale Invariant Feature Transform-based Sparse coding Spatial Pyramid Matching (Enhanced SIFT-ScSPM) descriptor is proposed by combining a Bag of Words (BOW)-based visual descriptor (SIFT-ScSPM) and Gist-based descriptors (Enhanced Gist and Enhanced multichannel Gist (Enhanced mGist)). Indoor scene classification is carried out by multi-class linear and non-linear Support Vector Machine (SVM) classifiers. The feature extraction methodology and a critical review of several visual descriptors used for indoor scene recognition are discussed from an experimental perspective. An empirical study is conducted on the Massachusetts Institute of Technology (MIT) 67 indoor scene classification data set to assess the classification accuracy of state-of-the-art visual descriptors and the proposed Enhanced mGist, Speeded Up Robust Features-Spatial Pyramid Matching (SURF-SPM) and Enhanced SIFT-ScSPM visual descriptors. Experimental results show that the proposed Enhanced SIFT-ScSPM visual descriptor performs better, with higher classification rate, precision, recall and area under the Receiver Operating Characteristic (ROC) curve, than the state-of-the-art descriptors and the proposed Enhanced mGist and SURF-SPM visual descriptors.
23

Wang, Yan Wei, and Hui Li Yu. "Image Registration Method Based on PCA-SIFT Feature Detection." Advanced Materials Research 712-715 (June 2013): 2395–98. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2395.

Full text
Abstract:
SIFT (Scale Invariant Feature Transform), which handles images with varying orientation and zoom well, is widely used in image registration, but the algorithm is complex and its processing time long. We therefore used PCA-SIFT (Principal Components Analysis SIFT) for image registration. Compared with the SIFT descriptor, PCA-SIFT reduces the dimensionality of the SIFT feature, enhances matching accuracy and reduces the elapsed time. The mutual information method is then used in this paper to estimate the best points. Experimental results show that the PCA-SIFT algorithm is simpler, robust and reliable.
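The dimensionality reduction that PCA-SIFT applies to SIFT descriptors (e.g. 128 down to 36 dimensions) can be sketched with a standard PCA projection. The random "descriptors" below are placeholders, and the basis here is learned from the input matrix itself; PCA-SIFT actually learns its projection offline from a large patch set.

```python
# Sketch: project d-dimensional descriptors onto their top-k principal
# components via the SVD of the centred data matrix.
import numpy as np

def pca_project(desc, k):
    """desc: (n, d) descriptor matrix; returns its (n, k) PCA projection."""
    centered = desc - desc.mean(axis=0)
    # right singular vectors = principal axes, sorted by variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(0)
sift_like = rng.random((100, 128))   # 100 placeholder 128-D "SIFT" descriptors
reduced = pca_project(sift_like, 36)
print(reduced.shape)  # → (100, 36)
```

Shorter descriptors shrink both the memory footprint and the cost of every Euclidean distance evaluated during matching, which is the speed-up the abstract reports.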
24

Zhang, Jiaming, Xuejuan Hu, Tan Zhang, Shiqian Liu, Kai Hu, Ting He, Xiaokun Yang, et al. "Binary Neighborhood Coordinate Descriptor for Circuit Board Defect Detection." Electronics 12, no. 6 (March 17, 2023): 1435. http://dx.doi.org/10.3390/electronics12061435.

Full text
Abstract:
Due to the periodicity of circuit boards, keypoint-based registration algorithms are less robust in circuit board detection and are prone to misregistration. In this paper, the binary neighborhood coordinate descriptor (BNCD) is proposed and applied to circuit board image registration. The BNCD consists of three parts: a neighborhood description, a coordinate description, and a brightness description. The neighborhood description contains the grayscale information of the neighborhood and is the main part of the BNCD. The coordinate description introduces the actual position of the keypoints in the image, which solves the problem of inter-period matching of keypoints. The brightness description introduces the concept of bright and dark points, which improves the distinguishability of the BNCD and reduces the computation required for matching. Experimental results show that in circuit board image registration, the matching precision and recall of the BNCD are better than those of classic algorithms such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), and computing the descriptors takes less time.
25

Holliday, Andrew, et Gregory Dudek. « Scale-invariant localization using quasi-semantic object landmarks ». Autonomous Robots 45, no 3 (25 février 2021) : 407–20. http://dx.doi.org/10.1007/s10514-021-09973-w.

Texte intégral
Résumé :
This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box b defining an object, a descriptor q of that object produced by a Convolutional Neural Network, and a set of classical point features within b. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state-of-the-art. They allow localization under scale change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Dong, Yunyun, Weili Jiao, Tengfei Long, Lanfa Liu, Guojin He, Chengjuan Gong et Yantao Guo. « Local Deep Descriptor for Remote Sensing Image Feature Matching ». Remote Sensing 11, no 4 (19 février 2019) : 430. http://dx.doi.org/10.3390/rs11040430.

Texte intégral
Résumé :
Feature matching via local descriptors is one of the most fundamental problems in many computer vision tasks, as well as in the remote sensing image processing community. For example, in feature-based remote sensing image registration, feature matching is a vital process that determines the quality of the transform model, and during matching the quality of the feature descriptor directly determines the result. At present, the most commonly used descriptors are hand-crafted from the designer's expertise or intuition. However, it is hard for them to cover all the different cases, especially for remote sensing images with nonlinear grayscale deformation. Recently, deep learning has shown explosive growth and improved the performance of tasks in various fields, especially in the computer vision community. Here, we created remote sensing image training patch samples, named Invar-Dataset, in a novel and automatic way, then trained a deep convolutional neural network, named DescNet, to generate a robust feature descriptor for feature matching. A special experiment illustrates that our training dataset is more helpful for training a network to generate a good feature descriptor. A qualitative experiment then shows that the feature descriptor vector learned by DescNet can successfully register remote sensing images with large grayscale differences. A quantitative experiment further illustrates that the feature vector generated by DescNet acquires more matched points than those generated by the hand-crafted Scale Invariant Feature Transform (SIFT) descriptor and other networks; on average, the matched points acquired by DescNet were almost twice those acquired by other methods. Finally, we analyze the advantages of our training dataset Invar-Dataset and of DescNet, and discuss possible developments in training deep descriptor networks.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Li, Jing, et Tao Yang. « Efficient and Robust Feature Matching via Local Descriptor Generalized Hough Transform ». Applied Mechanics and Materials 373-375 (août 2013) : 536–40. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.536.

Texte intégral
Résumé :
Robust and efficient indistinctive feature matching and outlier removal is an essential problem in many computer vision applications. In this paper we present a simple and fast algorithm named LDGTH (Local Descriptor Generalized Hough Transform) to handle this problem. The main characteristics of the proposed method are: (1) a novel local descriptor generalized Hough transform framework is presented, in which the local geometric characteristics of invariant feature descriptors are fused together as a global constraint for feature correspondence verification; (2) unlike the standard generalized Hough transform, our approach greatly reduces the computational and storage requirements of the parameter space by taking advantage of the invariant feature correspondences; (3) the proposed algorithm can be seamlessly embedded into existing image matching frameworks, significantly improving matching performance in both speed and robustness under challenging conditions. In the experiments we use both synthetic image data and real-world data with high outlier ratios and severe changes in viewpoint, scale, illumination, image blur, compression and noise to evaluate the proposed method, and the results demonstrate that our approach achieves faster and better matching performance compared to traditional algorithms.
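The LDGTH framework itself is the paper's contribution, but the underlying voting idea can be illustrated with a toy sketch: each putative match votes with its rotation difference and log scale ratio, and only matches falling into the dominant bin survive as geometrically consistent. The two-parameter vote space and the bin widths here are assumptions for illustration, not the authors' design:

```python
import numpy as np

def hough_filter_matches(kp_a, kp_b, angle_bins=12, scale_bins=8):
    """Keep matches whose (rotation difference, log scale ratio) falls in
    the dominant Hough bin. kp_a, kp_b: (N, 2) arrays of
    (orientation_deg, scale) for the two keypoints of each putative match."""
    d_angle = (kp_b[:, 0] - kp_a[:, 0]) % 360.0
    d_scale = np.log2(kp_b[:, 1] / kp_a[:, 1])
    a_idx = (d_angle // (360.0 / angle_bins)).astype(int) % angle_bins
    s_idx = np.clip(((d_scale + 2.0) / 4.0 * scale_bins).astype(int),
                    0, scale_bins - 1)
    hist = np.zeros((angle_bins, scale_bins), dtype=int)
    np.add.at(hist, (a_idx, s_idx), 1)            # accumulate votes
    peak = np.unravel_index(hist.argmax(), hist.shape)
    return (a_idx == peak[0]) & (s_idx == peak[1])

# 20 consistent matches (rotated 45 deg, scaled 2x) plus 5 random outliers.
rng = np.random.default_rng(1)
a = np.column_stack([rng.uniform(0, 360, 25), rng.uniform(0.5, 2.0, 25)])
b = a.copy()
b[:20, 0] = (a[:20, 0] + 45) % 360
b[:20, 1] = a[:20, 1] * 2
b[20:, 0] = rng.uniform(0, 360, 5)
b[20:, 1] = a[20:, 1] * rng.uniform(0.25, 4.0, 5)
keep = hough_filter_matches(a, b)
```

A full generalized Hough transform would also vote over translation; restricting the vote space is what keeps memory low, which is the trade-off the paper exploits.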
Styles APA, Harvard, Vancouver, ISO, etc.
28

Hagiwara, Hayato, Yasufumi Touma, Kenichi Asami et Mochimitsu Komori. « FPGA-Based Stereo Vision System Using Gradient Feature Correspondence ». Journal of Robotics and Mechatronics 27, no 6 (18 décembre 2015) : 681–90. http://dx.doi.org/10.20965/jrm.2015.p0681.

Texte intégral
Résumé :
[Figure: mobile robot with a stereo vision system] This paper describes an autonomous mobile robot stereo vision system that uses gradient feature correspondence and local image feature computation on a field programmable gate array (FPGA). Among the interest point detectors and descriptors studied for mobile robot navigation are the Harris operator and the scale-invariant feature transform (SIFT). Most of these require heavy computation, however, and using them may overburden some computers. Our purpose here is to present an interest point detector and a descriptor suitable for FPGA implementation. Results show that a detector using gradient variance inspection performs faster than SIFT or speeded-up robust features (SURF), and is more robust against illumination changes than any other method compared in this study. A descriptor with a hierarchical gradient structure has a simpler algorithm than the SIFT and SURF descriptors, and its stereo matching achieves better performance than SIFT or SURF.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Guo, Sheng, Weilin Huang et Yu Qiao. « Improving scale invariant feature transform with local color contrastive descriptor for image classification ». Journal of Electronic Imaging 26, no 1 (15 février 2017) : 013015. http://dx.doi.org/10.1117/1.jei.26.1.013015.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Daoud, Luka, Muhammad Kamran Latif, H. S. Jacinto et Nader Rafla. « A fully pipelined FPGA accelerator for scale invariant feature transform keypoint descriptor matching ». Microprocessors and Microsystems 72 (février 2020) : 102919. http://dx.doi.org/10.1016/j.micpro.2019.102919.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

S. Sathiya, Devi. « Texture classification with modified rotation invariant local binary pattern and gradient boosting ». International Journal of Knowledge-based and Intelligent Engineering Systems 26, no 2 (29 septembre 2022) : 125–36. http://dx.doi.org/10.3233/kes220012.

Texte intégral
Résumé :
Since texture is a prominent low-level feature of an image, most image processing and computer vision applications rely on it for efficient extraction, retrieval, visualization and classification of images. Hence, texture analysis methods mainly concentrate on efficient feature extraction and representation of the image. The images captured and analyzed in many applications are not at the same or similar scale, orientation and illumination, and texture can be regular, stochastic, periodic, homogeneous or inhomogeneous, and directional in nature. To address these issues, recent texture analysis methods have focused on efficient, invariant feature extraction and representation with reduced dimension. Hence this paper proposes an invariant texture descriptor, Locality preserving Rotation Invariant Modified Directional Local Binary Pattern (LRIMDLBP), based on LBP. The classical LBP descriptor is widely used in most texture analysis applications due to its simplicity and robustness to illumination changes. However, it does not efficiently capture discriminative texture information, because it uses only sign information and ignores the magnitude of the neighborhood, and it suffers from high dimensionality. Many variants have been proposed to improve the performance of LBP; though most of these are either geometrically or directionally invariant, they fail to address spatial locality and contrast invariance. To address these issues, the proposed LRIMDLBP incorporates spatial locality, contrast and direction information in a rotation-invariant texture descriptor with reduced dimension. The proposed LRIMDLBP consists of 5 phases: (i) reference point identification, (ii) magnitude calculation, (iii) binary label computation based on a threshold, (iv) pattern identification in the dominant direction and (v) LRIMDLBP code computation.
Locality and rotation invariance are achieved by identifying and using a reference point in each local neighborhood. The reference point is the dominant pixel with the largest magnitude in the neighborhood, excluding the center pixel; spatial locality and rotation invariance are achieved by assigning the LBP weights dynamically based on this reference point. The proposed method also preserves the direction information of the texture by comparing the magnitude of the pixel in the four dominant directions: horizontal, vertical, diagonal and anti-diagonal. Finally, the proposed invariant LRIMDLBP descriptor computes a histogram based on the decimal pattern values, yielding a texture feature with reduced dimension compared to other LBP variants. The performance of the proposed descriptor is evaluated on four large, well-known benchmark texture datasets, namely (i) CUReT, (ii) Outex, (iii) KTH-TIPS and (iv) UIUC, against three classifiers: (i) K-Nearest Neighbor (K-NN), (ii) Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel and (iii) Gradient Boosting Classifier (GBC). Extensive experimental results show that the ensemble-based GBC yields superior classification accuracies of 99.38%, 99.43%, 98.67% and 98.82% on CUReT, Outex, KTH-TIPS and UIUC respectively, compared with the other two classifiers, and also improves generalization ability. The proposed LRIMDLBP descriptor achieves approximately 15% higher classification accuracy than traditional LBP, and 1% to 2.5% higher accuracy than other state-of-the-art LBP variants.
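For readers unfamiliar with the LBP baseline this paper builds on, a minimal rotation-invariant LBP (invariance obtained by taking the minimum over all circular rotations of the 8-bit pattern; this is the classic baseline, not the proposed LRIMDLBP) can be sketched as:

```python
import numpy as np

def lbp_rotation_invariant(img):
    """Classic 8-neighbour LBP with rotation-invariant mapping: each pixel's
    binary pattern is replaced by the minimum value over its 8 circular
    rotations. Returns the code map and the normalised 256-bin histogram."""
    # 8 neighbours in circular order around the centre pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    bits = np.stack([(img[1+dy:h-1+dy, 1+dx:w-1+dx] >= center)
                     for dy, dx in offsets]).astype(np.uint16)
    best = np.full(center.shape, 255, dtype=np.uint16)
    for r in range(8):                         # min over circular rotations
        rolled = np.roll(bits, r, axis=0)
        code = sum(rolled[i] << i for i in range(8))
        best = np.minimum(best, code)
    hist = np.bincount(best.ravel().astype(int), minlength=256)
    return best, hist / hist.sum()

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (16, 16)).astype(np.uint8)
codes, hist = lbp_rotation_invariant(img)
codes_rot, hist_rot = lbp_rotation_invariant(np.rot90(img))
```

Rotating the image by 90 degrees circularly shifts every pixel's bit pattern, so the min-over-rotations histogram is unchanged; LRIMDLBP replaces this brute-force mapping with its reference-point weighting.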
Styles APA, Harvard, Vancouver, ISO, etc.
32

Gafour, Yacine, et Djamel Berrabah. « New Approach to Improve the Classification Process of Multi-Class Objects ». International Journal of Organizational and Collective Intelligence 10, no 2 (avril 2020) : 1–19. http://dx.doi.org/10.4018/ijoci.2020040101.

Texte intégral
Résumé :
In recent years, several descriptors have been proposed for many image classification applications. Accelerated-KAZE (A-KAZE) is considered one of the descriptors that has shown high performance for feature extraction. A-KAZE uses a binary descriptor called modified-local difference binary, which is very efficient and invariant to changes in rotation and scale. This representation does not take into account spatial information between objects in the image, which can reduce image classification performance. This article presents a new approach to improve the performance of the A-KAZE descriptor for image classification. The authors first establish the connection between the A-KAZE descriptor and the bag-of-features model. Then Spatial Pyramid Matching (SPM) is adopted, exploiting the A-KAZE descriptor to reinforce its robustness by introducing spatial information. The results of experiments on several datasets show that the A-KAZE descriptor with SPM gives very satisfactory results compared with other existing state-of-the-art methods.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Ding, Can, Chang Wen Qu et Feng Su. « An Improved SIFT Matching Algorithm ». Applied Mechanics and Materials 239-240 (décembre 2012) : 1232–37. http://dx.doi.org/10.4028/www.scientific.net/amm.239-240.1232.

Texte intégral
Résumé :
The high dimensionality and complexity of the Scale Invariant Feature Transform (SIFT) feature descriptor not only occupy memory space but also slow down feature matching. We adopt a statistical method over each feature point's neighborhood gradients: a local statistics area is constructed from 8 concentric square rings centered on the feature point, the gradients of these pixels are computed, the accumulated gradient values in 8 directions are collected, sorted in descending order, and finally normalized. The new feature descriptor reduces the feature dimension from 128 to 64; the proposed method improves matching speed while maintaining matching precision.
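A hypothetical reading of the 64-D construction described above (the patch size, ring radii and bin layout here are guesses for illustration, not the authors' exact design):

```python
import numpy as np

def ring_gradient_descriptor(patch):
    """Illustrative 64-D descriptor: around the centre pixel, 8 concentric
    square rings are taken; in each ring the gradient magnitudes are
    accumulated into 8 orientation bins, each ring's 8-bin histogram is
    sorted in descending order, and the concatenation is normalised.
    `patch` is a 17x17 grayscale patch centred on the feature point."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = (np.degrees(np.arctan2(gy, gx)) % 360) // 45   # orientation bin 0..7
    cy = cx = patch.shape[0] // 2
    yy, xx = np.mgrid[:patch.shape[0], :patch.shape[1]]
    ring = np.maximum(np.abs(yy - cy), np.abs(xx - cx))  # Chebyshev distance
    desc = []
    for r in range(1, 9):                                # rings 1..8
        sel = ring == r
        hist = np.bincount(ori[sel].astype(int), weights=mag[sel], minlength=8)
        desc.extend(sorted(hist, reverse=True))          # descending sort
    desc = np.asarray(desc)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

rng = np.random.default_rng(0)
patch = rng.random((17, 17))
d = ring_gradient_descriptor(patch)
```

Sorting each ring's bins in descending order is what removes the dependence on absolute orientation, at the cost of some discriminative power.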
Styles APA, Harvard, Vancouver, ISO, etc.
34

Li, Ruoxiang, Dianxi Shi, Yongjun Zhang, Ruihao Li et Mingkun Wang. « Asynchronous event feature generation and tracking based on gradient descriptor for event cameras ». International Journal of Advanced Robotic Systems 18, no 4 (1 juillet 2021) : 172988142110270. http://dx.doi.org/10.1177/17298814211027028.

Texte intégral
Résumé :
Recently, the event camera has become a popular and promising vision sensor in the research of simultaneous localization and mapping and computer vision owing to its advantages: low latency, high dynamic range, and high temporal resolution. As a basic part of the feature-based SLAM system, feature tracking with event cameras is still an open question. In this article, we present a novel asynchronous event feature generation and tracking algorithm operating directly on event-streams to fully utilize the natural asynchronism of event cameras. The proposed algorithm consists of an event-corner detection unit, a descriptor construction unit, and an event feature tracking unit. The event-corner detection unit provides a fast and asynchronous corner detector to extract event-corners from event-streams. For the descriptor construction unit, we propose a novel asynchronous gradient descriptor inspired by the scale-invariant feature transform descriptor, which helps to achieve quantitative measurement of similarity between event feature pairs. The construction of the gradient descriptor can be decomposed into three stages: speed-invariant time surface maintenance and extraction, principal orientation calculation, and descriptor generation. The event feature tracking unit combines the constructed gradient descriptor and an event feature matching method to achieve asynchronous feature tracking. We implement the proposed algorithm in C++ and evaluate it on a public event dataset. The experimental results show that our proposed method improves tracking accuracy and real-time performance when compared with the state-of-the-art asynchronous event-corner tracker, without compromising feature tracking lifetime.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Kuo, Chien-Hung, Erh-Hsu Huang, Chiang-Heng Chien et Chen-Chien Hsu. « FPGA Design of Enhanced Scale-Invariant Feature Transform with Finite-Area Parallel Feature Matching for Stereo Vision ». Electronics 10, no 14 (8 juillet 2021) : 1632. http://dx.doi.org/10.3390/electronics10141632.

Texte intégral
Résumé :
In this paper, we propose an FPGA-based enhanced-SIFT with feature matching for stereo vision. Gaussian blur and difference of Gaussian pyramids are realized in parallel to accelerate the processing time required for multiple convolutions. As for the feature descriptor, a simple triangular identification approach with a look-up table is proposed to efficiently determine the direction and gradient of the feature points. Thus, the dimension of the feature descriptor in this paper is reduced by half compared to conventional approaches. As far as feature detection is concerned, the condition for high-contrast detection is simplified by moderately changing a threshold value, which also reduces the hardware required in realization. The proposed enhanced-SIFT not only accelerates the operational speed but also reduces the hardware cost. The experimental results show that the proposed enhanced-SIFT reaches a frame rate of 205 fps for 640 × 480 images. Integrating two enhanced-SIFT units, a finite-area parallel checking scheme is also proposed, without the aid of external memory, to improve the efficiency of feature matching. The resulting frame rate of the proposed stereo vision matching can be as high as 181 fps with good matching accuracy, as demonstrated in the experimental results.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Hasheminasab, M., H. Ebadi et A. Sedaghat. « AN INTEGRATED RANSAC AND GRAPH BASED MISMATCH ELIMINATION APPROACH FOR WIDE-BASELINE IMAGE MATCHING ». ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1-W5 (11 décembre 2015) : 297–300. http://dx.doi.org/10.5194/isprsarchives-xl-1-w5-297-2015.

Texte intégral
Résumé :
In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, whereas wide-baseline image matching remains difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use feature descriptors to establish correspondence between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale, and remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint based on RANSAC (random sample consensus) is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in results selected with epipolar geometry and RANSAC alone. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, which has difficulties when mismatched points are surrounded by the same local neighbor structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, the RANSAC algorithm is applied to reduce the outliers. Finally, to remove the remaining mismatches, GTM is implemented based on the adjacent K-NN graph. Four different close-range image datasets with changes in viewpoint are used to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
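The RANSAC stage of such a pipeline can be illustrated with a toy translation-only motion model (the paper itself uses the epipolar constraint; the model here is simplified for brevity):

```python
import numpy as np

def ransac_translation(pts_a, pts_b, iters=200, tol=2.0, seed=0):
    """Toy RANSAC over putative correspondences pts_a[i] <-> pts_b[i]:
    hypothesise a translation from a minimal sample (one match), score it
    by its consensus set, and re-estimate on the best inliers."""
    rng = np.random.default_rng(seed)
    best_mask = None
    for _ in range(iters):
        i = rng.integers(len(pts_a))           # minimal sample: one match
        t = pts_b[i] - pts_a[i]                # hypothesised translation
        err = np.linalg.norm(pts_b - (pts_a + t), axis=1)
        mask = err < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    # Re-estimate on the consensus set for a refined result.
    t = (pts_b[best_mask] - pts_a[best_mask]).mean(axis=0)
    return best_mask, t

# 30 true matches translated by (5, -3) plus 10 gross mismatches.
rng = np.random.default_rng(2)
a = rng.uniform(0, 100, (40, 2))
b = a + np.array([5.0, -3.0]) + rng.normal(0, 0.1, (40, 2))
b[30:] = rng.uniform(0, 100, (10, 2))
mask, t = ransac_translation(a, b)
```

Replacing the translation model with a fundamental matrix estimated from 7- or 8-point samples gives the epipolar variant; GTM then prunes the residual false matches that happen to lie near an epipolar line.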
Styles APA, Harvard, Vancouver, ISO, etc.
37

Wang, Yan Wei, Si Qing Zhang, Bing Lin, Hong Liang et Yan Ming Pan. « Feature Point Extraction Method of X-Ray Image Based on Scale Invariant ». Applied Mechanics and Materials 274 (janvier 2013) : 667–70. http://dx.doi.org/10.4028/www.scientific.net/amm.274.667.

Texte intégral
Résumé :
A feature point extraction method for X-ray images based on scale invariance is proposed in this paper, targeting industrial X-ray images with low contrast and artifacts. First, the scale transformation of the original image is performed with a Gaussian kernel to build the DoG (Difference of Gaussians) multi-scale pyramid. Then, the location and scale of the key points are refined by fitting a three-dimensional quadratic function. Finally, a simplified SIFT descriptor characterizes the key points. Experimental results show that the algorithm has good stability under translation, rotation and affine transformation; even with 10 percent normalized Gaussian noise, the algorithm can still detect feature points accurately.
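The DoG pyramid construction in the first step is standard and can be sketched as follows (the σ schedule is illustrative, and octave handling is omitted; full SIFT builds several downsampled octaves):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D convolutions with edge padding."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                               # normalise the kernel
    padded = np.pad(img, radius, mode='edge').astype(float)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'),
                              1, padded)       # blur rows
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'),
                              0, out)          # blur columns
    return out

def dog_pyramid(img, sigma0=1.6, k=np.sqrt(2), n=4):
    """Difference-of-Gaussians stack: adjacent blur levels subtracted."""
    blurred = [gaussian_blur(img, sigma0 * k**i) for i in range(n)]
    return [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]

rng = np.random.default_rng(0)
img = rng.random((32, 32))
dogs = dog_pyramid(img)
```

Keypoint candidates are then local extrema across the three dimensions (x, y, scale) of this stack, which is what the quadratic fit in the second step refines.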
Styles APA, Harvard, Vancouver, ISO, etc.
38

Wang, Lei, Chunhong Chang, Zhouqi Liu, Jin Huang, Cong Liu et Chunxiang Liu. « A Medical Image Fusion Method Based on SIFT and Deep Convolutional Neural Network in the SIST Domain ». Journal of Healthcare Engineering 2021 (21 avril 2021) : 1–8. http://dx.doi.org/10.1155/2021/9958017.

Texte intégral
Résumé :
The traditional medical image fusion methods, such as the famous multi-scale decomposition-based methods, usually suffer from the bad sparse representations of the salient features and the low ability of the fusion rules to transfer the captured feature information. In order to deal with this problem, a medical image fusion method based on the scale invariant feature transformation (SIFT) descriptor and the deep convolutional neural network (CNN) in the shift-invariant shearlet transform (SIST) domain is proposed. Firstly, the images to be fused are decomposed into the high-pass and the low-pass coefficients. Then, the fusion of the high-pass components is implemented under the rule based on the pre-trained CNN model, which mainly consists of four steps: feature detection, initial segmentation, consistency verification, and the final fusion; the fusion of the low-pass subbands is based on the matching degree computed by the SIFT descriptor to capture the features of the low frequency components. Finally, the fusion results are obtained by inversion of the SIST. Taking the typical standard deviation, QAB/F, entropy, and mutual information as the objective measurements, the experimental results demonstrate that the detailed information without artifacts and distortions can be well preserved by the proposed method, and better quantitative performance can be also obtained.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Xu, Yun Xi, et Fang Chen. « Real-Time and Robust Stereo Visual Navigation Localization Algorithm Based on ORB ». Applied Mechanics and Materials 241-244 (décembre 2012) : 478–82. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.478.

Texte intégral
Résumé :
The biggest challenge of visual navigation localization is feature extraction and association. Currently, the most widely used method is a simple corner feature with a simple matching strategy based on SAD or NCC. Another option is a scale-invariant feature with a rotation-invariant descriptor, typified by SIFT and SURF. Feature extraction and matching methods based on SIFT or SURF are accurate and robust; however, their computational complexity is too high for real-time navigation localization tasks. This paper presents a new fast, accurate and robust stereo vision navigation localization method based on the newly developed ORB feature and descriptor. First, we present our matching method based on ORB. Then, we obtain matching inliers and initial motion estimation parameters using RANSAC and a three-point motion estimation method. Finally, a nonlinear motion refinement method is used to polish the solution. Experimental results show that our method is robust, accurate and real-time.
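ORB matching reduces to Hamming distance over 256-bit binary descriptors, which is what makes it so much cheaper than SIFT/SURF distance computations. A brute-force NumPy version (illustrative; real systems use optimized matchers) is:

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=64):
    """Brute-force Hamming matching of ORB-style 256-bit binary descriptors
    stored as (N, 32) uint8 arrays. Returns (i, j) index pairs whose best
    match is below `max_dist` differing bits."""
    # XOR every pair of descriptors, then count set bits with an
    # 8-bit popcount lookup table.
    popcount = np.array([bin(v).count('1') for v in range(256)], dtype=np.uint8)
    x = desc_a[:, None, :] ^ desc_b[None, :, :]      # (Na, Nb, 32)
    dist = popcount[x].sum(axis=2)                   # Hamming distances
    nearest = dist.argmin(axis=1)
    keep = dist[np.arange(len(desc_a)), nearest] < max_dist
    return np.column_stack([np.nonzero(keep)[0], nearest[keep]])

# Sanity demo: matching against a shuffled copy recovers the permutation.
rng = np.random.default_rng(4)
desc_a = rng.integers(0, 256, (20, 32), dtype=np.uint8)
perm = rng.permutation(20)
desc_b = desc_a[perm]
matches = hamming_match(desc_a, desc_b)
```

The surviving pairs would then feed the RANSAC / three-point motion estimation stage described above.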
Styles APA, Harvard, Vancouver, ISO, etc.
40

Awad, Ali Ismail, et M. Hassaballah. « Bag-of-Visual-Words for Cattle Identification from Muzzle Print Images ». Applied Sciences 9, no 22 (15 novembre 2019) : 4914. http://dx.doi.org/10.3390/app9224914.

Texte intégral
Résumé :
Cattle, buffalo and cow identification plays an influential role in cattle traceability from birth to slaughter, understanding disease trajectories and large-scale cattle ownership management. Muzzle print images are considered discriminating cattle biometric identifiers for biometric-based cattle identification and traceability. This paper presents an exploration of the performance of the bag-of-visual-words (BoVW) approach in cattle identification using local invariant features extracted from a database of muzzle print images. Two local invariant feature detectors—namely, speeded-up robust features (SURF) and maximally stable extremal regions (MSER)—are used as feature extraction engines in the BoVW model. The performance evaluation criteria include several factors, namely, the identification accuracy, processing time and the number of features. The experimental work measures the performance of the BoVW model under a variable number of input muzzle print images in the training, validation, and testing phases. The identification accuracy values when utilizing the SURF feature detector and descriptor were 75%, 83%, 91%, and 93% for when 30%, 45%, 60%, and 75% of the database was used in the training phase, respectively. However, using MSER as a points-of-interest detector combined with the SURF descriptor achieved accuracies of 52%, 60%, 67%, and 67%, respectively, when applying the same training sizes. The research findings have proven the feasibility of deploying the BoVW paradigm in cattle identification using local invariant features extracted from muzzle print images.
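The BoVW model itself is standard: cluster the local descriptors of the training images into a visual vocabulary, then represent each image by its word-frequency histogram. A minimal sketch with plain k-means (vocabulary size and descriptor dimension are illustrative; the paper uses SURF/MSER descriptors):

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Plain Lloyd's k-means used to build the visual vocabulary."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d2 = ((data[:, None] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):                    # recompute non-empty centres
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def bovw_histogram(descriptors, vocabulary):
    """Quantise an image's local descriptors against the vocabulary and
    return the normalised word-frequency histogram (the image vector)."""
    d2 = ((descriptors[:, None] - vocabulary[None]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(5)
data = rng.random((200, 64))                  # stand-in for SURF descriptors
vocab = kmeans(data, 10)
h = bovw_histogram(data, vocab)
```

A classifier (SVM, k-NN, etc.) is then trained on these fixed-length histograms rather than on the variable-length descriptor sets.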
Styles APA, Harvard, Vancouver, ISO, etc.
41

Zhu, J. T., C. F. Gong, M. X. Zhao, L. Wang et Y. Luo. « IMAGE MOSAIC ALGORITHM BASED ON PCA-ORB FEATURE MATCHING ». ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (7 février 2020) : 83–89. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-83-2020.

Texte intégral
Résumé :
Abstract. In the process of image stitching, the ORB (Oriented FAST and Rotated BRIEF) algorithm lacks scale invariance and has a high mismatch rate. A principal-component ORB (PCA-ORB, Principal Component Analysis Oriented FAST and Rotated BRIEF) image stitching method is proposed. Firstly, the ORB algorithm is used to optimize the feature points to obtain a uniform distribution of feature points. Secondly, principal component analysis (PCA) reduces the dimensionality of the traditional ORB feature descriptor and thereby the complexity of the feature point descriptor data. Thirdly, the KNN (K-Nearest Neighbor) algorithm performs rough matching on the feature points after dimensionality reduction. Then the random sample consensus (RANSAC) algorithm is used to remove mismatched points. Finally, a fade-in/fade-out fusion algorithm is used to fuse the images. In 8 sets of simulation experiments, the image stitching speed improved relative to the PCA-SIFT algorithm. The experimental results show that the proposed algorithm improves image stitching speed while ensuring stitching quality, and can serve fast, real-time and large-scale applications conducive to image fusion.
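The KNN rough-matching step in such pipelines is typically Lowe's ratio test; a sketch under that assumption (the paper does not spell out its exact acceptance rule):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """For each descriptor in image A, take its two nearest neighbours in
    image B and accept the match only if the closest is sufficiently better
    than the second (Lowe's ratio test). Survivors go on to RANSAC."""
    d2 = ((desc_a[:, None] - desc_b[None]) ** 2).sum(-1)  # squared distances
    order = np.argsort(d2, axis=1)[:, :2]                 # two nearest per row
    rows = np.arange(len(desc_a))
    best, second = d2[rows, order[:, 0]], d2[rows, order[:, 1]]
    keep = best < (ratio ** 2) * second                   # ratio on distances
    return np.column_stack([np.nonzero(keep)[0], order[keep, 0]])

# Demo: 15 near-duplicate descriptors plus 30 distractors in image B.
rng = np.random.default_rng(6)
desc_a = rng.random((15, 32))
desc_b = np.vstack([desc_a + rng.normal(0, 0.01, (15, 32)),
                    rng.random((30, 32))])
matches = ratio_test_matches(desc_a, desc_b)
```

Because the test compares squared distances, the threshold is `ratio ** 2`; on PCA-reduced descriptors the distance computation itself is also proportionally cheaper, which is the speed-up the paper targets.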
Styles APA, Harvard, Vancouver, ISO, etc.
42

Jiang, Haili, Panpan Liu, Qingqing Yang, Liang Xu et Shuai Zhang. « A Fast Image Matching Method Based on Improved SURF ». Journal of Physics : Conference Series 2575, no 1 (1 août 2023) : 012002. http://dx.doi.org/10.1088/1742-6596/2575/1/012002.

Texte intégral
Résumé :
Abstract To address the low matching accuracy, slow speed and high system overhead of image matching methods, a rotated binary descriptor construction method based on Speeded-Up Robust Features (SURF) feature point detection is designed, using different Fast Library for Approximate Nearest Neighbors (FLANN) parameters and a filtering mechanism to screen out wrong matches according to the types of feature descriptors constructed by different feature extraction algorithms. This method ensures scale and rotation invariance while simplifying the representation of the feature descriptors, and speeds up calculation in the initial stage of matching by exploiting the binary nature of the descriptors. Finally, the Hamming distance is used as the filtering mechanism to improve the success rate of the final matching. The experimental results show that image matching accuracy is improved by 1.5% and matching time is reduced by 0.116 s, while robustness to noise and rotation is preserved.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Bozorgi, Hamed, et Ali Jafari. « Fast uniform content-based satellite image registration using the scale-invariant feature transform descriptor ». Frontiers of Information Technology & ; Electronic Engineering 18, no 8 (août 2017) : 1108–16. http://dx.doi.org/10.1631/fitee.1500295.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Saleem, Sajid, et Abdul Bais. « Visible Spectrum and Infra-Red Image Matching : A New Method ». Applied Sciences 10, no 3 (9 février 2020) : 1162. http://dx.doi.org/10.3390/app10031162.

Texte intégral
Résumé :
Textural and intensity changes between Visible Spectrum (VS) and Infra-Red (IR) images degrade the performance of feature points. We propose a new method based on a regression technique to overcome this problem. The proposed method consists of three main steps. In the first step, feature points are detected in VS-IR images and Modified Normalized (MN)-Scale Invariant Feature Transform (SIFT) descriptors are computed. In the second step, correct MN-SIFT descriptor matches between VS-IR images are identified with the projection error, and a regression model is trained on the correct MN-SIFT descriptors. In the third step, the regression model is used to process the MN-SIFT descriptors of test VS images, removing misalignment with the MN-SIFT descriptors of test IR images and overcoming textural and intensity changes. Experiments are performed on two different VS-IR image datasets. The experimental results show that the proposed method performs well, demonstrating on average 14% and 15% better precision and matching scores compared to the recently proposed Histograms of Directional Maps (HoDM) descriptor.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Yu, Guorong, et Shuangming Zhao. « A New Feature Descriptor for Multimodal Image Registration Using Phase Congruency ». Sensors 20, no 18 (8 septembre 2020) : 5105. http://dx.doi.org/10.3390/s20185105.

Texte intégral
Résumé :
Images captured by different sensors with different spectral bands exhibit non-linear intensity changes between image pairs. Classic feature descriptors cannot handle this problem and are prone to yielding unsatisfactory results. Inspired by the illumination- and contrast-invariant properties of phase congruency, here we propose a new descriptor to tackle this problem. The proposed descriptor generation mainly involves three steps. (1) Images are convolved with a bank of log-Gabor filters with different scales and orientations. (2) A window of fixed size is selected and divided into several blocks for each keypoint, and an oriented magnitude histogram and a histogram of the orientation of the minimum moment of phase congruency are calculated in each block. (3) These two histograms are normalized respectively and concatenated to form the proposed descriptor. Performance evaluation experiments on three datasets were carried out to validate the superiority of the proposed method. Experimental results indicated that the proposed descriptor outperformed most of the classic and state-of-the-art descriptors in terms of precision and recall within an acceptable computational time.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Hwang, Sung-Wook, Taekyeong Lee, Hyunbin Kim, Hyunwoo Chung, Jong Gyu Choi, and Hwanmyeong Yeo. "Classification of wood knots using artificial neural networks with texture and local feature-based image descriptors." Holzforschung 76, no. 1 (1 January 2021): 1–13. http://dx.doi.org/10.1515/hf-2021-0051.

Full text
Abstract:
This paper describes feature-based techniques for wood knot classification. For automated classification of macroscopic wood knot images, models were established using artificial neural networks with texture and local feature descriptors, and the performances of the feature extraction algorithms were compared. Classification models trained with texture descriptors, the gray-level co-occurrence matrix and the local binary pattern, achieved better performance than those trained with local feature descriptors, the scale-invariant feature transform and the dense scale-invariant feature transform. Hence, it was confirmed that wood knot classification is better suited to texture-based classification than to a morphology-based approach. The gray-level co-occurrence matrix produced the highest F1 score despite representing images with relatively low-dimensional feature vectors. The scale-invariant feature transform algorithm could not detect a sufficient number of features in the knot images; hence, the histogram of oriented gradients and dense scale-invariant feature transform algorithms, which describe the entire image, were better suited to wood knot classification. The artificial neural network model provided better classification performance than the support vector machine and k-nearest neighbor models, which suggests the suitability of a nonlinear classification model for wood knot classification.
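As an illustrative sketch (not the study's code), a gray-level co-occurrence matrix and two common texture statistics can be computed with plain numpy; the offset, quantization level, and choice of statistics here are assumptions.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Minimal gray-level co-occurrence matrix for one offset (dx, dy).

    image: 2-D integer array with values in [0, levels).
    """
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    m /= m.sum()  # normalize to a joint probability
    return m

def contrast(m):
    # Haralick-style contrast: expected squared gray-level difference
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

def energy(m):
    # Angular second moment: sum of squared co-occurrence probabilities
    return float((m ** 2).sum())

# Quantized toy "texture": alternating vertical stripes of levels 0 and 7.
img = np.tile(np.array([0, 7], dtype=int), (8, 4))  # shape (8, 8)
m = glcm(img)
print(contrast(m), energy(m))  # 49.0 0.510204...
```

Such scalar statistics, computed over several offsets and orientations, form the low-dimensional feature vectors the abstract credits with the highest F1 score.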
47

Lu, Ying, Hui Qin Wang, Fei Xu, and Wei Guang Liu. "The Feature Extraction and Matching Algorithm Based on the Fire Video Image Orientation." Applied Mechanics and Materials 380-384 (August 2013): 3986–89. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3986.

Full text
Abstract:
Because the SIFT (scale-invariant feature transform) algorithm cannot accurately locate flame shape features and is computationally intensive, this article proposes a stereo fire-flame video matching method that combines Harris corners with the SIFT algorithm. First, the algorithm extracts image feature points using the Harris operator in Gaussian scale space and assigns a main direction to each feature point; it then computes a 32-dimensional descriptor vector for each feature point and uses the Euclidean distance to match the two images. Image-matching experiments demonstrate that the new algorithm improves the significance of the shape of the extracted feature points and maintains a match rate of 96%, while reducing the time complexity by 27.8%. The algorithm therefore has practical value.
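The Harris detection step in the combined method above can be sketched in plain numpy (an illustration, not the article's implementation; the 3x3 box window and k value are assumptions):

```python
import numpy as np

def harris_response(image, k=0.04):
    """Harris corner response map. Corners give large positive
    responses, edges negative responses, flat regions near zero."""
    Iy, Ix = np.gradient(image.astype(float))

    def box3(a):  # 3x3 box sum as a simple smoothing window
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    # Smoothed structure-tensor entries
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    # det(M) - k * trace(M)^2
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Bright square on a dark background: its corners score highest.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R[5, 5] > 0, R[10, 5] < 0)  # corner positive, edge negative
```

In the full method, the points with locally maximal response would each receive a main direction and a 32-dimensional descriptor before Euclidean-distance matching.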
48

Li, Xin, Bin Feng, Sai Qiao, Haiyan Wei, and Changli Feng. "SIFT-GVF-based lung edge correction method for correcting the lung region in CT images." PLOS ONE 18, no. 2 (28 February 2023): e0282107. http://dx.doi.org/10.1371/journal.pone.0282107.

Full text
Abstract:
Juxtapleural nodules are excluded from the segmented lung region by Hounsfield-unit threshold-based segmentation methods. To re-include those regions in the lung region, this study presents a new approach using the scale-invariant feature transform and gradient vector flow models. First, the scale-invariant feature transform method is used to detect all scale-invariant points in the binary lung region, and the boundary points in the neighborhood of each scale-invariant point are collected to form supportive boundary lines. Second, a Fourier descriptor is used to obtain a characteristic representation of each supportive boundary line, and the spectrum energy identifies the supportive boundaries that must be corrected. Third, the gradient vector flow-snake method corrects the recognized supportive borders with a smooth profile curve, giving an ideal correction edge in those regions. Finally, the performance of the proposed method is evaluated through experiments on multiple authentic computed tomography images. The accurate results and robustness show that the proposed method corrects the juxtapleural region precisely.
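The Fourier-descriptor step above can be sketched with a plain FFT (an illustration under simplifying assumptions: a closed, uniformly sampled boundary, and only translation/scale normalization):

```python
import numpy as np

def fourier_descriptor(boundary, n_coeffs=8):
    """Fourier descriptor of a sampled boundary curve.

    boundary: (n, 2) array of (x, y) points, treated as complex numbers.
    Zeroing the DC term removes translation; dividing by the first
    retained magnitude removes scale.
    """
    z = boundary[:, 0] + 1j * boundary[:, 1]
    F = np.fft.fft(z)
    F[0] = 0                              # remove translation
    mags = np.abs(F[1:n_coeffs + 1])
    return mags / mags[0] if mags[0] != 0 else mags

# A circle sampled at 64 points: all energy sits in one coefficient.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
d = fourier_descriptor(circle)
print(d[0], d[1:].max())
```

A smooth boundary concentrates its spectrum energy in low-order coefficients, which is the property the method exploits to flag supportive boundaries that need correction.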
49

Tian, Ying, and De Bin Zhang. "Ear Recognition Based on Point Feature." Applied Mechanics and Materials 380-384 (August 2013): 3840–45. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3840.

Full text
Abstract:
To improve the recognition rate for human ears, this paper proposes an ear recognition method based on point features of the image. First, force field transformation is applied to the human ear image twice, extracting the structural feature points and the contour feature points of the ear, which together compose the feature point set. The feature points are then described by the scale-invariant feature transform descriptor. Finally, a nearest neighbor classifier is employed for ear recognition. Feature points extracted from the ear image using the force field transformation are stable, reliable, and discriminative, and they are insensitive to variations in image resolution. The constructed descriptor resolves the lower recognition rates caused by illumination change, scaling, rotation, and the small alterations caused by pose changes. The experimental results show that the proposed algorithm not only effectively improves the ear recognition rate but also has good robustness.
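The final classification step above can be sketched as plain nearest-neighbor matching over descriptor vectors (names and toy data are illustrative, not from the paper):

```python
import numpy as np

def nearest_neighbor_label(query_desc, gallery_descs, gallery_labels):
    """Return the label of the gallery descriptor closest to the query,
    using Euclidean distance (the classifier in the pipeline above)."""
    d = np.linalg.norm(gallery_descs - query_desc, axis=1)
    return gallery_labels[int(np.argmin(d))]

# Toy gallery: two descriptors of subject "ear_A", one of "ear_B".
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = ["ear_A", "ear_A", "ear_B"]
print(nearest_neighbor_label(np.array([4.6, 5.2]), gallery, labels))  # ear_B
```

In practice each ear yields many SIFT-style descriptors, so the query's descriptors would each be matched this way and the per-descriptor votes aggregated into a single identity decision.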
50

Feng, Qinping, Shuping Tao, Chunyu Liu, Hongsong Qu, and Wei Xu. "IFRAD: A Fast Feature Descriptor for Remote Sensing Images." Remote Sensing 13, no. 18 (20 September 2021): 3774. http://dx.doi.org/10.3390/rs13183774.

Full text
Abstract:
Feature description is a necessary process for implementing feature-based remote sensing applications. Due to the limited resources on satellite platforms and the considerable amount of image data, feature description, which precedes feature matching, has to be fast and reliable. Currently, the state-of-the-art feature description methods are time-consuming, as they quantitatively describe each detected feature from its surrounding gradients or pixels. Here, we propose a novel feature descriptor called Inter-Feature Relative Azimuth and Distance (IFRAD), which describes a feature by its relations to the other features in an image. IFRAD is applied after FAST-like features are detected: it first selects stable features according to defined criteria, then calculates their pairwise relationships, such as relative distances and azimuths, and encodes those relationships according to a set of rules that makes them distinguishable while remaining affine-invariant to some extent. Finally, a dedicated feature-similarity evaluator is designed to match features between two images. Compared with other state-of-the-art algorithms, the proposed method significantly improves computational efficiency at the expense of a reasonable reduction in scale invariance.
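The quantities the IFRAD descriptor is built from, pairwise relative distances and azimuths between feature points, can be sketched in a few numpy lines (an illustration; the selection criteria, encoding rules, and similarity evaluator of the actual method are not reproduced here):

```python
import numpy as np

def relative_distances_azimuths(points):
    """Pairwise relative distances and azimuths between feature points.

    points: (n, 2) array of (x, y) coordinates.
    Returns two (n, n) arrays: dist[i, j] is the distance from point i
    to point j, and azim[i, j] the azimuth of j as seen from i, in
    radians in (-pi, pi].
    """
    diff = points[None, :, :] - points[:, None, :]  # (n, n, 2) offsets
    dist = np.hypot(diff[..., 0], diff[..., 1])
    azim = np.arctan2(diff[..., 1], diff[..., 0])
    return dist, azim

pts = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 0.0]])
dist, azim = relative_distances_azimuths(pts)
print(dist[0, 1], azim[0, 2])  # 5.0 0.0
```

Because these quantities depend only on the constellation of features rather than on local pixel neighborhoods, computing them is cheap, which is the source of the efficiency gain the abstract reports.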