Journal articles on the topic "Scale Invariant Feature Descriptor"

To see other types of publications on this topic, follow the link: Scale Invariant Feature Descriptor.


Consult the top 50 journal articles for your research on the topic "Scale Invariant Feature Descriptor".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if the corresponding data are present in the metadata.

Browse journal articles from a wide range of disciplines and format your bibliography correctly.

1

Sahan, Ali Sahan, Nisreen Jabr, Ahmed Bahaaulddin, and Ali Al-Itb. "Human identification using finger knuckle features." International Journal of Advances in Soft Computing and its Applications 14, no. 1 (March 28, 2022): 88–101. http://dx.doi.org/10.15849/ijasca.220328.07.

Abstract:
Many studies indicate that the finger knuckle comprises unique features; it can therefore be utilized in a biometric system to distinguish between people. In this paper, a combined global and local features technique is proposed based on two descriptors, namely Chebyshev Fourier moments (CHFMs) and Scale Invariant Feature Transform (SIFT) descriptors. The CHFMs descriptor is used to obtain the global features, while the SIFT descriptor is utilized to extract local features. Each of these descriptors has its own advantages; therefore, combining them produces distinct features. Many experiments have been carried out using the IIT-Delhi knuckle database to assess the accuracy of the proposed approach. The analysis of these extensive experiments shows that the suggested technique achieves a 98% accuracy rate. Furthermore, robustness against noise has been evaluated, and the results lead to the conclusion that the proposed technique is robust to noise variation. Keywords: finger knuckle, biometric system, Chebyshev Fourier moments, scale invariant feature transform, IIT-Delhi knuckle database.
2

Xu, Xinggui, Ping Yang, Bing Ran, Hao Xian, and Yong Liu. "Long-distance deformation object recognition by integrating contour structure and scale-invariant heat kernel signature." Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 3241–57. http://dx.doi.org/10.3233/jifs-191649.

Abstract:
A tough challenge in long-distance object recognition is the construction of contour shape features that are invariant to deformation. In this work, an effective contour shape descriptor integrating critical-point structure and the Scale-invariant Heat Kernel Signature (SI-HKS) is proposed for long-distance object recognition. We first propose a general feature fusion model. Then, we capture the object contour structure feature with Critical-points Inner-distance Shape Context (CP-IDSC). Meanwhile, we incorporate the SI-HKS to capture the local deformation-invariant properties of 2D shapes. The fusion of these two descriptors is compacted by mapping into a low-dimensional subspace using bags-of-features, allowing for efficient Bayesian classifier recognition. Extensive experiments on synthetic turbulence-degraded shapes and real-life infrared images show that the proposed method outperforms the compared approaches in terms of recognition precision and robustness.
3

Diaz-Escobar, Julia, Vitaly Kober, and Jose A. Gonzalez-Fraga. "LUIFT: LUminance Invariant Feature Transform." Mathematical Problems in Engineering 2018 (October 28, 2018): 1–17. http://dx.doi.org/10.1155/2018/3758102.

Abstract:
An illumination-invariant method for computing local feature points and descriptors, referred to as LUminance Invariant Feature Transform (LUIFT), is proposed. The method helps extract the most significant local features in images degraded by nonuniform illumination, geometric distortions, and heavy scene noise. The proposed method utilizes image phase information rather than intensity variations, as most state-of-the-art descriptors do; it is thus robust to nonuniform illumination and noise degradation. In this work, we first use the monogenic scale-space framework to compute the local phase, orientation, energy, and phase congruency from the image at different scales. Then, a modified Harris corner detector is applied to compute the feature points of the image using the monogenic signal components. The final descriptor is created from histograms of oriented gradients of phase congruency. Computer simulation results show that the proposed method yields superior feature detection and matching performance under illumination change, noise degradation, and slight geometric distortions compared with state-of-the-art descriptors.
4

El Chakik, Abdallah, Abdul Rahman El Sayed, Hassan Alabboud, and Amer Bakkach. "An invariant descriptor map for 3D objects matching." International Journal of Engineering & Technology 9, no. 1 (January 23, 2020): 59. http://dx.doi.org/10.14419/ijet.v9i1.29918.

Abstract:
Meshes and point clouds are traditionally used to represent and match 3D shapes. The matching problem can be formulated as finding the best one-to-one correspondence between featured regions of two shapes. This paper presents an efficient and robust 3D matching method using vertex descriptor detection to define feature regions and an optimization approach for region matching. To do so, we compute an invariant shape descriptor map based on 3D surface patches calculated using Zernike coefficients. Then, we propose a multi-scale descriptor map to improve the measured descriptor map quality and to deal with noise. In addition, we introduce a linear algorithm for feature region segmentation according to the descriptor map. The matching problem is then modelled as a sub-graph isomorphism problem, a combinatorial optimization problem of matching feature regions while preserving the geometry. Finally, we show the robustness and stability of our method through many experimental results with respect to scaling, noise, rotation, and translation.
5

Ajayi, O. G. "PERFORMANCE ANALYSIS OF SELECTED FEATURE DESCRIPTORS USED FOR AUTOMATIC IMAGE REGISTRATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 21, 2020): 559–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-559-2020.

Abstract:
Automatic detection and extraction of corresponding features is crucial in the development of an automatic image registration algorithm. Different feature descriptors have been developed and implemented in image registration and other disciplines. These descriptors affect the speed of feature extraction and the number of extracted conjugate features, which in turn affect the processing speed and overall accuracy of the registration scheme. This article reviews the performance of the most widely implemented feature descriptors in an automatic image registration scheme. Ten (10) descriptors were selected and analysed under seven (7) conditions, viz. invariance to rotation, scale and zoom, robustness, repeatability, localization, and efficiency, using UAV-acquired images. The analysis shows that although four (4) descriptors performed better than the other six (6), no single feature descriptor can be affirmed to be the best, as different descriptors perform differently under different conditions. The Modified Harris and Stephen Corner Detector (MHCD) proved to be invariant to scale and zoom and excellent in robustness, repeatability, localization and efficiency, but variant to rotation. Also, the Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Maximally Stable Extremal Region (MSER) algorithms proved to be invariant to scale, zoom and rotation, and very good in terms of repeatability, localization and efficiency, though MSER proved not to be as robust as SIFT and SURF. The implication of these findings is that the choice of feature descriptor must be informed by the imaging conditions of the image registration task.
6

Tikhomirova, T. A., G. T. Fedorenko, K. M. Nazarenko, and E. S. Nazarenko. "LEFT: LOCAL EDGE FEATURES TRANSFORM." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 189 (March 2020): 11–18. http://dx.doi.org/10.14489/vkit.2020.03.pp.011-018.

Abstract:
To detect point correspondences between images or 3D scenes, local texture descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-Up Robust Features), BRIEF (Binary Robust Independent Elementary Features), and others are usually used. Formally they provide invariance to image rotation and scale, but these properties are achieved only approximately due to the discrete number of evaluated orientations and scales stored in the descriptor. Feature points preferred by such descriptors usually do not belong to actual object boundaries in 3D scenes and so are hard to use in epipolar relationships. At the same time, linking feature points to large-scale lines and edges is preferable for SLAM (Simultaneous Localization And Mapping) tasks, because their appearance is the most resistant to daily, seasonal, and weather variations. In this paper, an original feature point descriptor for edge images, LEFT (Local Edge Features Transform), is proposed. LEFT accumulates the directions and contrasts of alternative straight segments tangent to lines and edges in the vicinity of feature points. Due to this structure, the mutual orientation of LEFT descriptors is evaluated and taken into account directly at the stage of their comparison. LEFT descriptors adapt to the shape of contours in the vicinity of feature points, so they can be used to analyze local and global geometric distortions of various natures. The article presents the results of comparative testing of LEFT against common texture-based descriptors and considers alternative ways of representing them in a computer vision system.
7

Zhang, Wanyuan, Tian Zhou, Chao Xu, and Meiqin Liu. "A SIFT-Like Feature Detector and Descriptor for Multibeam Sonar Imaging." Journal of Sensors 2021 (July 15, 2021): 1–14. http://dx.doi.org/10.1155/2021/8845814.

Abstract:
Multibeam imaging sonar has become an increasingly important tool in the field of underwater object detection and description. In recent years, the scale-invariant feature transform (SIFT) algorithm has been widely adopted to obtain stable features of objects in sonar images but does not perform well on multibeam sonar images due to its sensitivity to speckle noise. In this paper, we introduce MBS-SIFT, a SIFT-like feature detector and descriptor for multibeam sonar images. This algorithm contains a feature detector followed by a local feature descriptor. A new gradient definition robust to speckle noise is presented to detect extrema in scale space, and then, interest points are filtered and located. It is also used to assign orientation and generate descriptors of interest points. Simulations and experiments demonstrate that the proposed method can capture features of underwater objects more accurately than existing approaches.
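The key ingredient in this abstract is a gradient definition robust to speckle noise. As an illustration of what such a gradient can look like, here is a minimal NumPy/SciPy sketch of a ratio-of-averages (ROA) gradient, a standard speckle-robust operator from the SAR/sonar literature; it is not necessarily the exact definition used by MBS-SIFT.

```python
# Ratio-of-averages gradient: compares mean intensities of opposite
# half-windows instead of differencing pixels, which suppresses the
# multiplicative speckle noise typical of sonar and SAR images.
import numpy as np
from scipy.ndimage import uniform_filter

def roa_gradient(img, win=7):
    img = img.astype(np.float64) + 1e-6          # avoid division by zero
    half = win // 2
    mean = uniform_filter(img, size=win)
    # means of the half-windows above/below and left/right of each pixel
    up    = np.roll(mean,  half, axis=0)
    down  = np.roll(mean, -half, axis=0)
    left  = np.roll(mean,  half, axis=1)
    right = np.roll(mean, -half, axis=1)
    # log-ratios behave like derivatives under multiplicative noise
    gy = np.log(down / up)
    gx = np.log(right / left)
    return np.hypot(gx, gy), np.arctan2(gy, gx)  # magnitude, orientation
```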
8

Gao, Junchai, and Zhen Sun. "An Improved ASIFT Image Feature Matching Algorithm Based on POS Information." Sensors 22, no. 20 (October 12, 2022): 7749. http://dx.doi.org/10.3390/s22207749.

Abstract:
The affine scale-invariant feature transform (ASIFT) algorithm is a feature extraction algorithm with affine and scale invariance, which is suitable for image feature matching using unmanned aerial vehicles (UAVs). However, the matching process suffers from problems such as low efficiency and mismatching. In order to improve matching efficiency, the proposed algorithm first simulates image distortion based on the position and orientation system (POS) information from real-time UAV measurements to reduce the number of simulated images. Then, the scale-invariant feature transform (SIFT) algorithm is used for feature point detection, and the extracted feature points are combined with the binary robust invariant scalable keypoints (BRISK) descriptor to generate a binary feature descriptor, which is matched using the Hamming distance. Finally, in order to improve the matching accuracy of the UAV images, a false-match elimination algorithm based on random sample consensus (RANSAC) is proposed. Through four groups of experiments, the proposed algorithm is compared with SIFT and ASIFT. The results show that the algorithm optimizes the matching effect and improves the matching speed.
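The detection/description/matching chain in this abstract maps directly onto standard OpenCV primitives. Below is a hedged sketch (SIFT keypoints, BRISK binary descriptors, Hamming matching, RANSAC filtering); the POS-guided view-simulation step is omitted and the file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("uav_frame_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("uav_frame_2.png", cv2.IMREAD_GRAYSCALE)

sift, brisk = cv2.SIFT_create(), cv2.BRISK_create()
kp1 = sift.detect(img1, None)                 # SIFT locates the keypoints
kp2 = sift.detect(img2, None)
kp1, des1 = brisk.compute(img1, kp1)          # BRISK builds binary descriptors
kp2, des2 = brisk.compute(img2, kp2)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)           # Hamming distance on bit strings

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # reject false matches
inliers = [m for m, ok in zip(matches, mask.ravel()) if ok]
print(f"{len(inliers)} / {len(matches)} matches survive RANSAC")
```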
9

Barajas-García, Carolina, Selene Solorza-Calderón, and Everardo Gutiérrez-López. "Scale, translation and rotation invariant Wavelet Local Feature Descriptor." Applied Mathematics and Computation 363 (December 2019): 124594. http://dx.doi.org/10.1016/j.amc.2019.124594.

10

FERRAZ, CAROLINA TOLEDO, OSMANDO PEREIRA, MARCOS VERDINI ROSA, and ADILSON GONZAGA. "OBJECT RECOGNITION BASED ON BAG OF FEATURES AND A NEW LOCAL PATTERN DESCRIPTOR." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 08 (December 2014): 1455010. http://dx.doi.org/10.1142/s0218001414550106.

Abstract:
Bag of Features (BoF) has gained a lot of interest in computer vision. A visual codebook based on robust appearance descriptors extracted from local image patches is an effective means of texture analysis and scene classification. This paper presents a new method for local feature description based on gray-level difference mapping called Mean Local Mapped Pattern (M-LMP). The proposed descriptor is robust to image scaling, rotation, illumination, and partial viewpoint changes. The training set is composed of rotated and scaled images with changes in illumination and viewpoint, and the test set is composed of rotated and scaled images. The proposed descriptor captures small differences between image pixels more effectively than similar descriptors do. In our experiments, we implemented an object recognition system based on M-LMP and compared our results to the Center-Symmetric Local Binary Pattern (CS-LBP) and the Scale-Invariant Feature Transform (SIFT). The object classification results were analyzed in a BoF methodology and show that our descriptor performs better than these two previously published methods.
11

Liu, Yazhou, Pongsak Lasang, Mel Siegel, and Quansen Sun. "Multi-sparse descriptor: A scale invariant feature for pedestrian detection." Neurocomputing 184 (April 2016): 55–65. http://dx.doi.org/10.1016/j.neucom.2015.07.143.

12

Qu, Xiujie, Fei Zhao, Mengzhe Zhou, and Haili Huo. "A Novel Fast and Robust Binary Affine Invariant Descriptor for Image Matching." Mathematical Problems in Engineering 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/129230.

Abstract:
Current binary descriptors suffer from high computational complexity, lack of affine invariance, and high false matching rates under viewpoint changes. To address this, a new binary affine-invariant descriptor, called BAND, is proposed. Different from other descriptors, BAND has an irregular pattern, which is based on a local affine-invariant region surrounding a feature point, and it has five orientations, which are obtained effectively by LBP. Ultimately, a 256-bit binary string is computed by a simple random sampling pattern. Experimental results demonstrate that BAND matches well under rotation, image zoom, noise, lighting changes, and small-scale perspective transformation. It has better matching performance than current mainstream descriptors, while costing less time.
13

Wang, Sheng Ke, Lili Liu, and Xiaowei Xu. "Vehicle Logo Recognition Based on Local Feature Descriptor." Applied Mechanics and Materials 263-266 (December 2012): 2418–21. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2418.

Abstract:
In this paper, we present a comparison of the scale-invariant feature transform (SIFT)-based feature-matching scheme and the speeded up robust features (SURF)-based feature-matching scheme in the field of vehicle logo recognition. We capture a set of logo images which vary in illumination, blur, scale, and rotation. Training sets for six kinds of vehicle logo are formed using 25 images on average, and the remaining images form the testing set. The logo recognition system that we programmed achieves a high recognition rate on query images of the same kind by adjusting different parameters.
14

Thirthe Gowda, M. T., and J. Chandrika. "Optimized Scale-Invariant Hog Descriptors for Tobacco Plant Detection." WSEAS TRANSACTIONS ON ENVIRONMENT AND DEVELOPMENT 17 (July 23, 2021): 787–94. http://dx.doi.org/10.37394/232015.2021.17.74.

Abstract:
The histogram of oriented gradients (HOG) descriptor is employed in this research work to build a scale-invariant technique for identifying plants in surveillance videos. In some scenarios, discrepancies in HOG descriptors under scale and illumination variation are a major hindrance. This research work introduces a unique SIO-HOG descriptor that is approximately scale-invariant. With the help of footage captured for the tobacco plant identification process, the system can integrate adaptive bin selection as well as sample resizing. Further, this research work explores the impact of a PCA transform based on feature selection on overall recognition performance, and finds that a finite scale range and adaptive orientation binning in non-overlapping descriptors are essential for a high detection rate. Computing the HOG feature vector over a complete search window is computationally intensive; however, suitable classification frameworks can be developed by maintaining a precise range of attributes within a finite Euclidean distance. Experimental results prove that the proposed approach for distinguishing tobacco from other weeds results in an improved detection rate. Finally, the robustness of the complete plant detection system was evaluated on a video sequence with different non-linearities that are quite common in a real-world environment, and its performance metrics are reported.
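For reference, a minimal scikit-image sketch of extracting a HOG vector and classifying it by Euclidean distance, as the abstract suggests; the parameters shown are conventional defaults, not the optimized values from the paper, and the file names and threshold are placeholders.

```python
import numpy as np
from skimage.feature import hog
from skimage import io, transform

def hog_vector(path, size=(128, 64)):
    # sample resizing to a fixed scale before descriptor extraction
    img = transform.resize(io.imread(path, as_gray=True), size)
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# nearest-prototype decision with a finite Euclidean-distance threshold
tobacco_proto = hog_vector("tobacco_example.png")   # placeholder file names
query = hog_vector("field_patch.png")
dist = np.linalg.norm(query - tobacco_proto)
print("tobacco" if dist < 2.5 else "other weed")    # threshold is illustrative
```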
15

Sicong, Yue, Wang Qing, and Zhao Rongchun. "Robust Wide Baseline Point Matching Based on Scale Invariant Feature Descriptor." Chinese Journal of Aeronautics 22, no. 1 (February 2009): 70–74. http://dx.doi.org/10.1016/s1000-9361(08)60070-9.

16

Cui, Song, Miaozhong Xu, Ailong Ma, and Yanfei Zhong. "Modality-Free Feature Detector and Descriptor for Multimodal Remote Sensing Image Registration." Remote Sensing 12, no. 18 (September 10, 2020): 2937. http://dx.doi.org/10.3390/rs12182937.

Abstract:
The nonlinear radiation distortions (NRD) among multimodal remote sensing images bring enormous challenges to image registration. Traditional feature-based registration methods commonly use image intensity or gradient information to detect and describe features, which are sensitive to NRD. However, the nonlinear mapping of the corresponding features of the multimodal images often results in failure of the feature matching, as well as of the image registration. In this paper, a modality-free multimodal remote sensing image registration method (SRIFT) is proposed for the registration of multimodal remote sensing images, which is invariant to scale, radiation, and rotation. In SRIFT, the nonlinear diffusion scale (NDS) space is first established to construct a multi-scale space. A local orientation and scale phase congruency (LOSPC) algorithm is then used to map the features of images with NRD into one-to-one correspondence and obtain sufficiently stable key points. In the feature description stage, a rotation-invariant coordinate (RIC) system is adopted to build the descriptor, without requiring estimation of a main direction. The experiments undertaken in this study included one set of simulated-data experiments and nine groups of experiments with different types of real multimodal remote sensing images with rotation and scale differences (including synthetic aperture radar (SAR)/optical, digital surface model (DSM)/optical, light detection and ranging (LiDAR) intensity/optical, near-infrared (NIR)/optical, short-wave infrared (SWIR)/optical, classification/optical, and map/optical image pairs), to test the proposed algorithm from both quantitative and qualitative aspects. The experimental results showed that the proposed method is strongly robust to NRD, being invariant to scale, radiation, and rotation, and the achieved registration precision was better than that of the state-of-the-art methods.
17

Dong, Chuan-Zhi, and F. Necati Catbas. "A non-target structural displacement measurement method using advanced feature matching strategy." Advances in Structural Engineering 22, no. 16 (June 11, 2019): 3461–72. http://dx.doi.org/10.1177/1369433219856171.

Abstract:
Most existing vision-based displacement measurement methods require manual speckles or targets to improve measurement performance in non-stationary imagery environments. To minimize the use of manual speckles and targets, feature points regarded as virtual markers can be utilized for non-target measurement. In this study, an advanced feature matching strategy is presented which replaces handcrafted descriptors with learned descriptors from the Visual Geometry Group (VGG) of the University of Oxford to achieve better performance. The feasibility and performance of the proposed method are verified by comparative studies with a laboratory experiment on a two-span bridge model and a field application on a railway bridge. The proposed approach of integrated use of the Scale Invariant Feature Transform and VGG descriptors improved the measurement accuracy by about 24% when compared with the commonly used feature matching-based displacement measurement method using the Scale Invariant Feature Transform feature and descriptor.
18

Yu, Meng, Dong Zhang, Dah-Jye Lee, and Alok Desai. "SR-SYBA: A Scale and Rotation Invariant Synthetic Basis Feature Descriptor with Low Memory Usage." Electronics 9, no. 5 (May 15, 2020): 810. http://dx.doi.org/10.3390/electronics9050810.

Abstract:
Feature description has an important role in image matching and is widely used for a variety of computer vision applications. As an efficient synthetic basis feature descriptor, SYnthetic BAsis (SYBA) requires low computational complexity and provides accurate matching results. However, the number of matched feature points generated by SYBA suffers under large image scaling and rotation variations. In this paper, we improve SYBA's scale and rotation invariance by adding an efficient pre-processing operation. The proposed algorithm, SR-SYBA, represents the scale of the feature region by the location of the maximum gradient response along the radial direction in a log-polar coordinate system. Based on this scale representation, it normalizes all feature regions to the same reference scale to provide scale invariance. The orientation of the feature region is represented as the orientation of the vector from the center of the feature region to its intensity centroid. Based on this orientation representation, all feature regions are rotated to the same reference orientation to provide rotation invariance. The original SYBA descriptor is then applied to the scale- and orientation-normalized feature regions for description and matching. Experimental results show that SR-SYBA greatly improves SYBA for image matching applications with scaling and rotation variations. SR-SYBA obtains comparable or better matching rates compared to mainstream algorithms while still maintaining its advantages of much lower storage and simpler computations. SR-SYBA is applied to a vision-based measurement application to demonstrate its performance for image matching.
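The orientation normalization described above (rotate each feature region so the vector from its center to its intensity centroid points in a reference direction) can be sketched in a few lines of NumPy/SciPy. This is an illustration of the stated idea, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def intensity_centroid_angle(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = patch.sum()
    cx = (xs * patch).sum() / m00          # first moments: m10 / m00
    cy = (ys * patch).sum() / m00          # and m01 / m00
    return np.degrees(np.arctan2(cy - (h - 1) / 2, cx - (w - 1) / 2))

def normalize_rotation(patch):
    patch = patch.astype(np.float64)
    angle = intensity_centroid_angle(patch)
    # undo the measured orientation; the sign convention depends on the
    # row/column axis order, so flip it if your frame differs
    return rotate(patch, -angle, reshape=False, mode="nearest")
```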
19

KITAGAWA, Masamichi, and Ikuko SHIMIZU. "Memory Saving Feature Descriptor Using Scale and Rotation Invariant Patches around the Feature Points." IEICE Transactions on Information and Systems E102.D, no. 5 (May 1, 2019): 1106–10. http://dx.doi.org/10.1587/transinf.2018edl8176.

20

GENG, LICHUAN, SONGZHI SU, DONGLIN CAO, and SHAOZI LI. "PERSPECTIVE-INVARIANT IMAGE MATCHING FRAMEWORK WITH BINARY FEATURE DESCRIPTOR AND APSO." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 08 (December 2014): 1455011. http://dx.doi.org/10.1142/s0218001414550118.

Abstract:
A novel perspective-invariant image matching framework, denoted Perspective-Invariant Binary Robust Independent Elementary Features (PBRIEF), is proposed in this paper. First, we use a homographic transformation to simulate the distortion between two corresponding patches around the feature points. Then, binary descriptors are constructed by comparing the intensity of sample points surrounding the feature location. We transform the locations of the sample points with simulated homographic matrices; this operation ensures that the intensities we compare come from realistically corresponding pixels between the two image patches. Since the exact perspective transform matrix is unknown, an Adaptive Particle Swarm Optimization (APSO) based iterative procedure is proposed to estimate the real transformation angles. Experimental results obtained on five different datasets show that PBRIEF significantly outperforms existing methods on images with large viewpoint differences. Moreover, the efficiency of our framework is also improved compared with the Affine-Scale Invariant Feature Transform (ASIFT).
21

Zhang, Hua-Zhen, Dong-Won Kim, Tae-Koo Kang, and Myo-Taeg Lim. "MIFT: A Moment-Based Local Feature Extraction Algorithm." Applied Sciences 9, no. 7 (April 11, 2019): 1503. http://dx.doi.org/10.3390/app9071503.

Abstract:
We propose a local feature descriptor based on moments. Although conventional scale invariant feature transform (SIFT)-based algorithms generally use the difference of Gaussians (DoG) for feature extraction, they remain sensitive to more complicated deformations. To solve this problem, we propose MIFT, an invariant feature transform algorithm based on the modified discrete Gaussian-Hermite moment (MDGHM). Taking advantage of MDGHM's high performance in representing image information, MIFT uses an MDGHM-based pyramid for feature extraction, which can extract more distinctive extrema than the DoG, and MDGHM-based magnitude and orientation for feature description. We compared the performance of the proposed MIFT method with current best-practice methods for six image deformation types, and confirmed that MIFT matching accuracy is superior to that of other SIFT-based methods.
22

Anbarasu, B., and G. Anitha. "Indoor Scene recognition for Micro Aerial Vehicles Navigation using Enhanced SIFT-ScSPM Descriptors." Journal of Navigation 73, no. 1 (July 5, 2019): 37–55. http://dx.doi.org/10.1017/s0373463319000420.

Abstract:
In this paper, a new scene recognition visual descriptor called Enhanced Scale Invariant Feature Transform-based Sparse coding Spatial Pyramid Matching (Enhanced SIFT-ScSPM) is proposed by combining a Bag of Words (BOW)-based visual descriptor (SIFT-ScSPM) and Gist-based descriptors (Enhanced Gist and Enhanced multichannel Gist (Enhanced mGist)). Indoor scene classification is carried out by multi-class linear and non-linear Support Vector Machine (SVM) classifiers. The feature extraction methodology and a critical review of several visual descriptors used for indoor scene recognition are also discussed from an experimental perspective. An empirical study is conducted on the Massachusetts Institute of Technology (MIT) 67 indoor scene classification data set, assessing the classification accuracy of state-of-the-art visual descriptors and of the proposed Enhanced mGist, Speeded Up Robust Features-Spatial Pyramid Matching (SURF-SPM) and Enhanced SIFT-ScSPM visual descriptors. Experimental results show that the proposed Enhanced SIFT-ScSPM visual descriptor performs better, with higher classification rate, precision, recall, and area under the Receiver Operating Characteristic (ROC) curve values with respect to the state-of-the-art and the proposed Enhanced mGist and SURF-SPM visual descriptors.
23

Wang, Yan Wei, and Hui Li Yu. "Image Registration Method Based on PCA-SIFT Feature Detection." Advanced Materials Research 712-715 (June 2013): 2395–98. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2395.

Abstract:
SIFT (scale-invariant feature transform), which handles images with varying orientation and zoom well, is widely used in image registration, but the algorithm is complex and its processing time long. We therefore use PCA-SIFT (Principal Component Analysis SIFT) for image registration. Compared with the SIFT descriptor, PCA-SIFT reduces the dimensionality of the SIFT feature, enhances matching accuracy, and reduces elapsed time. The mutual information method is then used in this paper to estimate the best points. Experimental results show that the PCA-SIFT algorithm is simpler, robust, and reliable.
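The core of PCA-SIFT is a linear projection of 128-D SIFT descriptors onto their leading principal components. A minimal sketch follows; the 36-D target follows the original PCA-SIFT paper and is an assumption here, as this abstract does not state the dimension used.

```python
import cv2
import numpy as np

img = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
kp, des = cv2.SIFT_create().detectAndCompute(img, None)  # des: N x 128

mean = des.mean(axis=0)
centered = des - mean
# eigenvectors of the descriptor covariance via SVD
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:36]                       # keep the top 36 principal directions
des_pca = centered @ basis.T          # N x 36 compact descriptors

# at matching time, project query descriptors with the same mean and basis
```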
24

Zhang, Jiaming, Xuejuan Hu, Tan Zhang, Shiqian Liu, Kai Hu, Ting He, Xiaokun Yang, et al. "Binary Neighborhood Coordinate Descriptor for Circuit Board Defect Detection." Electronics 12, no. 6 (March 17, 2023): 1435. http://dx.doi.org/10.3390/electronics12061435.

Abstract:
Due to the periodicity of circuit boards, keypoint-based registration algorithms are less robust in circuit board detection and are prone to misregistration. In this paper, the binary neighborhood coordinate descriptor (BNCD) is proposed and applied to circuit board image registration. The BNCD consists of three parts: a neighborhood description, a coordinate description, and a brightness description. The neighborhood description contains the grayscale information of the neighborhood and is the main part of BNCD. The coordinate description introduces the actual position of the keypoints in the image, which solves the problem of inter-period matching of keypoints. The brightness description introduces the concept of bright and dark points, which improves the distinguishability of BNCD and reduces the amount of matching computation. Experimental results show that in circuit board image registration, the matching precision and recall of BNCD are better than those of classic algorithms such as the scale-invariant feature transform (SIFT) and speeded up robust features (SURF), and the calculation of descriptors takes less time.
25

Holliday, Andrew, and Gregory Dudek. "Scale-invariant localization using quasi-semantic object landmarks." Autonomous Robots 45, no. 3 (February 25, 2021): 407–20. http://dx.doi.org/10.1007/s10514-021-09973-w.

Abstract:
This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box $\mathbf{b}$ defining an object, a descriptor $\mathbf{q}$ of that object produced by a Convolutional Neural Network, and a set of classical point features within $\mathbf{b}$. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state-of-the-art. They allow localization under scale change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
26

Dong, Yunyun, Weili Jiao, Tengfei Long, Lanfa Liu, Guojin He, Chengjuan Gong, and Yantao Guo. "Local Deep Descriptor for Remote Sensing Image Feature Matching." Remote Sensing 11, no. 4 (February 19, 2019): 430. http://dx.doi.org/10.3390/rs11040430.

Abstract:
Feature matching via local descriptors is one of the most fundamental problems in many computer vision tasks, as well as in the remote sensing image processing community. For example, in feature-based remote sensing image registration, feature matching is a vital process that determines the quality of the transform model, and in the matching process the quality of the feature descriptor directly determines the matching result. At present, the most commonly used descriptors are hand-crafted from the designer's expertise or intuition. However, it is hard to cover all the different cases, especially for remote sensing images with nonlinear grayscale deformation. Recently, deep learning has shown explosive growth and improved the performance of tasks in various fields, especially in the computer vision community. Here, we created remote sensing image training patch samples, named Invar-Dataset, in a novel and automatic way, then trained a deep convolutional neural network, named DescNet, to generate a robust feature descriptor for feature matching. A special experiment was carried out to illustrate that our training dataset is more helpful for training a network to generate a good feature descriptor. A qualitative experiment then showed that the feature descriptor vectors learned by DescNet can successfully register remote sensing images with large grayscale differences. A quantitative experiment further illustrated that the feature vectors generated by DescNet acquire more matched points than those generated by the hand-crafted Scale Invariant Feature Transform (SIFT) descriptor and other networks; on average, the matched points acquired by DescNet were almost twice those acquired by other methods. Finally, we analyzed the advantages of our training dataset Invar-Dataset and of DescNet, and discussed possible developments for training deep descriptor networks.
27

Li, Jing, and Tao Yang. "Efficient and Robust Feature Matching via Local Descriptor Generalized Hough Transform." Applied Mechanics and Materials 373-375 (August 2013): 536–40. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.536.

Abstract:
Robust and efficient matching of indistinctive features and outlier removal is an essential problem in many computer vision applications. In this paper we present a simple and fast algorithm named LDGTH (Local Descriptor Generalized Hough Transform) to handle this problem. The main characteristics of the proposed method are: (1) A novel local descriptor generalized Hough transform framework is presented in which the local geometric characteristics of invariant feature descriptors are fused together as a global constraint for feature correspondence verification. (2) Different from the standard generalized Hough transform, our approach greatly reduces the computational and storage requirements of the parameter space by taking advantage of the invariant feature correspondences. (3) The proposed algorithm can be seamlessly embedded into existing image matching frameworks, significantly improving image matching performance in both speed and robustness under challenging conditions. In the experiments we use both synthetic image data and real-world data with high outlier ratios and severe changes in viewpoint, scale, illumination, image blur, compression and noise to evaluate the proposed method, and the results demonstrate that our approach achieves faster and better matching performance compared to traditional algorithms.
28

Hagiwara, Hayato, Yasufumi Touma, Kenichi Asami, and Mochimitsu Komori. "FPGA-Based Stereo Vision System Using Gradient Feature Correspondence." Journal of Robotics and Mechatronics 27, no. 6 (December 18, 2015): 681–90. http://dx.doi.org/10.20965/jrm.2015.p0681.

Abstract:
[Figure: mobile robot with a stereo vision system] This paper describes an autonomous mobile robot stereo vision system that uses gradient feature correspondence and local image feature computation on a field programmable gate array (FPGA). Among the interest point detectors and descriptors studied for mobile robot navigation are the Harris operator and the scale-invariant feature transform (SIFT). Most of these require heavy computation, however, and using them may burden some computers. Our purpose here is to present an interest point detector and a descriptor suitable for FPGA implementation. Results show that a detector using gradient variance inspection performs faster than SIFT or speeded-up robust features (SURF), and is more robust against illumination changes than any other method compared in this study. A descriptor with a hierarchical gradient structure has a simpler algorithm than the SIFT and SURF descriptors, and its stereo matching achieves better performance than SIFT or SURF.
29

Guo, Sheng, Weilin Huang, and Yu Qiao. "Improving scale invariant feature transform with local color contrastive descriptor for image classification." Journal of Electronic Imaging 26, no. 1 (February 15, 2017): 013015. http://dx.doi.org/10.1117/1.jei.26.1.013015.

30

Daoud, Luka, Muhammad Kamran Latif, H. S. Jacinto, and Nader Rafla. "A fully pipelined FPGA accelerator for scale invariant feature transform keypoint descriptor matching." Microprocessors and Microsystems 72 (February 2020): 102919. http://dx.doi.org/10.1016/j.micpro.2019.102919.

31

S. Sathiya, Devi. "Texture classification with modified rotation invariant local binary pattern and gradient boosting." International Journal of Knowledge-based and Intelligent Engineering Systems 26, no. 2 (September 29, 2022): 125–36. http://dx.doi.org/10.3233/kes220012.

Abstract:
Since texture is a prominent low-level feature of an image, most image processing and computer vision applications rely on it for efficient extraction, retrieval, visualization, and classification of images. Hence, texture analysis methods mainly concentrate on efficient feature extraction and representation of the image. The images captured and analyzed in many applications are not of the same (or similar) scale, orientation, and illumination, and texture can be regular, stochastic, periodic, homogeneous or inhomogeneous, and directional in nature. To address these issues, recent texture analysis methods focus on efficient, invariant feature extraction and representation with reduced dimension. Hence this paper proposes an invariant texture descriptor, Locality preserving Rotation Invariant Modified Directional Local Binary Pattern (LRIMDLBP), based on LBP. The classical LBP descriptor is widely used in most texture analysis applications due to its simplicity and robustness to illumination changes. However, it does not efficiently capture discriminative texture information, because it uses only sign information, ignores the magnitude values of the neighborhood, and suffers from high dimensionality. Many LBP variants have therefore been proposed; though most are geometrically or directionally invariant, they fail to address spatial locality and contrast invariance. To address these issues, the proposed LRIMDLBP incorporates spatial locality, contrast, and direction information into a rotation-invariant texture descriptor with reduced dimension. LRIMDLBP consists of 5 phases: (i) reference point identification, (ii) magnitude calculation, (iii) binary label computation based on a threshold, (iv) pattern identification in the dominant direction, and (v) LRIMDLBP code computation. Locality and rotation invariance are achieved by identifying and using a reference point in a local neighborhood; the reference point is the dominant pixel whose magnitude is largest in the neighborhood excluding the center pixel. Spatial locality and rotation invariance are achieved by assigning the LBP weights dynamically based on the reference point. The proposed method also preserves the direction information of the texture by comparing the magnitude of the pixel in the four dominant directions: horizontal, vertical, diagonal, and anti-diagonal. Finally, the proposed invariant LRIMDLBP descriptor computes a histogram based on the decimal pattern values, resulting in a texture feature with reduced dimension compared to other LBP variants. The performance of the proposed descriptor is evaluated on four large, well-known benchmark texture datasets, namely (i) CUReT, (ii) Outex, (iii) KTH-TIPS, and (iv) UIUC, against three classifiers: (i) K-Nearest Neighbor (K-NN), (ii) Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel, and (iii) Gradient Boosting Classifier (GBC). Extensive experimental results show that the ensemble-based GBC yields superior classification accuracies of 99.38%, 99.43%, 98.67% and 98.82% for the CUReT, Outex, KTH-TIPS and UIUC datasets respectively when compared with the other two classifiers, and also improves generalization ability. The proposed LRIMDLBP descriptor achieves approximately 15% higher classification accuracy than traditional LBP, and 1% to 2.5% higher than other state-of-the-art LBP variants.
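For readers unfamiliar with the baseline the paper extends, here is a minimal NumPy sketch of the classic 8-neighbour LBP code and its histogram; LRIMDLBP's reference-point, magnitude, and direction machinery is not reproduced here.

```python
import numpy as np

def lbp_8(img):
    """Classic 3x3 LBP: threshold each neighbour against the center pixel
    and pack the 8 sign bits into a code in [0, 255]."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]      # clockwise ring
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        code |= ((neigh >= c).astype(np.int32) << bit)
    return code

def lbp_histogram(img):
    # the 256-bin code histogram is the texture feature fed to a classifier
    return np.bincount(lbp_8(img).ravel(), minlength=256)
```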
32

Gafour, Yacine, and Djamel Berrabah. "New Approach to Improve the Classification Process of Multi-Class Objects." International Journal of Organizational and Collective Intelligence 10, no. 2 (April 2020): 1–19. http://dx.doi.org/10.4018/ijoci.2020040101.

Abstract:
In recent years, several descriptors have been proposed for many image classification applications. Accelerated-KAZE (A-KAZE) is considered one of the descriptors that has shown high performance for feature extraction. A-KAZE uses a binary descriptor called Modified-Local Difference Binary, which is very efficient and invariant to changes in rotation and scale. However, this representation does not take into account spatial information between objects in the image, which can reduce image classification performance. This article presents a new approach to improve the performance of the A-KAZE descriptor for image classification. The authors first establish the connection between the A-KAZE descriptor and the bag-of-features model. Spatial Pyramid Matching (SPM) is then adopted on top of the A-KAZE descriptor to reinforce its robustness by introducing spatial information. Experimental results on several datasets show that the A-KAZE descriptor with SPM gives very satisfactory results compared with other existing state-of-the-art methods.
33

Ding, Can, Chang Wen Qu, and Feng Su. "An Improved SIFT Matching Algorithm." Applied Mechanics and Materials 239-240 (December 2012): 1232–37. http://dx.doi.org/10.4028/www.scientific.net/amm.239-240.1232.

Abstract:
The high dimensionality and complexity of the Scale Invariant Feature Transform (SIFT) descriptor not only occupy memory space but also slow down feature matching. We adopt a statistical method over each feature point's neighborhood gradients: a local statistical area is constructed from 8 concentric square rings centered on the feature point, the gradients of the pixels in each ring are computed and accumulated into 8 directional bins, and the bin values are then sorted in descending order and finally normalized. The new feature descriptor reduces the feature dimension from 128 to 64; the proposed method improves matching speed while maintaining matching precision.
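A hedged reading of that construction as a NumPy sketch: accumulate gradient magnitudes into 8 orientation bins within each of 8 concentric square rings, sort each ring's bins in descending order, and normalize, yielding a 64-D descriptor. Details such as the sorting scope are interpretations of the abstract, not the authors' code.

```python
import numpy as np

def ring_descriptor(patch, n_rings=8, n_bins=8):
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    h, w = patch.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # Chebyshev distance from the center defines square rings
    ring = (np.maximum(np.abs(ys - cy), np.abs(xs - cx))
            * n_rings / (max(h, w) / 2)).astype(int).clip(0, n_rings - 1)
    bins = (ang * n_bins / (2 * np.pi)).astype(int).clip(0, n_bins - 1)
    desc = np.zeros((n_rings, n_bins))
    np.add.at(desc, (ring, bins), mag)        # accumulate magnitudes per ring
    desc = -np.sort(-desc, axis=1)            # descending sort within rings
    return (desc / (np.linalg.norm(desc) + 1e-12)).ravel()   # 64-D, unit norm
```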
34

Li, Ruoxiang, Dianxi Shi, Yongjun Zhang, Ruihao Li, and Mingkun Wang. "Asynchronous event feature generation and tracking based on gradient descriptor for event cameras." International Journal of Advanced Robotic Systems 18, no. 4 (July 1, 2021): 172988142110270. http://dx.doi.org/10.1177/17298814211027028.

Abstract:
Recently, the event camera has become a popular and promising vision sensor in research on simultaneous localization and mapping and computer vision owing to its advantages: low latency, high dynamic range, and high temporal resolution. As a basic part of a feature-based SLAM system, feature tracking with event cameras is still an open question. In this article, we present a novel asynchronous event feature generation and tracking algorithm operating directly on event-streams to fully utilize the natural asynchronism of event cameras. The proposed algorithm consists of an event-corner detection unit, a descriptor construction unit, and an event feature tracking unit. The event-corner detection unit implements a fast and asynchronous corner detector to extract event-corners from event-streams. For the descriptor construction unit, we propose a novel asynchronous gradient descriptor inspired by the scale-invariant feature transform descriptor, which helps to achieve a quantitative measurement of similarity between event feature pairs. The construction of the gradient descriptor can be decomposed into three stages: speed-invariant time surface maintenance and extraction, principal orientation calculation, and descriptor generation. The event feature tracking unit combines the constructed gradient descriptor with an event feature matching method to achieve asynchronous feature tracking. We implement the proposed algorithm in C++ and evaluate it on a public event dataset. The experimental results show that our proposed method improves tracking accuracy and real-time performance when compared with the state-of-the-art asynchronous event-corner tracker, with no compromise on feature tracking lifetime.
35

Kuo, Chien-Hung, Erh-Hsu Huang, Chiang-Heng Chien, and Chen-Chien Hsu. "FPGA Design of Enhanced Scale-Invariant Feature Transform with Finite-Area Parallel Feature Matching for Stereo Vision." Electronics 10, no. 14 (July 8, 2021): 1632. http://dx.doi.org/10.3390/electronics10141632.

Abstract:
In this paper, we propose an FPGA-based enhanced-SIFT with feature matching for stereo vision. Gaussian blur and difference-of-Gaussian pyramids are realized in parallel to accelerate the processing time required for multiple convolutions. As for the feature descriptor, a simple triangular identification approach with a look-up table is proposed to efficiently determine the direction and gradient of the feature points; the dimension of the feature descriptor is thus reduced by half compared to conventional approaches. As far as feature detection is concerned, the condition for high-contrast detection is simplified by moderately changing a threshold value, which also reduces the resulting hardware in the realization. The proposed enhanced-SIFT not only accelerates the operational speed but also reduces the hardware cost. The experimental results show that the proposed enhanced-SIFT reaches a frame rate of 205 fps for 640 × 480 images. Integrating two enhanced-SIFT modules, a finite-area parallel checking scheme is also proposed, without the aid of external memory, to improve the efficiency of feature matching. The resulting frame rate of the proposed stereo vision matching can be as high as 181 fps with good matching accuracy, as demonstrated in the experimental results.
36

Hasheminasab, M., H. Ebadi, and A. Sedaghat. "AN INTEGRATED RANSAC AND GRAPH BASED MISMATCH ELIMINATION APPROACH FOR WIDE-BASELINE IMAGE MATCHING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1-W5 (December 11, 2015): 297–300. http://dx.doi.org/10.5194/isprsarchives-xl-1-w5-297-2015.

Abstract:
In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, but because of illumination differences and viewpoint changes, wide-baseline image matching is much harder to handle. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use feature descriptors to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale and remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint based on RANSAC (random sample consensus) is a conventional model for mismatch elimination, particularly in computer vision; because only the distance from the epipolar line is considered, a few false matches remain in matching results selected on epipolar geometry and RANSAC alone. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, which has difficulties when the mismatched points are surrounded by the same local neighbor structure. To overcome the limitations mentioned above, a new three-step matching scheme is presented in this study: the SIFT algorithm is used to obtain initial corresponding point sets; in the second step, the RANSAC algorithm is applied to reduce the outliers; finally, to remove the remaining mismatches, GTM based on the adjacent K-NN graph is implemented. Four different close-range image datasets with changes in viewpoint are utilized to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
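The first two stages of the three-step scheme (SIFT matching, then RANSAC with the epipolar constraint) can be sketched with OpenCV as follows; the final graph-based GTM stage is not shown.

```python
import cv2
import numpy as np

def sift_ransac_matches(img1, img2):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]  # ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # epipolar constraint: keep matches consistent with one fundamental matrix
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    return [m for m, ok in zip(good, mask.ravel()) if ok]
```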
37

Wang, Yan Wei, Si Qing Zhang, Bing Lin, Hong Liang, and Yan Ming Pan. "Feature Point Extraction Method of X-Ray Image Based on Scale Invariant." Applied Mechanics and Materials 274 (January 2013): 667–70. http://dx.doi.org/10.4028/www.scientific.net/amm.274.667.

Abstract:
A scale-invariant feature point extraction method for X-ray images is proposed in this paper for industrial X-ray images with low contrast and artifacts. First of all, the original image is scale-transformed with Gaussian kernels to build the DoG (difference of Gaussians) multi-scale pyramid. Then, the location and scale of the key points are refined by fitting a three-dimensional quadratic function. Finally, a simplified SIFT descriptor characterizes the key points. Experimental results show that the algorithm has good stability under translation, rotation, and affine transformation; even with 10 percent normalized Gaussian noise, the algorithm can still detect feature points accurately.
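The first step, building a DoG pyramid from Gaussian-blurred copies of the image, can be sketched as follows; the number of scales and the sigma schedule are standard SIFT conventions assumed here, not values from the paper.

```python
import cv2
import numpy as np

def dog_pyramid(img, n_scales=5, sigma0=1.6):
    k = 2 ** (1.0 / (n_scales - 2))           # multiplicative sigma step
    gaussians = [cv2.GaussianBlur(img.astype(np.float32), (0, 0),
                                  sigma0 * k ** i) for i in range(n_scales)]
    # differences of adjacent Gaussian levels approximate the scale-space
    # Laplacian; extrema of this stack are the candidate keypoints
    return [g2 - g1 for g1, g2 in zip(gaussians[:-1], gaussians[1:])]
```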
38

Wang, Lei, Chunhong Chang, Zhouqi Liu, Jin Huang, Cong Liu, and Chunxiang Liu. "A Medical Image Fusion Method Based on SIFT and Deep Convolutional Neural Network in the SIST Domain." Journal of Healthcare Engineering 2021 (April 21, 2021): 1–8. http://dx.doi.org/10.1155/2021/9958017.

Abstract:
Traditional medical image fusion methods, such as the well-known multi-scale decomposition-based methods, usually suffer from poor sparse representations of the salient features and a low ability of the fusion rules to transfer the captured feature information. To deal with this problem, a medical image fusion method based on the scale invariant feature transform (SIFT) descriptor and a deep convolutional neural network (CNN) in the shift-invariant shearlet transform (SIST) domain is proposed. Firstly, the images to be fused are decomposed into high-pass and low-pass coefficients. Then, the fusion of the high-pass components is implemented under a rule based on a pre-trained CNN model, which mainly consists of four steps: feature detection, initial segmentation, consistency verification, and the final fusion. The fusion of the low-pass subbands is based on the matching degree computed by the SIFT descriptor to capture the features of the low-frequency components. Finally, the fusion results are obtained by inversion of the SIST. Taking the typical standard deviation, QAB/F, entropy, and mutual information as objective measurements, the experimental results demonstrate that detailed information is well preserved without artifacts or distortions by the proposed method, and better quantitative performance is also obtained.
39

Xu, Yun Xi, and Fang Chen. "Real-Time and Robust Stereo Visual Navigation Localization Algorithm Based on ORB." Applied Mechanics and Materials 241-244 (December 2012): 478–82. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.478.

Abstract:
The biggest challenge of visual navigation localization is feature extraction and association. Currently, the most widely used approach combines simple corner features with a simple matching strategy based on SAD or NCC. Another option is scale-invariant features with rotation-invariant descriptors, typically SIFT or SURF. Feature extraction and matching methods based on SIFT or SURF are accurate and robust, but their computational complexity is too high for real-time navigation localization. This paper presents a new fast, accurate, and robust stereo vision navigation localization method based on the newly developed ORB feature and descriptor. First, we present our ORB-based matching method. Then, we obtain matching inliers and initial motion estimation parameters using RANSAC and a three-point motion estimation method. Finally, a nonlinear motion refinement method is used to polish the solution. Experimental results show that our method is robust, accurate, and real-time.
40

Awad, Ali Ismail, and M. Hassaballah. "Bag-of-Visual-Words for Cattle Identification from Muzzle Print Images." Applied Sciences 9, no. 22 (November 15, 2019): 4914. http://dx.doi.org/10.3390/app9224914.

Abstract:
Cattle, buffalo and cow identification plays an influential role in cattle traceability from birth to slaughter, understanding disease trajectories and large-scale cattle ownership management. Muzzle print images are considered discriminating cattle biometric identifiers for biometric-based cattle identification and traceability. This paper presents an exploration of the performance of the bag-of-visual-words (BoVW) approach in cattle identification using local invariant features extracted from a database of muzzle print images. Two local invariant feature detectors—namely, speeded-up robust features (SURF) and maximally stable extremal regions (MSER)—are used as feature extraction engines in the BoVW model. The performance evaluation criteria include several factors, namely, the identification accuracy, processing time and the number of features. The experimental work measures the performance of the BoVW model under a variable number of input muzzle print images in the training, validation, and testing phases. The identification accuracy values when utilizing the SURF feature detector and descriptor were 75%, 83%, 91%, and 93% for when 30%, 45%, 60%, and 75% of the database was used in the training phase, respectively. However, using MSER as a points-of-interest detector combined with the SURF descriptor achieved accuracies of 52%, 60%, 67%, and 67%, respectively, when applying the same training sizes. The research findings have proven the feasibility of deploying the BoVW paradigm in cattle identification using local invariant features extracted from muzzle print images.
APA, Harvard, Vancouver, ISO and other styles
41

Zhu, J. T., C. F. Gong, M. X. Zhao, L. Wang, and Y. Luo. "IMAGE MOSAIC ALGORITHM BASED ON PCA-ORB FEATURE MATCHING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 7, 2020): 83–89. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-83-2020.

Full text of the source
Abstract:
Abstract. In image stitching, the ORB (Oriented FAST and Rotated BRIEF) algorithm lacks scale invariance and suffers from a high mismatch rate. An image stitching method based on PCA-ORB (Principal Component Analysis - Oriented FAST and Rotated BRIEF) feature matching is therefore proposed. First, the ORB algorithm is used to extract feature points with a uniform spatial distribution. Second, principal component analysis (PCA) reduces the dimensionality of the traditional ORB feature descriptor, lowering the complexity of the descriptor data. Third, the k-nearest neighbor (KNN) algorithm performs coarse matching on the dimension-reduced feature points, after which the random sample consensus algorithm (RANSAC) removes mismatched points. Finally, a fade-in fade-out fusion algorithm blends the images. In eight sets of simulation experiments, the stitching speed is improved relative to the PCA-SIFT algorithm. The experimental results show that the proposed algorithm improves stitching speed while preserving stitching quality, making it suitable for fast, real-time, and large-scale image fusion applications.
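A minimal sketch of the matching chain described in the abstract, using standard OpenCV and scikit-learn building blocks; the target dimensionality of 16 and the 0.75 ratio are illustrative assumptions, not the paper's values.

    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_orb_match(img1, img2, dim=16):
        orb = cv2.ORB_create(nfeatures=2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        # PCA compresses the ORB descriptors to `dim` components, fitted on
        # the pooled descriptors of both images.
        pca = PCA(n_components=dim).fit(np.vstack([d1, d2]).astype(np.float32))
        r1 = pca.transform(d1.astype(np.float32)).astype(np.float32)
        r2 = pca.transform(d2.astype(np.float32)).astype(np.float32)
        # Coarse KNN matching with Lowe's ratio test.
        pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(r1, r2, k=2)
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good])
        dst = np.float32([k2[m.trainIdx].pt for m in good])
        # RANSAC removes the remaining mismatches while fitting the
        # homography used for stitching.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H, mask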
APA, Harvard, Vancouver, ISO and other styles
42

Jiang, Haili, Panpan Liu, Qingqing Yang, Liang Xu, and Shuai Zhang. "A Fast Image Matching Method Based on Improved SURF." Journal of Physics: Conference Series 2575, no. 1 (August 1, 2023): 012002. http://dx.doi.org/10.1088/1742-6596/2575/1/012002.

Full text of the source
Abstract:
Abstract To solve the problems of low matching accuracy, slow speed, and high system overhead in image matching, a rotation-invariant binary descriptor construction method based on Speeded-Up Robust Features (SURF) feature point detection is designed. It uses Fast Library for Approximate Nearest Neighbors (FLANN) parameters adapted to the type of feature descriptor produced by each feature extraction algorithm, together with a filtering mechanism that screens out wrong matches. The method remains scale- and rotation-invariant while simplifying the representation of the feature descriptors, and the binary nature of the descriptors speeds up the initial matching stage. Finally, the Hamming distance is used as the filtering mechanism to improve the final matching success rate. The experimental results show that matching accuracy improves by 1.5% and matching time is reduced by 0.116 s, while robustness to noise and rotation is preserved.
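For binary descriptors, FLANN must be configured with its LSH index rather than the KD-tree used for float descriptors; the sketch below shows that configuration together with a Hamming-distance filter. ORB descriptors stand in for the paper's custom rotation-aware binary descriptor, and the thresholds are illustrative.

    import cv2

    FLANN_INDEX_LSH = 6  # OpenCV's identifier for the LSH index

    def match_binary(d1, d2):
        index_params = dict(algorithm=FLANN_INDEX_LSH,
                            table_number=6, key_size=12, multi_probe_level=1)
        flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
        pairs = flann.knnMatch(d1, d2, k=2)
        # Ratio test plus an absolute Hamming-distance cap as the
        # filtering mechanism that screens out wrong matches.
        return [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.7 * p[1].distance
                and p[0].distance < 64]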
APA, Harvard, Vancouver, ISO and other styles
43

Bozorgi, Hamed, and Ali Jafari. "Fast uniform content-based satellite image registration using the scale-invariant feature transform descriptor." Frontiers of Information Technology & Electronic Engineering 18, no. 8 (August 2017): 1108–16. http://dx.doi.org/10.1631/fitee.1500295.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
44

Saleem, Sajid, and Abdul Bais. "Visible Spectrum and Infra-Red Image Matching: A New Method." Applied Sciences 10, no. 3 (February 9, 2020): 1162. http://dx.doi.org/10.3390/app10031162.

Full text of the source
Abstract:
Textural and intensity changes between Visible Spectrum (VS) and Infra-Red (IR) images degrade the performance of feature points. We propose a new method based on a regression technique to overcome this problem. The method consists of three main steps. In the first step, feature points are detected in VS-IR images and Modified Normalized (MN)-Scale Invariant Feature Transform (SIFT) descriptors are computed. In the second step, correct MN-SIFT descriptor matches between the VS-IR images are identified using the projection error, and a regression model is trained on the correct MN-SIFT descriptors. In the third step, the regression model processes the MN-SIFT descriptors of test VS images to remove the misalignment with the MN-SIFT descriptors of test IR images and thus overcome the textural and intensity changes. Experiments were performed on two different VS-IR image datasets. The results show that the proposed method performs well, achieving on average 14% and 15% higher precision and matching scores, respectively, than the recently proposed Histograms of Directional Maps (HoDM) descriptor.
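The regression step can be sketched compactly: learn a mapping that pulls VS descriptors toward their IR counterparts, then apply it to test VS descriptors before matching. Ridge regression is an assumption here, and plain SIFT descriptors approximate the paper's MN-SIFT descriptors.

    import numpy as np
    from sklearn.linear_model import Ridge

    def train_mapping(vs_desc, ir_desc):
        # Rows are descriptors of verified correct matches (identified via
        # projection error in the paper's second step).
        return Ridge(alpha=1.0).fit(vs_desc, ir_desc)

    def align(model, vs_test_desc):
        # Regressed descriptors live in an "IR-like" space, reducing the
        # textural/intensity misalignment before nearest-neighbour matching.
        return model.predict(vs_test_desc)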
APA, Harvard, Vancouver, ISO and other styles
45

Yu, Guorong, and Shuangming Zhao. "A New Feature Descriptor for Multimodal Image Registration Using Phase Congruency." Sensors 20, no. 18 (September 8, 2020): 5105. http://dx.doi.org/10.3390/s20185105.

Full text of the source
Abstract:
Images captured by different sensors with different spectral bands exhibit non-linear intensity changes between image pairs. Classic feature descriptors cannot handle this problem and tend to yield unsatisfactory results. Inspired by the illumination- and contrast-invariant properties of phase congruency, we propose a new descriptor to tackle this problem. The descriptor is generated in three steps. (1) Images are convolved with a bank of log-Gabor filters at different scales and orientations. (2) For each keypoint, a window of fixed size is selected and divided into several blocks, and an oriented magnitude histogram and a histogram based on the orientation of the minimum moment of phase congruency are calculated in each block. (3) The two histograms are normalized separately and concatenated to form the proposed descriptor. Performance evaluation experiments on three datasets validate the superiority of the proposed method. Experimental results indicate that the proposed descriptor outperforms most classic and state-of-the-art descriptors in terms of precision and recall within an acceptable computational time.
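Step (1) of the descriptor can be sketched as follows: one log-Gabor filter of a given scale and orientation applied in the frequency domain. The parameter values (sigma ratio, angular spread) are illustrative defaults, not the paper's settings.

    import numpy as np

    def log_gabor_response(img, wavelength=6.0, angle=0.0,
                           sigma_ratio=0.65, angular_sigma=np.pi / 6):
        rows, cols = img.shape
        y, x = np.mgrid[-(rows // 2):rows - rows // 2,
                        -(cols // 2):cols - cols // 2]
        radius = np.sqrt((x / cols) ** 2 + (y / rows) ** 2)
        radius[rows // 2, cols // 2] = 1.0        # avoid log(0) at DC
        # Radial part: a Gaussian on a logarithmic frequency axis.
        f0 = 1.0 / wavelength
        radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
        radial[rows // 2, cols // 2] = 0.0        # zero out the DC component
        # Angular part: a Gaussian in orientation around `angle`.
        theta = np.arctan2(-y, x)
        dtheta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
        angular = np.exp(-(dtheta ** 2) / (2 * angular_sigma ** 2))
        filt = np.fft.ifftshift(radial * angular)
        # The complex response carries the local amplitude and phase from
        # which phase congruency is computed across scales and orientations.
        return np.fft.ifft2(np.fft.fft2(img) * filt)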
APA, Harvard, Vancouver, ISO and other styles
46

Hwang, Sung-Wook, Taekyeong Lee, Hyunbin Kim, Hyunwoo Chung, Jong Gyu Choi, and Hwanmyeong Yeo. "Classification of wood knots using artificial neural networks with texture and local feature-based image descriptors." Holzforschung 76, no. 1 (January 1, 2021): 1–13. http://dx.doi.org/10.1515/hf-2021-0051.

Full text of the source
Abstract:
Abstract This paper describes feature-based techniques for wood knot classification. For the automated classification of macroscopic wood knot images, models were established using artificial neural networks with texture and local feature descriptors, and the feature extraction algorithms were compared. Classification models trained with the texture descriptors, gray-level co-occurrence matrix (GLCM) and local binary pattern, achieved better performance than those trained with the local feature descriptors, scale-invariant feature transform and dense scale-invariant feature transform. This confirms that wood knot classification is better approached as texture classification than as morphology-based classification. The GLCM produced the highest F1 score despite representing images with relatively low-dimensional feature vectors. The scale-invariant feature transform algorithm could not detect a sufficient number of features in the knot images; hence, the histogram of oriented gradients and dense scale-invariant feature transform algorithms, which describe the entire image, were better suited to wood knot classification. The artificial neural network model provided better classification performance than the support vector machine and k-nearest neighbor models, which suggests the suitability of a nonlinear classification model for this task.
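The best-performing combination reported above (GLCM features into a neural network) can be sketched with scikit-image and scikit-learn. The distances, angles, quantization level, and network size below are illustrative choices, not the paper's exact setup.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.neural_network import MLPClassifier

    def glcm_features(gray_u8):
        # Quantize an 8-bit knot image to 32 gray levels to keep the
        # co-occurrence matrix small and the feature vector low-dimensional.
        q = (gray_u8 // 8).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1, 2],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=32, symmetric=True, normed=True)
        props = ['contrast', 'homogeneity', 'energy', 'correlation']
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # X: stacked per-image feature vectors, y: knot class labels.
    # clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)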
APA, Harvard, Vancouver, ISO and other styles
47

Lu, Ying, Hui Qin Wang, Fei Xu, and Wei Guang Liu. "The Feature Extraction and Matching Algorithm Based on the Fire Video Image Orientation." Applied Mechanics and Materials 380-384 (August 2013): 3986–89. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3986.

Full text of the source
Abstract:
Because the SIFT (scale-invariant feature transform) algorithm cannot accurately locate flame shape features and is computationally intensive, this article proposes a stereo fire flame matching method for video images that combines Harris corners with the SIFT algorithm. The algorithm first extracts feature points with the Harris operator in Gaussian scale space and assigns a main direction to each feature point; it then computes a 32-dimensional descriptor vector for each feature point and matches the two images using the Euclidean distance. Matching experiments demonstrate that the new algorithm improves the significance of the extracted shape feature points and maintains a match rate of 96%, while the computation time is reduced by 27.8%. The algorithm therefore has practical value.
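The hybrid detector/descriptor idea can be sketched with OpenCV: Harris corners supply the locations, and SIFT supplies the descriptors. Note that OpenCV's SIFT always emits 128-dimensional descriptors, so the paper's 32-dimensional variant and its orientation assignment are not reproduced; the keypoint size of 7 is an illustrative assumption.

    import cv2
    import numpy as np

    def harris_sift(gray):
        # Harris corner detection via goodFeaturesToTrack with the Harris score.
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True, k=0.04)
        kps = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in pts]
        # SIFT descriptors computed at the Harris corner locations.
        return cv2.SIFT_create().compute(gray, kps)

    def match(d1, d2, ratio=0.75):
        # Euclidean-distance matching with Lowe's ratio test.
        pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        return [m for m, n in pairs if m.distance < ratio * n.distance]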
APA, Harvard, Vancouver, ISO and other styles
48

Li, Xin, Bin Feng, Sai Qiao, Haiyan Wei, and Changli Feng. "SIFT-GVF-based lung edge correction method for correcting the lung region in CT images." PLOS ONE 18, no. 2 (February 28, 2023): e0282107. http://dx.doi.org/10.1371/journal.pone.0282107.

Full text of the source
Abstract:
Juxtapleural nodules are excluded from the segmented lung region by Hounsfield unit threshold-based segmentation. To re-include those regions in the lung region, a new approach using the scale-invariant feature transform and gradient vector flow models is presented in this study. First, the scale-invariant feature transform method detects all scale-invariant points in the binary lung region, and the boundary points in the neighborhood of each scale-invariant point are collected to form supportive boundary lines. Second, a Fourier descriptor provides a characteristic representation of each supportive boundary line, and the spectrum energy identifies the supportive boundaries that must be corrected. Third, the gradient vector flow-snake method corrects the recognized supportive boundaries with a smooth profile curve, giving an ideal correction edge in those regions. Finally, the performance of the proposed method is evaluated through experiments on multiple authentic computed tomography images. The accurate results and robustness show that the proposed method can correct the juxtapleural region precisely.
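The Fourier-descriptor test in the second step admits a compact sketch: encode a supportive boundary line as a complex sequence and check how its spectral energy is distributed. A smooth lung-wall segment concentrates energy at low frequencies, while a notched juxtapleural segment spreads it to higher ones. The one-eighth band and the 0.9 threshold are illustrative assumptions.

    import numpy as np

    def spectrum(boundary_pts):
        # boundary_pts: (N, 2) array of (x, y) points along one boundary line.
        z = boundary_pts[:, 0] + 1j * boundary_pts[:, 1]
        return np.fft.fft(z - z.mean())   # mean removal: translation invariance

    def needs_correction(boundary_pts, low_freq_share=0.9):
        s = np.abs(spectrum(boundary_pts)) ** 2
        n = len(s)
        low = s[:n // 8].sum() + s[-(n // 8):].sum()  # low frequencies sit at both ends
        return low / max(s.sum(), 1e-12) < low_freq_share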
APA, Harvard, Vancouver, ISO and other styles
49

Tian, Ying, and De Bin Zhang. "Ear Recognition Based on Point Feature." Applied Mechanics and Materials 380-384 (August 2013): 3840–45. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3840.

Full text of the source
Abstract:
To improve the ear recognition rate, a method based on image point features is proposed in this paper. First, force field transformation theory is applied twice to the human ear image, extracting the structural feature points and the contour feature points of the ear, which together compose the feature point set. The feature points are then described with the scale-invariant feature transform descriptor, and a nearest-neighbor classifier is employed for recognition. Feature points extracted by the force field transformation are stable, reliable, and discriminative, and they are insensitive to variations in image resolution. The constructed descriptor addresses the drop in recognition rate caused by illumination changes, scale changes, rotation, and the slight alterations caused by pose variation. The experimental results show that the proposed algorithm not only effectively improves the ear recognition rate but also has good robustness.
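A sketch of the force field transform itself is given below: each pixel attracts every other pixel with a force proportional to its intensity and inversely proportional to the squared distance, computed here with FFT-based convolution for speed. The two-pass extraction of structural and contour points from the resulting field is not reproduced.

    import numpy as np
    from scipy.signal import fftconvolve

    def force_field(img):
        # img: 2-D grayscale array. Build the per-offset force kernel
        # k(d) = -d / |d|^3, i.e. the unit direction toward the attracting
        # pixel divided by the squared distance.
        h, w = img.shape
        y, x = np.mgrid[-(h - 1):h, -(w - 1):w].astype(np.float64)
        r3 = (x ** 2 + y ** 2) ** 1.5
        r3[h - 1, w - 1] = np.inf                 # no self-force at zero offset
        fx = fftconvolve(img, -x / r3, mode='same')
        fy = fftconvolve(img, -y / r3, mode='same')
        # Feature points (force wells and channels) lie at local extrema of
        # this vector field; SIFT descriptors are then computed at them.
        return fx, fy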
APA, Harvard, Vancouver, ISO and other styles
50

Feng, Qinping, Shuping Tao, Chunyu Liu, Hongsong Qu, and Wei Xu. "IFRAD: A Fast Feature Descriptor for Remote Sensing Images." Remote Sensing 13, no. 18 (September 20, 2021): 3774. http://dx.doi.org/10.3390/rs13183774.

Full text of the source
Abstract:
Feature description is a necessary step in feature-based remote sensing applications. Due to the limited resources on satellite platforms and the considerable amount of image data, feature description, which precedes feature matching, has to be fast and reliable. Current state-of-the-art feature description methods are time-consuming, as they quantitatively describe the detected features according to the surrounding gradients or pixels. Here, we propose a novel feature descriptor called Inter-Feature Relative Azimuth and Distance (IFRAD), which describes a feature through its relation to the other features in an image. IFRAD is applied after detecting FAST-like features: it first selects stable features according to several criteria, then computes their relationships, such as relative distances and azimuths, and describes these relationships according to rules that make them distinguishable while keeping affine invariance to some extent. Finally, a dedicated feature-similarity evaluator is designed to match features between two images. Compared with other state-of-the-art algorithms, the proposed method achieves a significant improvement in computational efficiency at the expense of a reasonable reduction in scale invariance.
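The core idea admits a short numeric sketch: describe each feature by the relative distances and azimuths to its stable neighbours. The neighbour count, median-distance scaling, and rotation referencing below are illustrative ways to gain some invariance, not the paper's exact regulations.

    import numpy as np

    def ifrad_describe(points, n_ref=8):
        # points: (N, 2) array of stable feature coordinates in one image.
        desc = []
        for p in points:
            d = points - p
            dist = np.hypot(d[:, 0], d[:, 1])
            az = np.arctan2(d[:, 1], d[:, 0])
            order = np.argsort(dist)[1:n_ref + 1]  # nearest neighbours, self excluded
            rel_dist = dist[order] / np.median(dist[order])       # scale normalization
            rel_az = np.sort((az[order] - az[order[0]]) % (2 * np.pi))  # rotation reference
            desc.append(np.hstack([rel_dist, rel_az]))
        # Descriptors from two images are then compared by the paper's
        # feature-similarity evaluator (approximated by, e.g., an L2 distance).
        return np.array(desc)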
APA, Harvard, Vancouver, ISO and other styles