Follow this link to see other types of publications on the topic: Invariant Feature Transform.

Journal articles on the topic "Invariant Feature Transform"

Create a correct reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Invariant Feature Transform".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a ".pdf" file and read its abstract online, whenever the metadata provide the relevant parameters.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lindeberg, Tony. "Scale Invariant Feature Transform". Scholarpedia 7, no. 5 (2012): 10491. http://dx.doi.org/10.4249/scholarpedia.10491.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Diaz-Escobar, Julia, Vitaly Kober and Jose A. Gonzalez-Fraga. "LUIFT: LUminance Invariant Feature Transform". Mathematical Problems in Engineering 2018 (28.10.2018): 1–17. http://dx.doi.org/10.1155/2018/3758102.

Abstract:
An illumination-invariant method for computing local feature points and descriptors, referred to as the LUminance Invariant Feature Transform (LUIFT), is proposed. The method extracts the most significant local features from images degraded by nonuniform illumination, geometric distortions, and heavy scene noise. Unlike most state-of-the-art descriptors, the proposed method utilizes image phase information rather than intensity variations and is therefore robust to nonuniform illumination and noise degradation. In this work, we first use the monogenic scale-space framework to compute the local phase, orientation, energy, and phase congruency of the image at different scales. A modified Harris corner detector is then applied to the monogenic signal components to compute the feature points of the image. The final descriptor is created from histograms of oriented gradients of phase congruency. Computer simulation results show that the proposed method yields superior feature detection and matching performance under illumination change, noise degradation, and slight geometric distortion compared with state-of-the-art descriptors.
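The final LUIFT descriptor is assembled from histograms of oriented gradients (in the paper, gradients of phase congruency). As a rough illustration of the histogram-building step only, not of the paper's monogenic pipeline, a minimal magnitude-weighted orientation histogram over raw intensity gradients might look like this:

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Histogram of gradient orientations over a patch, weighted by magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)           # orientations in [0, 2*pi)
    bins = np.floor(ang / (2 * np.pi / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())       # accumulate magnitude per bin
    return hist / (hist.sum() + 1e-12)               # normalize for illumination robustness

patch = np.outer(np.arange(8), np.ones(8))           # vertical intensity ramp
h = orientation_histogram(patch)                     # all mass falls in one bin
```

Real descriptors concatenate several such histograms over subregions; this sketch shows only the core accumulation.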
3

B. Daneshvar, M. "SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (23.08.2017): 27–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-27-2017.

Abstract:
This paper presents an enhanced method for extracting invariant features from images based on the Scale Invariant Feature Transform (SIFT). Although SIFT features are invariant to image scale and rotation, additive noise, and changes in illumination, the algorithm suffers from an excess of keypoints. Moreover, adding a hue feature, extracted from the combination of hue and illumination values in the HSI colour-space version of the target image, can speed up the matching phase. We therefore propose Scale Invariant Feature Transform plus Hue (SIFTH), which removes excess keypoints based on their Euclidean distances and appends hue to the feature vector to accelerate matching, the ultimate aim of feature extraction. In this paper we use the difference of hue features and the mean square error (MSE) of orientation histograms to find the keypoint most similar to the one under consideration. The keypoint matching method robustly identifies correct keypoints among clutter and occlusion while achieving real-time performance, and it yields a similarity factor between two keypoints. Moreover, removing excess keypoints with SIFTH helps the matching algorithm achieve this goal.
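The keypoint-pruning idea above, removing excess keypoints whose Euclidean distances fall below a threshold, can be sketched as follows (a hypothetical greedy variant for illustration, not the authors' exact rule):

```python
import numpy as np

def prune_keypoints(points, min_dist=5.0):
    """Greedily drop keypoints that lie within min_dist of an already kept one."""
    kept = []
    for p in np.asarray(points, dtype=float):
        if all(np.linalg.norm(p - q) >= min_dist for q in kept):
            kept.append(p)
    return np.array(kept)

pts = [(0, 0), (1, 1), (10, 10), (10.5, 10.5), (30, 0)]
reduced = prune_keypoints(pts, min_dist=5.0)   # keeps (0,0), (10,10), (30,0)
```

Fewer surviving keypoints means fewer descriptor comparisons in the matching phase, which is the speed-up the abstract describes.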
4

Taha, Mohammed A., Hanaa M. Ahmed and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)". Webology 19, no. 1 (20.01.2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.

Abstract:
Iris biometric authentication is considered one of the most dependable biometric modalities for identifying persons: iris patterns have invariant, stable, and distinguishing properties for personal identification. Due to this excellent dependability, iris recognition has received growing attention. Current iris recognition methods give good results, especially when NIR imaging and specific capture conditions are used in cooperation with the user. On the other hand, images captured using VW are affected by noise such as blurry images, eye skin, occlusion, and reflection, which degrades the overall performance of recognition systems. For both NIR and visible-spectrum iris images, this article presents an effective iris feature extraction strategy based on the scale-invariant feature transform (SIFT) algorithm. The proposed method was tested on different databases: CASIA v1 and ITTD v1 as NIR images, and UBIRIS v1 as visible-light color images. The proposed system gave good accuracy rates compared to existing systems: 96.2% on CASIA v1 and 96.4% on ITTD v1, while accuracy dropped to 84.0% on UBIRIS v1.
5

Yu, Ying, Cai Lin Dong, Bo Wen Sheng, Wei Dan Zhong and Xiang Lin Zou. "The New Approach to the Invariant Feature Extraction Using Ridgelet Transform". Applied Mechanics and Materials 651-653 (September 2014): 2241–44. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.2241.

Abstract:
To meet the requirement of multi-directional selectivity, the paper proposes a new approach to invariant feature extraction for handwritten Chinese characters, founded on the ridgelet transform. First, the Radon transform converts rotations of the original image into circular shifts in the Radon domain. Then, exploiting the row-shift invariance of the Fourier transform, a one-dimensional Fourier transform is applied in the Radon domain, so that the magnitude matrices exhibit rotation invariance as a typical feature, which benefits rotation-invariant feature extraction. Next, a one-dimensional wavelet transform is carried out along the rows, achieving good frequency selectivity and making it possible to extract sub-line features at the appropriate frequencies. Finally, the mean values, standard deviations, and energy values extracted from the ridgelet sub-bands form the feature vector. The approach satisfies the requirements of automatic form processing for handwritten Chinese character recognition.
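The rotation-invariance argument above rests on a standard fact: a rotation becomes a circular shift along the angle axis of the Radon domain, and the 1D Fourier magnitude is unchanged by circular shifts. A minimal numeric check of that fact (on a synthetic 1D angular profile, not an actual Radon transform):

```python
import numpy as np

rng = np.random.default_rng(0)
profile = rng.random(64)                  # stand-in for one angular row of a Radon signature
rotated = np.roll(profile, 17)            # image rotation ~ circular shift in the Radon domain

mag = np.abs(np.fft.fft(profile))
mag_rot = np.abs(np.fft.fft(rotated))     # Fourier magnitude is unchanged by the shift
```

By the shift theorem, the shift only multiplies each Fourier coefficient by a unit-modulus phase factor, so `mag` and `mag_rot` agree to floating-point precision.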
6

Chris, Lina Arlends, Bagus Mulyawan and Agus Budi Dharmawan. "A Leukocyte Detection System Using Scale Invariant Feature Transform Method". International Journal of Computer Theory and Engineering 8, no. 1 (February 2016): 69–73. http://dx.doi.org/10.7763/ijcte.2016.v8.1022.

7

CHEN, G. Y., and W. F. XIE. "CONTOUR-BASED FEATURE EXTRACTION USING DUAL-TREE COMPLEX WAVELETS". International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1233–45. http://dx.doi.org/10.1142/s0218001407005867.

Abstract:
A contour-based feature extraction method is proposed using the dual-tree complex wavelet transform and the Fourier transform. Features are extracted from the 1D signals r and θ, which reduces processing memory and time. The approximate shift invariance of the dual-tree complex wavelet transform and the translation invariance of the Fourier transform guarantee that the method is invariant to translation, rotation, and scaling. The method is used to recognize aircraft across different rotation angles and scaling factors. Experimental results show that it achieves better recognition rates than methods using only Fourier features and than Granlund's method. Its success is due to the desirable shift invariance of the dual-tree complex wavelet transform, the translation invariance of the Fourier spectrum, and our new complete representation of the pattern's outer contour.
8

Hwang, Sung-Wook, Taekyeong Lee, Hyunbin Kim, Hyunwoo Chung, Jong Gyu Choi and Hwanmyeong Yeo. "Classification of wood knots using artificial neural networks with texture and local feature-based image descriptors". Holzforschung 76, no. 1 (1.01.2021): 1–13. http://dx.doi.org/10.1515/hf-2021-0051.

Abstract:
This paper describes feature-based techniques for wood knot classification. For automated classification of macroscopic wood knot images, models were established using artificial neural networks with texture and local-feature image descriptors, and the performances of the feature extraction algorithms were compared. Classification models trained with the texture descriptors, gray-level co-occurrence matrix and local binary pattern, performed better than those trained with the local feature descriptors, scale-invariant feature transform and dense scale-invariant feature transform. This confirms that wood knot classification is better suited to a texture-based approach than to one based on morphological classification. The gray-level co-occurrence matrix produced the highest F1 score despite representing images with relatively low-dimensional feature vectors. The scale-invariant feature transform algorithm could not detect a sufficient number of features in the knot images; hence, the histogram of oriented gradients and dense scale-invariant feature transform algorithms, which describe the entire image, were better for wood knot classification. The artificial neural network model provided better classification performance than the support vector machine and k-nearest neighbor models, suggesting that a nonlinear classification model suits wood knot classification.
9

Huang, Yongdong, Jianwei Yang, Sansan Li and Wenzhen Du. "Polar radius integral transform for affine invariant feature extraction". International Journal of Wavelets, Multiresolution and Information Processing 15, no. 01 (January 2017): 1750005. http://dx.doi.org/10.1142/s0219691317500059.

Abstract:
An affine transform describes the same target from different viewpoints, giving an approximate model of the relationship between the images. Affine-invariant feature extraction plays an important role in object recognition and image registration. First, the polar radius integral transform (PRIT) is defined using the property that an affine transform maps straight lines to straight lines: PRIT integrates along the polar radius direction and converts images into closed curves that undergo the same affine transform as the original images. Second, an affine-invariant feature extraction algorithm based on PRIT is given. The proposed algorithm combines contour-based with region-based methods; it requires relatively little computation and can extract features from objects with several components. Finally, the noise resistance of PRIT (to Gaussian and salt-and-pepper noise) is discussed. Simulation results show that PRIT effectively extracts affine-invariant features and that low-order PRIT is highly robust to noise.
10

Wu, Shu Guang, Shu He and Xia Yang. "The Application of SIFT Method towards Image Registration". Advanced Materials Research 1044-1045 (October 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1392.

Abstract:
The scale-invariant feature transform (SIFT) is commonly used in object recognition, but the algorithm suffers from large memory consumption and low computation speed. In image registration methods based on point features, the SIFT feature is invariant to image scale and rotation and provides robust matching across a substantial range of affine distortion. Experiments show that, while keeping registration accuracy stable, the proposed algorithm reduces the high memory requirement and greatly improves efficiency, making it applicable to registering remote sensing images of large areas.
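SIFT-based registration pipelines like this one typically pair descriptors with a nearest-neighbour search and Lowe's ratio test before estimating the transform. A generic numpy sketch of the ratio test (illustrative, not the paper's specific implementation):

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Lowe's ratio test: accept a match only if the nearest neighbour is
    clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distance to every candidate
        j, k = np.argsort(dists)[:2]                 # nearest and second-nearest indices
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

a = np.array([[1.0, 0.0], [0.0, 1.0]])               # toy 2-D "descriptors"
b = np.array([[0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
m = match_ratio_test(a, b)                           # [(0, 0), (1, 1)]
```

Rejecting ambiguous matches this way is what keeps registration accuracy stable before a robust transform fit.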
11

Tikhomirova, T. A., G. T. Fedorenko, K. M. Nazarenko and E. S. Nazarenko. "LEFT: LOCAL EDGE FEATURES TRANSFORM". Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 189 (March 2020): 11–18. http://dx.doi.org/10.14489/vkit.2020.03.pp.011-018.

Abstract:
To detect point correspondences between images or 3D scenes, local texture descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-Up Robust Features), and BRIEF (Binary Robust Independent Elementary Features) are usually used. Formally they provide invariance to image rotation and scale, but these properties are achieved only approximately, owing to the discrete number of orientations and scales stored in the descriptor. The feature points preferred by such descriptors usually do not lie on actual object boundaries in 3D scenes and so are hard to use in epipolar relationships. At the same time, linking feature points to large-scale lines and edges is preferable for SLAM (Simultaneous Localization And Mapping) tasks, because their appearance is the most resistant to daily, seasonal, and weather variations. In this paper, an original feature point descriptor for edge images, LEFT (Local Edge Features Transform), is proposed. LEFT accumulates the directions and contrasts of alternative straight segments tangent to lines and edges in the vicinity of feature points. Thanks to this structure, the mutual orientation of LEFT descriptors is evaluated and taken into account directly at the comparison stage. LEFT descriptors adapt to the shape of contours in the vicinity of feature points, so they can be used to analyze local and global geometric distortions of various kinds. The article presents the results of comparative testing of LEFT and common texture-based descriptors and considers alternative ways of representing them in a computer vision system.
12

Qu, Zhong, and Zheng Yong Wang. "The Improved Algorithm of Scale Invariant Feature Transform on Palmprint Recognition". Advanced Materials Research 186 (January 2011): 565–69. http://dx.doi.org/10.4028/www.scientific.net/amr.186.565.

Abstract:
This paper presents a new method of palmprint recognition based on an improved scale-invariant feature transform (SIFT) algorithm that combines Euclidean distance with weighted sub-regions. It is invariant to scale, rotation, affine transformation, perspective, and illumination, and is also robust to target motion, occlusion, noise, and other factors. Simulation results show that the recognition rate of the improved SIFT algorithm is higher than that of the original SIFT algorithm.
13

Gao, Junchai, and Zhen Sun. "An Improved ASIFT Image Feature Matching Algorithm Based on POS Information". Sensors 22, no. 20 (12.10.2022): 7749. http://dx.doi.org/10.3390/s22207749.

Abstract:
The affine scale-invariant feature transform (ASIFT) algorithm is a feature extraction algorithm with affine and scale invariance, suitable for image feature matching with unmanned aerial vehicles (UAVs). However, the matching process suffers from low efficiency and mismatching. To improve matching efficiency, the proposed algorithm first simulates image distortion based on position and orientation system (POS) information from real-time UAV measurements, reducing the number of simulated images. Then, the scale-invariant feature transform (SIFT) algorithm is used for feature point detection, and the extracted feature points are combined with the binary robust invariant scalable keypoints (BRISK) descriptor to generate a binary feature descriptor, which is matched using the Hamming distance. Finally, to improve the matching accuracy of the UAV images, a false-match elimination algorithm based on random sample consensus (RANSAC) is proposed. In four groups of experiments, the proposed algorithm is compared with SIFT and ASIFT. The results show that the algorithm optimizes the matching effect and improves the matching speed.
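Matching binary descriptors such as BRISK with the Hamming distance, as described above, reduces to counting differing bits. A brute-force sketch on toy bit vectors (illustrative only; real implementations pack bits into words and use XOR/popcount):

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Match binary descriptors by minimum Hamming distance (brute force)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != d, axis=1)   # Hamming distance per candidate
        j = int(np.argmin(dists))
        matches.append((i, j, int(dists[j])))
    return matches

a = np.array([[0, 1, 1, 0, 1, 0, 0, 1]], dtype=np.uint8)
b = np.array([[1, 1, 1, 0, 1, 0, 0, 1],      # distance 1
              [0, 0, 0, 1, 0, 1, 1, 0]],     # distance 7
             dtype=np.uint8)
m = hamming_match(a, b)                      # [(0, 0, 1)]
```

The Hamming distance is far cheaper than the Euclidean distance used for float descriptors, which is the efficiency gain the abstract exploits.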
14

Madasu Hanmandlu, Abdul Ansari, Jaspreet Kour, Kunal Goyal and Rutvik Malekar. "Scale Invariant Feature Transform Based Fingerprint Corepoint Detection". Defence Science Journal 63, no. 4 (22.07.2013): 402–7. http://dx.doi.org/10.14429/dsj.63.2708.

15

Ye, Zhang, and Qu Hongsong. "Rotation invariant feature lines transform for image matching". Journal of Electronic Imaging 23, no. 5 (5.09.2014): 053002. http://dx.doi.org/10.1117/1.jei.23.5.053002.

16

Ding, Xintao, Yonglong Luo, Yunyun Yi, Biao Jie, Taochun Wang and Weixin Bian. "Orthogonal design for scale invariant feature transform optimization". Journal of Electronic Imaging 25, no. 05 (17.10.2016): 1. http://dx.doi.org/10.1117/1.jei.25.5.053030.

17

Zhong, Sheng-hua, Yan Liu and Qing-cai Chen. "Visual orientation inhomogeneity based scale-invariant feature transform". Expert Systems with Applications 42, no. 13 (August 2015): 5658–67. http://dx.doi.org/10.1016/j.eswa.2015.01.012.

18

Kasiselvanathan, M., V. Sangeetha and A. Kalaiselvi. "Palm pattern recognition using scale invariant feature transform". International Journal of Intelligence and Sustainable Computing 1, no. 1 (2020): 44. http://dx.doi.org/10.1504/ijisc.2020.104826.

19

Cheung, W., and G. Hamarneh. "$n$-SIFT: $n$-Dimensional Scale Invariant Feature Transform". IEEE Transactions on Image Processing 18, no. 9 (September 2009): 2012–21. http://dx.doi.org/10.1109/tip.2009.2024578.

20

Sahan, Ali Sahan, Nisreen Jabr, Ahmed Bahaaulddin and Ali Al-Itb. "Human identification using finger knuckle features". International Journal of Advances in Soft Computing and its Applications 14, no. 1 (28.03.2022): 88–101. http://dx.doi.org/10.15849/ijasca.220328.07.

Abstract:
Many studies indicate that the finger knuckle comprises unique features and can therefore be utilized in a biometric system to distinguish between people. In this paper, a combined global and local feature technique is proposed based on two descriptors: Chebyshev Fourier moments (CHFMs) and the Scale Invariant Feature Transform (SIFT). The CHFM descriptor captures global features, while the SIFT descriptor extracts local features. Each descriptor has its own advantages, so combining them produces distinctive features. Extensive experiments were carried out on the IIT-Delhi knuckle database to assess the accuracy of the proposed approach; their analysis shows that the suggested technique attains a 98% accuracy rate. Robustness against noise was also evaluated, and the results indicate that the proposed technique is robust to noise variation. Keywords: finger knuckle, biometric system, Chebyshev Fourier moments, scale invariant feature transform, IIT-Delhi knuckle database.
21

Chen, Baifan, Hong Chen, Baojun Song and Grace Gong. "TIF-Reg: Point Cloud Registration with Transform-Invariant Features in SE(3)". Sensors 21, no. 17 (27.08.2021): 5778. http://dx.doi.org/10.3390/s21175778.

Abstract:
Three-dimensional point cloud registration (PCReg) has a wide range of applications in computer vision, 3D reconstruction, and medical fields. Although numerous advances have been achieved in point cloud registration in recent years, large-scale rigid transformation remains a problem that most algorithms still cannot effectively handle. To solve this problem, we propose a point cloud registration method based on learning and transform-invariant features (TIF-Reg). Our algorithm includes four modules: the transform-invariant feature extraction module, deep feature embedding module, corresponding point generation module, and decoupled singular value decomposition (SVD) module. In the transform-invariant feature extraction module, we design TIF in SE(3) (the 3D rigid transformation space) containing a triangular feature and a local density feature for points. It fully exploits the transformation invariance of point clouds, making the algorithm highly robust to rigid transformation. The deep feature embedding module embeds TIF into a high-dimensional space using a deep neural network, further improving the expressive power of the features. The corresponding point cloud is generated using an attention mechanism in the corresponding point generation module, and the final transformation for registration is calculated in the decoupled SVD module. In our experiments, we first train and evaluate the TIF-Reg method on the ModelNet40 dataset. The results show that our method keeps the root mean squared error (RMSE) of rotation within 0.5∘ and the RMSE of translation close to 0 m, even when the rotation is up to [−180∘, 180∘] or the translation is up to [−20 m, 20 m]. We also test the generalization of our method on the TUM3D dataset using the model trained on ModelNet40. The results show that our method's errors are close to the experimental results on ModelNet40, which verifies its good generalization ability. All experiments prove that the proposed method is superior to state-of-the-art PCReg algorithms in terms of accuracy and complexity.
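The decoupled SVD module computes the final rigid transform from corresponding points. The standard closed-form solution for that step is the Kabsch algorithm, shown here as a generic numpy sketch rather than the authors' exact module:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t, via SVD (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known rotation about z and a translation from noiseless correspondences.
rng = np.random.default_rng(1)
src = rng.random((10, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_from_correspondences(src, dst)
```

With exact correspondences, the recovered `R` and `t` match the ground truth to numerical precision; in a full pipeline the correspondences would come from the attention-based matching step.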
22

Azeem, A., M. Sharif, J. H. Shah and M. Raza. "Hexagonal scale invariant feature transform (H-SIFT) for facial feature extraction". Journal of Applied Research and Technology 13, no. 3 (June 2015): 402–8. http://dx.doi.org/10.1016/j.jart.2015.07.006.

23

Kwon, Oh-Seol, and Yeong-Ho Ha. "Panoramic video using scale-invariant feature transform with embedded color-invariant values". IEEE Transactions on Consumer Electronics 56, no. 2 (May 2010): 792–98. http://dx.doi.org/10.1109/tce.2010.5506003.

24

Prasad K, Durga, Manjunathachari K and Giri Prasad M.N. "Orientation Feature Transform Model for Image Retrieval in Sketch Based Image Retrieval System". International Journal of Engineering & Technology 7, no. 2.24 (25.04.2018): 159. http://dx.doi.org/10.14419/ijet.v7i2.24.12022.

Abstract:
This paper focuses on image retrieval using a sketch-based image retrieval (SBIR) system. Its low-complexity image representation makes SBIR an attractive choice for next-generation applications in low-resource environments. The SBIR approach uses a geometric region representation to describe features for recognition; in the SBIR model, the represented features define the image. To improve SBIR recognition performance, this paper proposes a new invariant model, "orientation feature transform modeling". The approach enhances the invariance property and improves retrieval performance in the transform domain. The experimental results illustrate the significance of invariant orientation feature representation in SBIR over conventional models.
25

Chang, Wen-Yang, and Chih-Ping Tsai. "Illumination characteristics and image stitching for automatic inspection of bicycle part". Assembly Automation 34, no. 4 (9.09.2014): 342–48. http://dx.doi.org/10.1108/aa-09-2013-076.

Abstract:
Purpose – This study investigates the spectral illumination characteristics and geometric features of bicycle parts and proposes an image stitching method for their automatic visual inspection. Design/methodology/approach – Unrealistic color casts in feature inspection are removed using white balance for global adjustment. The scale-invariant feature transform (SIFT) is used to extract and detect image features for stitching. The Hough transform detects the circle parameters for the roundness of bicycle parts. Findings – Results showed that the maximum errors at 0°, 10°, 20°, 30°, 40° and 50° for the spectral illumination of white-light light-emitting diode arrays with differential shift displacements are 4.4, 4.2, 7.8, 6.8, 8.1 and 3.5 per cent, respectively. The deviation error of image stitching for the stem accessory is 2 pixels in the x and y coordinates. SIFT and RANSAC transform the stem image into local feature coordinates that are invariant to illumination change. Originality/value – This study can be applied to many fields of modern industrial manufacturing and provides useful information for automatic inspection and image stitching.
26

Wang, Yan Wei, and Hui Li Yu. "Medical Image Feature Matching Based on Wavelet Transform and SIFT Algorithm". Applied Mechanics and Materials 65 (June 2011): 497–502. http://dx.doi.org/10.4028/www.scientific.net/amm.65.497.

Abstract:
A feature matching algorithm based on the wavelet transform and SIFT is proposed in this paper. First, a biorthogonal wavelet transform is used to decompose the medical image into layers and to reconstruct the processed image. Then the SIFT (Scale Invariant Feature Transform) is applied to extract keypoints. Experimental results show that our algorithm compares favorably in compression ratio, matching speed, and image storage requirements, especially under tilt and rotation.
27

GENG, LICHUAN, SONGZHI SU, DONGLIN CAO and SHAOZI LI. "PERSPECTIVE-INVARIANT IMAGE MATCHING FRAMEWORK WITH BINARY FEATURE DESCRIPTOR AND APSO". International Journal of Pattern Recognition and Artificial Intelligence 28, no. 08 (December 2014): 1455011. http://dx.doi.org/10.1142/s0218001414550118.

Abstract:
A novel perspective-invariant image matching framework, denoted Perspective-Invariant Binary Robust Independent Elementary Features (PBRIEF), is proposed in this paper. First, we use a homographic transformation to simulate the distortion between two corresponding patches around the feature points. Then, binary descriptors are constructed by comparing the intensities of sample points surrounding the feature location. We transform the locations of the sample points with the simulated homographic matrices, to ensure that the intensities compared belong to truly corresponding pixels between the two image patches. Since the exact perspective transform matrix is unknown, an iterative procedure based on Adaptive Particle Swarm Optimization (APSO) is proposed to estimate the real transformation angles. Experimental results on five different datasets show that PBRIEF significantly outperforms existing methods on images with large viewpoint differences. Moreover, the efficiency of our framework is also improved compared with the Affine Scale Invariant Feature Transform (ASIFT).
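Transforming sample-point locations with a simulated homography, as in the first step above, amounts to a projective mapping in homogeneous coordinates. A generic sketch with hypothetical matrix values (here a pure translation, for easy checking):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to 2D points via homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                    # divide by w to return to 2D

# Hypothetical homography: translate by (+3, -2).
H = np.array([[1.0, 0.0,  3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
p = warp_points(H, np.array([[0.0, 0.0], [1.0, 1.0]]))   # [[3, -2], [4, -1]]
```

A general perspective matrix has a nontrivial bottom row, making the final division by `w` essential; that division is what distinguishes a homography from an affine warp.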
28

Su, Ching-Liang. "Object Identification by Signal Gain and Correlation". International Journal of Mathematical Models and Methods in Applied Sciences 16 (12.01.2022): 12–22. http://dx.doi.org/10.46300/9101.2022.16.3.

Abstract:
In this study, "ring rotation invariant transform" techniques are used to add more salient features to the original images. The ring rotation invariant transform solves the image rotation problem by transferring a ring signal to several signal vectors in the complex domain, thereby generating invariant magnitudes. Matrix correlation is employed to combine these magnitudes into various discriminators, which are used to identify objects. To manage the image-shifting problem, each pixel in the sample image is compared with the surrounding pixels of the unknown image. The comparison in this study is performed on a pixel-to-pixel basis.
29

MA, KUN, and XIAOOU TANG. "EXPERIMENTAL STUDY OF TRANSLATION-INVARIANT DWT FACE FEATURE ESTIMATION". International Journal of Wavelets, Multiresolution and Information Processing 02, no. 03 (September 2004): 313–21. http://dx.doi.org/10.1142/s021969130400055x.

Abstract:
In this paper, we conduct a series of experiments to evaluate the translation invariant property of a set of discrete wavelet features in a face graph. Using local-area power spectrum estimation based on discrete wavelet transform, we compute a feature vector that possesses both efficient space-frequency structure and translation invariant properties.
30

Guo, Hongjun, and Lili Chen. "An Image Similarity Invariant Feature Extraction Method Based on Radon Transform". International Journal of Circuits, Systems and Signal Processing 15 (8.04.2021): 288–96. http://dx.doi.org/10.46300/9106.2021.15.33.

Abstract:
With the advancement of computer technology, image recognition has been applied more and more widely, and feature extraction is a core problem of image recognition. Image recognition classifies a processed image and identifies the category it belongs to: after selecting the features to be extracted, it measures the necessary parameters and classifies the image according to the result. Better recognition requires structural analysis and description of the entire image, and improved image understanding through multi-object structural relationships. The essence of the Radon transform is to reconstruct the original N-dimensional image in N-dimensional space from the (N-1)-dimensional projection data of the image taken in different directions; the Radon transform of an image extracts features in the transform domain by mapping the image space to a parameter space. This paper studies the inverse problem of the Radon transform of an upper semicircular curve with compact support that is continuous on its support: when the center and radius of the circular curve vary within a certain range, the inversion is unique once the Radon transform along the upper semicircular curve is known. To further improve the robustness and discrimination of the extracted features under image translation and proportional scaling, and to remove the effects of translation and scale, this paper proposes an image similarity-invariant feature extraction method based on the Radon transform, constructs Radon moment invariants, and demonstrates the descriptive capacity of the shape feature extraction method by computing the intra-class ratio. The experimental results show that the method overcomes the cracks, overlapping, fuzziness, and fake edges that appear when features are extracted alone; it can accurately extract the corners of a digital image and is robust to noise. It effectively improves the accuracy and continuity of complex image feature extraction.
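Not from the paper itself, but as a rough illustration of the idea: a discrete Radon-style feature can be built by summing image intensities along a set of directions. The sketch below (assumptions: SciPy available, bilinear interpolation for rotation) collects one projection per angle.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_projections(image, angles):
    """Collect 1-D projections of `image` at the given angles (degrees).

    Each projection sums intensities along one direction, the discrete
    analogue of the Radon transform used here for feature extraction.
    """
    return np.stack([
        rotate(image, angle, reshape=False, order=1).sum(axis=0)
        for angle in angles
    ])

# A centred square: its 0-degree and 90-degree projections coincide by symmetry.
img = np.zeros((33, 33))
img[12:21, 12:21] = 1.0
sinogram = radon_projections(img, angles=[0.0, 45.0, 90.0, 135.0])
```

Moment invariants would then be computed on rows of `sinogram` rather than on raw pixels.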
Styles: APA, Harvard, Vancouver, ISO, etc.
31

VYAS, VIBHA S., and PRITI P. REGE. "GEOMETRIC TRANSFORM INVARIANT TEXTURE ANALYSIS WITH MODIFIED CHEBYSHEV MOMENTS BASED ALGORITHM". International Journal of Image and Graphics 09, no. 04 (October 2009): 559–74. http://dx.doi.org/10.1142/s0219467809003587.

Full text
Abstract:
Texture-based geometric invariance, which comprises rotation, scale, and translation (RST) invariance, finds application in various areas including industrial inspection, estimation of object range and orientation, shape analysis, satellite imaging, and medical diagnosis. Moment-based techniques, apart from being computationally simple compared to other RST-invariant texture operators, are also robust in the presence of noise. Zernike moment (ZM)-based techniques are among the well-established methods used for texture identification. Because ZMs are continuous moments, the discretization required for implementation introduces errors; this error, calculated as the difference between theoretically computed and simulated values, proves prominent for fine textures. Therefore, a novel approach to detect RST-invariant textures in an image is presented in this paper. The approach computes discrete Chebyshev moments (CMs) of log-polar transformed images to achieve rotation and scale invariance; the image is made translation invariant by shifting it to its centroid. The data are collected as samples from the Brodatz and Vistex data sets. Zernike moments and their modifications, along with the proposed scheme, are applied to the same data, and performance is evaluated in terms of RST invariance as well as noise sensitivity and redundancy. The performance is also compared with circular Mellin feature extractors.
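As an illustration only (not the paper's implementation): the log-polar resampling step that turns rotation and scaling into shifts, centred on the intensity centroid to remove translation, can be sketched in NumPy/SciPy as follows. Grid sizes and interpolation order are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(image, n_radii=32, n_angles=64):
    """Resample `image` onto a log-polar grid centred on its intensity
    centroid.  Rotation of the input becomes a circular shift along the
    angle axis, scaling a shift along the log-radius axis, and centring
    on the centroid removes translation."""
    h, w = image.shape
    total = image.sum()
    cy = (image.sum(axis=1) @ np.arange(h)) / total
    cx = (image.sum(axis=0) @ np.arange(w)) / total
    max_r = min(h, w) / 2.0
    radii = np.exp(np.linspace(0.0, np.log(max_r), n_radii))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(image, coords, order=1, mode="constant")

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
lp = log_polar(img)          # shape (32, 64): radius x angle
```

Chebyshev moments would then be computed on `lp` instead of the original image.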
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Al-khafaji, Suhad Lateef, Jun Zhou, Ali Zia and Alan Wee-Chung Liew. "Spectral-Spatial Scale Invariant Feature Transform for Hyperspectral Images". IEEE Transactions on Image Processing 27, no. 2 (February 2018): 837–50. http://dx.doi.org/10.1109/tip.2017.2749145.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Jo, J., G. Yoon, D. Ko, E. C. Lee, S. Kim and M. Y. Choi. "Infrared Finger Biometric Using Scale Invariant Feature Transform Correspondences". Advanced Science Letters 21, no. 3 (1.03.2015): 365–71. http://dx.doi.org/10.1166/asl.2015.5789.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Ahmed, Farid, and Mohammad A. Karim. "Filter-feature-based rotation-invariant joint Fourier transform correlator". Applied Optics 34, no. 32 (10.11.1995): 7556. http://dx.doi.org/10.1364/ao.34.007556.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Hwang, Sung-Wook, Kayoko Kobayashi, Shengcheng Zhai and Junji Sugiyama. "Automated identification of Lauraceae by scale-invariant feature transform". Journal of Wood Science 64, no. 2 (12.12.2017): 69–77. http://dx.doi.org/10.1007/s10086-017-1680-x.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Li, Shih-An, Wei-Yen Wang, Wei-Zheng Pan, Chen-Chien James Hsu and Cheng-Kai Lu. "FPGA-Based Hardware Design for Scale-Invariant Feature Transform". IEEE Access 6 (2018): 43850–64. http://dx.doi.org/10.1109/access.2018.2863019.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
37

A, Kalaiselvi, Sangeetha V and Kasiselvanathan M. "Palm Pattern Recognition using Scale Invariant Feature Transform (SIFT)". International Journal of Intelligence and Sustainable Computing 1, no. 1 (2018): 1. http://dx.doi.org/10.1504/ijisc.2018.10023048.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Alkaff, Muhammad, Husnul Khatimi, Nur Lathifah and Yuslena Sari. "Sasirangan Motifs Classification using Scale-Invariant Feature Transform (SIFT) and Support Vector Machine (SVM)". MATEC Web of Conferences 280 (2019): 05023. http://dx.doi.org/10.1051/matecconf/201928005023.

Full text
Abstract:
Sasirangan is a traditional cloth from Indonesia, specifically from South Borneo. It has many variations of motifs, each pattern carrying a different meaning. This paper proposes a prototype of Sasirangan motif classification using four (4) types of Sasirangan motifs, namely Hiris Gagatas, Gigi Haruan, Kulat Kurikit, and Hiris Pudak. We used primary data of Sasirangan images collected from Kampung Sasirangan, Banjarmasin, South Kalimantan. The images are processed using the Scale-Invariant Feature Transform (SIFT) to extract their features, and the extracted feature vectors are then classified using a Support Vector Machine (SVM). The results show that SIFT feature extraction with SVM classification is able to classify Sasirangan motifs with an overall accuracy of 95%.
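A common way to feed variable-length sets of SIFT descriptors into an SVM is a bag-of-visual-words histogram. The sketch below is not from the paper: it assumes descriptors are already extracted (in practice, e.g. via OpenCV's SIFT) and uses a toy two-word codebook.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantise local descriptors (e.g. SIFT vectors) against a visual
    codebook and return a normalised bag-of-visual-words histogram,
    the fixed-length feature a classifier such as an SVM consumes."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                      # nearest visual word
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])    # toy 2-word vocabulary
descriptors = np.array([[0.1, 0.0], [9.8, 10.1], [0.2, 0.1], [10.0, 9.9]])
hist = bovw_histogram(descriptors, codebook)       # -> [0.5, 0.5]
```

One such histogram per image becomes one training row for the SVM.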
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Xu, Mengxi, Yingshu Lu and Xiaobin Wu. "Annular Spatial Pyramid Mapping and Feature Fusion-Based Image Coding Representation and Classification". Wireless Communications and Mobile Computing 2020 (11.09.2020): 1–9. http://dx.doi.org/10.1155/2020/8838454.

Full text
Abstract:
Conventional image classification models commonly adopt a single feature vector to represent informative contents. However, a single image feature system can hardly extract the entirety of the information contained in images, and traditional encoding methods have a large loss of feature information. Aiming to solve this problem, this paper proposes a feature fusion-based image classification model. This model combines the principal component analysis (PCA) algorithm, processed scale invariant feature transform (P-SIFT) and color naming (CN) features to generate mutually independent image representation factors. At the encoding stage of the scale-invariant feature transform (SIFT) feature, the bag-of-visual-word model (BOVW) is used for feature reconstruction. Simultaneously, in order to introduce the spatial information to our extracted features, the rotation invariant spatial pyramid mapping method is introduced for the P-SIFT and CN feature division and representation. At the stage of feature fusion, we adopt a support vector machine with two kernels (SVM-2K) algorithm, which divides the training process into two stages and finally learns the knowledge from the corresponding kernel matrix for the classification performance improvement. The experiments show that the proposed method can effectively improve the accuracy of image description and the precision of image classification.
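The PCA step mentioned above reduces feature dimensionality before fusion. As a generic illustration (not the paper's pipeline), PCA can be computed via an SVD of the mean-centred data matrix:

```python
import numpy as np

def pca_reduce(features, k):
    """Project row-wise feature vectors onto their top-k principal axes
    (PCA via SVD of the mean-centred data matrix)."""
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))   # 100 samples, 16-dim features
reduced = pca_reduce(feats, k=4)     # shape (100, 4)
```

The leading components carry the most variance, so `k` trades compactness against information loss.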
Styles: APA, Harvard, Vancouver, ISO, etc.
40

WU, Yin-chu, and Rong MA. "Image feature extraction and matching of color-based scale-invariant feature transform". Journal of Computer Applications 31, no. 4 (8.06.2011): 1024–26. http://dx.doi.org/10.3724/sp.j.1087.2011.01024.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Chen, Wenyu, and Yanli Zhao. "Improve the Image Feature-Matching Based on Scale Invariant Feature Transform Algorithm". Advanced Science Letters 6, no. 1 (15.03.2012): 770–73. http://dx.doi.org/10.1166/asl.2012.2255.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
42

LI, QIAOLIANG, HUISHENG ZHANG and TIANFU WANG. "SCALE INVARIANT FEATURE MATCHING USING ROTATION-INVARIANT DISTANCE FOR REMOTE SENSING IMAGE REGISTRATION". International Journal of Pattern Recognition and Artificial Intelligence 27, no. 02 (March 2013): 1354004. http://dx.doi.org/10.1142/s0218001413540049.

Full text
Abstract:
Scale invariant feature transform (SIFT) has been widely used in image matching. But when SIFT is introduced into the registration of remote sensing images, keypoint pairs that are expected to match are often assigned two different main-orientation values owing to the significant difference in image intensity between remote sensing image pairs, so many incorrect keypoint matches appear. This paper presents a method that uses a rotation-invariant distance instead of the Euclidean distance to match the scale-invariant feature vectors associated with the keypoints. In the proposed method, the feature vectors are reorganized into feature matrices, and the fast Fourier transform (FFT) is introduced to compute the rotation-invariant distance between the matrices. Many more correct matches are obtained by the proposed method, since the rotation-invariant distance is independent of the keypoints' main orientation. Experimental results indicate that the proposed method improves match performance compared to other state-of-the-art methods in terms of correct match rate and alignment accuracy.
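The core trick, comparing descriptors up to a cyclic shift via the FFT, can be illustrated on 1-D vectors. This is a simplified sketch, not the paper's exact matrix formulation: it finds the smallest Euclidean distance between one vector and all cyclic shifts of another in O(n log n).

```python
import numpy as np

def cyclic_invariant_distance(a, b):
    """Smallest Euclidean distance between `a` and every cyclic shift of
    `b`, computed with the FFT instead of trying all n shifts explicitly.
    For real vectors, corr[k] = a . roll(b, k) for all k at once."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    d2 = (a @ a) + (b @ b) - 2.0 * corr   # ||a - roll(b, k)||^2 for all k
    return np.sqrt(max(d2.min(), 0.0))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.roll(a, 2)                        # same vector, cyclically shifted
d = cyclic_invariant_distance(a, b)      # -> 0.0 (up to rounding)
```

Because the minimum runs over all shifts, a rotation that merely reindexes the descriptor no longer changes the distance.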
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Javeed.S, Imran, Aanandha Saravanan and Rajendra Kumar. "Efficient Biometric Recognition Methodology using Guided Filtering and SIFT Feature Matching". International Journal of Engineering & Technology 7, no. 3.1 (4.08.2018): 23. http://dx.doi.org/10.14419/ijet.v7i3.1.16789.

Full text
Abstract:
A novel infrared finger vein biometric identification method is proposed using a linear Gabor filter with a guidance image and SIFT feature matching. The linear Gabor filter with a guidance image extracts the finger vein pattern without a segmentation step and also performs well on poor-quality images suffering from low contrast, illumination imbalance, or noise. First, we use the guided linear Gabor filter for ridge detection, as with a simple linear Gabor filter, while also enhancing the image through an edge-preserving smoothing operation. Second, we use SIFT feature matching for verification: SIFT (Scale Invariant Feature Transform) extracts features that possess rotation and shift invariance, providing a better matching rate. The simulation analysis shows that the proposed system provides effective features for finger vein biometric identification.
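For illustration only (parameters and form are generic, not the paper's exact filter): a real-valued 2-D Gabor kernel is a Gaussian envelope modulating an oriented cosine carrier, which is what makes it respond to line-like structures such as vein ridges.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    cosine carrier oriented at angle `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # coordinates rotated
    yr = -x * np.sin(theta) + y * np.cos(theta)   # into the filter frame
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0)
# k has shape (21, 21) and peaks at 1.0 in the centre.
```

Convolving the image with a bank of such kernels at several orientations highlights ridges before matching.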
Styles: APA, Harvard, Vancouver, ISO, etc.
44

SUGI, T., DEJEY and R. S. RAJESH. "GEOMETRIC ATTACK RESISTANT ROBUST IMAGE WATERMARKING SCHEME". International Journal of Information Acquisition 09, no. 01 (March 2013): 1350008. http://dx.doi.org/10.1142/s0219878913500083.

Full text
Abstract:
A new watermarking approach based on affine Legendre moment invariants (ALMIs) and local characteristic regions (LCRs), which allows watermark detection and extraction under affine transformation attacks, is presented in this paper. It is a non-blind watermarking scheme. The original color image is converted into the HSV color space and divided into four parts; an LCR is constructed for each part, and a set of affine invariants based on Legendre moments is derived on the LCRs. These invariants can be used to estimate the affine transform coefficients on the LCRs. ALMIs are used for watermark embedding, detection, and extraction, as they provide the synchronization and invariant features necessary for a robust watermarking scheme. The proposed scheme shows better resistance to geometric distortion, cropping, filtering, compression, and additive noise than the existing ALMI-based scheme [Alghoniemy, M. and Tewfik, A. H. [2004] "Geometric invariance in image watermarking," IEEE Trans. Image Process. 13(2), 145–153] and the affine geometric moment invariant (AGMI)-based scheme [Seo, J. S. and Yoo, C. D. [2006] "Image watermarking based on invariant regions of scale-space representation," IEEE Trans. Signal Process. 54(4), 1537–1549].
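The building block behind ALMIs is the Legendre moment of an image. The sketch below computes only the raw order-(p, q) moment on a [-1, 1]² sampling grid, not the affine-invariant combinations the paper derives; the normalisation and grid spacing are standard choices, assumed rather than taken from the paper.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moment(image, p, q):
    """Order-(p, q) Legendre moment of an image sampled on [-1, 1]^2,
    approximated by a Riemann sum over the pixel grid."""
    h, w = image.shape
    x = np.linspace(-1.0, 1.0, w)
    y = np.linspace(-1.0, 1.0, h)
    px = legval(x, [0.0] * p + [1.0])   # Legendre polynomial P_p(x)
    py = legval(y, [0.0] * q + [1.0])   # Legendre polynomial P_q(y)
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    dxdy = (2.0 / (w - 1)) * (2.0 / (h - 1))
    return norm * (py @ image @ px) * dxdy

flat = np.ones((101, 101))
m00 = legendre_moment(flat, 0, 0)   # close to 1 for a unit-valued image
```

Because Legendre polynomials are orthogonal, moments of different orders capture non-redundant shape information.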
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Wang, Peng-Fei, Xiao-Qing Luo, Xin-Yi Li and Zhan-Cheng Zhang. "Image fusion based on shift invariant shearlet transform and stacked sparse autoencoder". Journal of Algorithms & Computational Technology 12, no. 2 (9.01.2018): 73–84. http://dx.doi.org/10.1177/1748301817741001.

Full text
Abstract:
Stacked sparse autoencoder is an efficient unsupervised feature extraction method, which has excellent ability in representation of complex data. Besides, shift invariant shearlet transform is a state-of-the-art multiscale decomposition tool, which is superior to traditional tools in many aspects. Motivated by the advantages mentioned above, a novel stacked sparse autoencoder and shift invariant shearlet transform-based image fusion method is proposed. First, the source images are decomposed into low- and high-frequency subbands by shift invariant shearlet transform; second, a two-layer stacked sparse autoencoder is adopted as a feature extraction method to get deep and sparse representation of high-frequency subbands; third, a stacked sparse autoencoder feature-based choose-max fusion rule is proposed to fuse the high-frequency subband coefficients; then, a weighted average fusion rule is adopted to merge the low-frequency subband coefficients; finally, the fused image is obtained by inverse shift invariant shearlet transform. Experimental results show the proposed method is superior to the conventional methods both in terms of subjective and objective evaluations.
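The choose-max fusion rule for the high-frequency subbands is simple to state in code. This sketch is generic: in the paper the activity measure comes from stacked-sparse-autoencoder features, whereas here any per-pixel saliency map (e.g. coefficient magnitude) stands in for it.

```python
import numpy as np

def choose_max_fuse(coeffs_a, coeffs_b, activity_a, activity_b):
    """Choose-max fusion for high-frequency subbands: at every position
    keep the coefficient whose activity measure is larger."""
    return np.where(activity_a >= activity_b, coeffs_a, coeffs_b)

a = np.array([[3.0, -1.0], [0.5, 2.0]])
b = np.array([[1.0, -4.0], [2.5, 0.1]])
fused = choose_max_fuse(a, b, np.abs(a), np.abs(b))
# fused == [[3.0, -4.0], [2.5, 2.0]]: the larger-magnitude coefficient wins
```

The low-frequency subbands are merged by weighted averaging instead, since they carry overall brightness rather than detail.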
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Liang, Dong, Pu Yan, Ming Zhu, Yizheng Fan and Kui Wang. "Spectral matching algorithm based on nonsubsampled contourlet transform and scale-invariant feature transform". Journal of Systems Engineering and Electronics 23, no. 3 (June 2012): 453–59. http://dx.doi.org/10.1109/jsee.2012.00057.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Umale, Prajakta, Aboli Patil, Chanchal Sahani, Anisha Gedam and Kajal Kawale. "PLANER OBJECT DETECTION USING SURF AND SIFT METHOD". International Journal of Engineering Applied Sciences and Technology 6, no. 11 (1.03.2022): 36–39. http://dx.doi.org/10.33564/ijeast.2022.v06i11.008.

Full text
Abstract:
Object detection refers to the capability of computers and software to locate objects in an image or scene and identify each object; it is a computer vision technique that identifies and locates objects within an image or video. In this study, we compare and analyze the Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) and propose various geometric transformations. To increase accuracy, the proposed system first performs separation of the image by reducing the pixel size using SIFT, and key points are then picked around feature description regions. After that, one more geometric transformation, rotation, is applied to improve the visual appearance of the image, and SURF features, which highlight the high pixel values of the image, are computed. Finally, two different images are compared, and by matching all the features of the object between the images, the desired object is detected in the scene.
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Rajasekhar, D., T. Jayachandra Prasad and K. Soundararajan. "An affine view and illumination invariant iterative image matching approach for face recognition". International Journal of Engineering & Technology 7, no. 2.8 (19.03.2018): 42. http://dx.doi.org/10.14419/ijet.v7i2.8.10321.

Full text
Abstract:
Feature detection and image matching constitute two primary tasks in photogrammetry and have applications in a number of fields, one of which is face recognition. The critical nature of this application demands that the image matching algorithm used for feature recognition in facial recognition be robust and fast. The proposed method uses affine transforms to recognize the descriptors, which are classified by means of Bayes' theorem. This paper demonstrates the suitability of the proposed image matching algorithm for use in face recognition applications. The Yale face data set is used for validation, and the results are compared with a SIFT (Scale Invariant Feature Transform)-based face recognition approach.
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Zhang, Hua-Zhen, Dong-Won Kim, Tae-Koo Kang and Myo-Taeg Lim. "MIFT: A Moment-Based Local Feature Extraction Algorithm". Applied Sciences 9, no. 7 (11.04.2019): 1503. http://dx.doi.org/10.3390/app9071503.

Full text
Abstract:
We propose a local feature descriptor based on moments. Although conventional scale invariant feature transform (SIFT)-based algorithms generally use the difference of Gaussians (DoG) for feature extraction, they remain sensitive to more complicated deformations. To solve this problem, we propose MIFT, an invariant feature transform algorithm based on the modified discrete Gaussian-Hermite moment (MDGHM). Taking advantage of the MDGHM's strong ability to represent image information, MIFT uses an MDGHM-based pyramid for feature extraction, which can extract more distinctive extrema than the DoG, together with MDGHM-based magnitude and orientation for feature description. We compared the performance of the proposed MIFT method with current best-practice methods for six image deformation types and confirmed that MIFT's matching accuracy is superior to that of other SIFT-based methods.
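For reference, the DoG baseline that MIFT replaces can be sketched in a few lines: blur the image at increasing scales and subtract adjacent blurs. This is a generic illustration (sigma values are arbitrary), not MIFT's moment-based pyramid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigmas):
    """Difference-of-Gaussians stack: subtract adjacent Gaussian blurs.
    SIFT-style detectors find keypoints as local extrema across this
    stack; MIFT replaces it with an MDGHM-based pyramid."""
    blurred = [gaussian_filter(image, sigma) for sigma in sigmas]
    return np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])

rng = np.random.default_rng(1)
img = rng.random((32, 32))
dog = dog_stack(img, sigmas=[1.0, 1.6, 2.56, 4.1])   # shape (3, 32, 32)
```

Each DoG slice approximates a band-pass response at one scale, which is why its extrema localise blob-like features.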
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Afifi, Ahmed J., and Wesam M. Ashour. "Image Retrieval Based on Content Using Color Feature". ISRN Computer Graphics 2012 (15.03.2012): 1–11. http://dx.doi.org/10.5402/2012/248285.

Full text
Abstract:
Content-based image retrieval from large resources has become an area of wide interest in many applications. In this paper we present a CBIR system that uses Ranklet Transform and the color feature as a visual feature to represent the images. Ranklet Transform is proposed as a preprocessing step to make the image invariant to rotation and any image enhancement operations. To speed up the retrieval time, images are clustered according to their features using k-means clustering algorithm.
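The k-means clustering step used to speed up retrieval is standard Lloyd's iteration. The following is a minimal generic sketch (toy 2-D points stand in for the paper's colour features):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means: alternately assign points to the nearest
    centre and move each centre to the mean of its members.  Grouping
    images this way lets retrieval search only the matching cluster."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

pts = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels, centers = kmeans(pts, k=2)
```

At query time, the query feature is compared only against images in its nearest cluster, which cuts retrieval time roughly by a factor of k.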
Styles: APA, Harvard, Vancouver, ISO, etc.