A selection of scholarly literature on the topic "Scale Invariant Feature Descriptor"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Scale Invariant Feature Descriptor".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the corresponding data are available in the metadata.

Journal articles on the topic "Scale Invariant Feature Descriptor"

1

Sahan, Ali Sahan, Nisreen Jabr, Ahmed Bahaaulddin, and Ali Al-Itb. "Human identification using finger knuckle features." International Journal of Advances in Soft Computing and its Applications 14, no. 1 (March 28, 2022): 88–101. http://dx.doi.org/10.15849/ijasca.220328.07.

Abstract:
Many studies report that the finger knuckle comprises unique features; it can therefore be utilized in a biometric system to distinguish between people. In this paper, a combined global and local features technique is proposed based on two descriptors: Chebyshev Fourier moments (CHFMs) and the Scale Invariant Feature Transform (SIFT). The CHFMs descriptor is used to obtain global features, while the SIFT descriptor is utilized to extract local features. Each of these descriptors has its advantages; combining them therefore produces distinctive features. Many experiments have been carried out on the IIT-Delhi knuckle database to assess the accuracy of the proposed approach. The analysis of these extensive experiments shows that the suggested technique reaches a 98% accuracy rate. Furthermore, robustness against noise has been evaluated, and the results lead to the conclusion that the proposed technique is robust to noise variation. Keywords: finger knuckle, biometric system, Chebyshev Fourier moments, scale invariant feature transform, IIT-Delhi knuckle database.
2

Xu, Xinggui, Ping Yang, Bing Ran, Hao Xian, and Yong Liu. "Long-distance deformation object recognition by integrating contour structure and scale-invariant heat kernel signature." Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 3241–57. http://dx.doi.org/10.3233/jifs-191649.

Abstract:
The tough challenges of object recognition in long-distance scenes involve constructing contour-shape features that are invariant to deformation. In this work, an effective contour shape descriptor integrating critical-point structure and the Scale-invariant Heat Kernel Signature (SI-HKS) is proposed for long-distance object recognition. We first propose a general feature fusion model. Then, we capture the object contour structure feature with the Critical-points Inner-distance Shape Context (CP-IDSC). Meanwhile, we incorporate the SI-HKS to capture the local deformation-invariant properties of 2D shape. Based on the integration of these two feature descriptors, the fusion descriptor is compacted by mapping it into a low-dimensional subspace using bags-of-features, allowing for efficient recognition with a Bayesian classifier. Extensive experiments on synthetic turbulence-degraded shapes and real-life infrared images show that the proposed method outperforms the compared approaches in terms of recognition precision and robustness.
3

Diaz-Escobar, Julia, Vitaly Kober, and Jose A. Gonzalez-Fraga. "LUIFT: LUminance Invariant Feature Transform." Mathematical Problems in Engineering 2018 (October 28, 2018): 1–17. http://dx.doi.org/10.1155/2018/3758102.

Abstract:
An illumination-invariant method for computing local feature points and descriptors, referred to as the LUminance Invariant Feature Transform (LUIFT), is proposed. The method helps to extract the most significant local features in images degraded by nonuniform illumination, geometric distortions, and heavy scene noise. The proposed method utilizes image phase information rather than the intensity variations used by most state-of-the-art descriptors; it is therefore robust to nonuniform illumination and noise degradation. In this work, we first use the monogenic scale-space framework to compute the local phase, orientation, energy, and phase congruency from the image at different scales. Then, a modified Harris corner detector is applied to compute the feature points of the image using the monogenic signal components. The final descriptor is created from histograms of oriented gradients of phase congruency. Computer simulation results show that the proposed method yields superior feature detection and matching performance under illumination changes, noise degradation, and slight geometric distortions compared with state-of-the-art descriptors.
4

El Chakik, Abdallah, Abdul Rahman El Sayed, Hassan Alabboud, and Amer Bakkach. "An invariant descriptor map for 3D objects matching." International Journal of Engineering & Technology 9, no. 1 (January 23, 2020): 59. http://dx.doi.org/10.14419/ijet.v9i1.29918.

Abstract:
Meshes and point clouds are traditionally used to represent and match 3D shapes. The matching problem can be formulated as finding the best one-to-one correspondence between featured regions of two shapes. This paper presents an efficient and robust 3D matching method that uses vertex descriptor detection to define feature regions and an optimization approach for region matching. To do so, we compute an invariant shape descriptor map based on 3D surface patches calculated using Zernike coefficients. We then propose a multi-scale descriptor map to improve the quality of the measured descriptor map and to deal with noise. In addition, we introduce a linear algorithm for feature region segmentation according to the descriptor map. The matching problem is then modelled as a sub-graph isomorphism problem, a combinatorial optimization problem that matches feature regions while preserving the geometry. Finally, we show the robustness and stability of our method through many experimental results with respect to scaling, noise, rotation, and translation.
5

Ajayi, O. G. "Performance Analysis of Selected Feature Descriptors Used for Automatic Image Registration." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 21, 2020): 559–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-559-2020.

Abstract:
Automatic detection and extraction of corresponding features is crucial in the development of an automatic image registration algorithm. Different feature descriptors have been developed and implemented in image registration and other disciplines. These descriptors affect the speed of feature extraction and the number of extracted conjugate features, which in turn affect the processing speed and overall accuracy of the registration scheme. This article reviews the performance of the most widely implemented feature descriptors in an automatic image registration scheme. Ten (10) descriptors were selected and analysed under seven (7) conditions, viz. invariance to rotation, scale and zoom, robustness, repeatability, localization and efficiency, using UAV-acquired images. The analysis shows that, although four (4) descriptors performed better than the other six (6), no single feature descriptor can be affirmed to be the best, as different descriptors perform differently under different conditions. The Modified Harris and Stephen Corner Detector (MHCD) proved to be invariant to scale and zoom and excellent in robustness, repeatability, localization and efficiency, but it is variant to rotation. The Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Maximally Stable Extremal Region (MSER) algorithms proved to be invariant to scale, zoom and rotation, and very good in terms of repeatability, localization and efficiency, though MSER proved to be less robust than SIFT and SURF. The implication of these findings is that the choice of feature descriptors must be informed by the imaging conditions faced by the image registration analyst.
(A short, illustrative detector-comparison sketch appears after this list of journal articles.)
6

Tikhomirova, T. A., G. T. Fedorenko, K. M. Nazarenko, and E. S. Nazarenko. "LEFT: Local Edge Features Transform." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 189 (March 2020): 11–18. http://dx.doi.org/10.14489/vkit.2020.03.pp.011-018.

Abstract:
To detect point correspondences between images or 3D scenes, local texture descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-Up Robust Features), BRIEF (Binary Robust Independent Elementary Features), and others are usually used. Formally they provide invariance to image rotation and scale, but these properties are achieved only approximately, owing to the discrete number of orientations and scales stored in the descriptor. Feature points preferred by such descriptors usually do not belong to actual object boundaries in 3D scenes and so are hard to use in epipolar relationships. At the same time, linking feature points to large-scale lines and edges is preferable for SLAM (Simultaneous Localization And Mapping) tasks, because their appearance is the most resistant to daily, seasonal and weather variations. In this paper, an original feature point descriptor for edge images, LEFT (Local Edge Features Transform), is proposed. LEFT accumulates the directions and contrasts of alternative straight segments tangent to lines and edges in the vicinity of feature points. Owing to this structure, the mutual orientation of LEFT descriptors is evaluated and taken into account directly at the comparison stage. LEFT descriptors adapt to the shape of contours in the vicinity of feature points, so they can be used to analyze local and global geometric distortions of various natures. The article presents the results of comparative testing of LEFT against common texture-based descriptors and considers alternative ways of representing them in a computer vision system.
7

Zhang, Wanyuan, Tian Zhou, Chao Xu, and Meiqin Liu. "A SIFT-Like Feature Detector and Descriptor for Multibeam Sonar Imaging." Journal of Sensors 2021 (July 15, 2021): 1–14. http://dx.doi.org/10.1155/2021/8845814.

Abstract:
Multibeam imaging sonar has become an increasingly important tool in the field of underwater object detection and description. In recent years, the scale-invariant feature transform (SIFT) algorithm has been widely adopted to obtain stable features of objects in sonar images but does not perform well on multibeam sonar images due to its sensitivity to speckle noise. In this paper, we introduce MBS-SIFT, a SIFT-like feature detector and descriptor for multibeam sonar images. This algorithm contains a feature detector followed by a local feature descriptor. A new gradient definition robust to speckle noise is presented to detect extrema in scale space, and then, interest points are filtered and located. It is also used to assign orientation and generate descriptors of interest points. Simulations and experiments demonstrate that the proposed method can capture features of underwater objects more accurately than existing approaches.
8

Gao, Junchai, and Zhen Sun. "An Improved ASIFT Image Feature Matching Algorithm Based on POS Information." Sensors 22, no. 20 (October 12, 2022): 7749. http://dx.doi.org/10.3390/s22207749.

Abstract:
The affine scale-invariant feature transform (ASIFT) algorithm is a feature extraction algorithm with affine and scale invariance, suitable for image feature matching with unmanned aerial vehicles (UAVs). However, the matching process suffers from problems such as low efficiency and mismatching. To improve matching efficiency, the proposed algorithm first simulates image distortion based on position and orientation system (POS) information from real-time UAV measurements, reducing the number of simulated images. Then, the scale-invariant feature transform (SIFT) algorithm is used for feature point detection, and the extracted feature points are combined with the binary robust invariant scalable keypoints (BRISK) descriptor to generate a binary feature descriptor, which is matched using the Hamming distance. Finally, to improve the matching accuracy of the UAV images, a false-match elimination algorithm based on random sample consensus (RANSAC) is proposed. Across four groups of experiments, the proposed algorithm is compared with SIFT and ASIFT. The results show that the algorithm optimizes the matching effect and improves the matching speed.
(A generic code sketch of this kind of detect-describe-match-verify pipeline appears after this list of journal articles.)
9

Barajas-García, Carolina, Selene Solorza-Calderón, and Everardo Gutiérrez-López. "Scale, translation and rotation invariant Wavelet Local Feature Descriptor." Applied Mathematics and Computation 363 (December 2019): 124594. http://dx.doi.org/10.1016/j.amc.2019.124594.

10

Ferraz, Carolina Toledo, Osmando Pereira, Marcos Verdini Rosa, and Adilson Gonzaga. "Object Recognition Based on Bag of Features and a New Local Pattern Descriptor." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 08 (December 2014): 1455010. http://dx.doi.org/10.1142/s0218001414550106.

Abstract:
Bag of Features (BoF) has gained a lot of interest in computer vision. A visual codebook built from robust appearance descriptors extracted from local image patches is an effective means of texture analysis and scene classification. This paper presents a new method for local feature description based on gray-level difference mapping called the Mean Local Mapped Pattern (M-LMP). The proposed descriptor is robust to image scaling, rotation, illumination and partial viewpoint changes. The training set is composed of rotated and scaled images with changes in illumination and viewpoint; the test set is composed of rotated and scaled images. The proposed descriptor captures smaller differences between image pixels more effectively than similar descriptors. In our experiments, we implemented an object recognition system based on the M-LMP and compared our results to the Center-Symmetric Local Binary Pattern (CS-LBP) and the Scale-Invariant Feature Transform (SIFT). The results for object classification were analyzed in a BoF methodology and show that our descriptor performs better than these two previously published methods.
(An illustrative sketch of a generic BoF pipeline appears after this list of journal articles.)
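
The survey in entry 5 compares how different detectors trade off speed, repeatability and invariance. Purely as an illustration of how such a comparison can be set up (not the protocol of the cited article), the following sketch runs several detectors available in OpenCV, some of which appear in these entries, over a single image and reports keypoint counts and timings; the file name img1.png is a placeholder.

```python
# Rough comparison of keypoint detectors (illustrative only, not the cited evaluation).
# Assumes opencv-python is installed; "img1.png" is a placeholder file name.
import time
import cv2

img = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)

detectors = {
    "SIFT": cv2.SIFT_create(),
    "BRISK": cv2.BRISK_create(),
    "ORB": cv2.ORB_create(),
    "AKAZE": cv2.AKAZE_create(),
    "MSER": cv2.MSER_create(),
}

for name, det in detectors.items():
    t0 = time.perf_counter()
    keypoints = det.detect(img, None)   # detection only; description could be timed separately
    dt = time.perf_counter() - t0
    print(f"{name:6s}: {len(keypoints):5d} keypoints in {dt * 1000:.1f} ms")
```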
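
Entry 8 combines SIFT keypoint detection, binary BRISK description, Hamming-distance matching and RANSAC-based rejection of false matches. The sketch below is a minimal, generic version of that detect-describe-match-verify pipeline in OpenCV, offered only as an illustration: it omits the POS-guided view simulation that is the paper's actual contribution, and the image file names are placeholders.

```python
# Minimal detect / describe / match / verify sketch (not the cited POS-guided ASIFT method).
# Assumes opencv-python and numpy; "uav_a.png" and "uav_b.png" are placeholder file names.
import cv2
import numpy as np

img1 = cv2.imread("uav_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("uav_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()      # scale-invariant keypoint detection
brisk = cv2.BRISK_create()    # binary description at the detected keypoints

kp1, des1 = brisk.compute(img1, sift.detect(img1, None))
kp2, des2 = brisk.compute(img2, sift.detect(img2, None))

# Hamming distance is the natural metric for binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:  # ratio test
        good.append(pair[0])

# A RANSAC homography fit discards the remaining false matches.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{int(mask.sum())} RANSAC inliers out of {len(good)} ratio-test matches")
```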
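
Entry 10 evaluates its descriptor inside a Bag-of-Features (BoF) object recognition system. The following sketch shows the generic BoF steps (codebook by k-means, histogram encoding, classifier on the histograms); it uses SIFT descriptors rather than the paper's M-LMP, assumes scikit-learn is available, and the image paths and labels are placeholders.

```python
# Generic Bag-of-Features sketch with SIFT descriptors (not the cited M-LMP descriptor).
# Assumes opencv-python, numpy and scikit-learn; paths and labels are placeholders.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = cv2.SIFT_create().detectAndCompute(img, None)
    return des if des is not None else np.empty((0, 128), np.float32)

train_paths = ["obj_001.png", "obj_002.png"]   # placeholder file names
train_labels = [0, 1]                          # placeholder class labels

all_des = [sift_descriptors(p) for p in train_paths]
codebook = KMeans(n_clusters=50, n_init=10).fit(np.vstack(all_des))  # visual vocabulary

def bof_histogram(des, k=50):
    words = codebook.predict(des)                          # assign each descriptor to a word
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist / max(hist.sum(), 1)                       # L1-normalised word histogram

X = np.array([bof_histogram(d) for d in all_des])
clf = SVC(kernel="rbf").fit(X, train_labels)   # any classifier over the histograms works
```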

Dissertations on the topic "Scale Invariant Feature Descriptor"

1

Emir, Erdem. "A Comparative Performance Evaluation Of Scale Invariant Interest Point Detectors For Infrared And Visual Images." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610159/index.pdf.

Abstract:
In this thesis, the performance of four state-of-the-art feature detectors, along with SIFT and SURF descriptors, in matching object features of mid-wave infrared, long-wave infrared and visual-band images is evaluated across viewpoints and changing distance conditions. The utilized feature detectors are the Scale Invariant Feature Transform (SIFT), multiscale Harris-Laplace, multiscale Hessian-Laplace and Speeded Up Robust Features (SURF) detectors, all of which are invariant to image scale and rotation. Features are extracted from images of different blackbodies, human faces and vehicles, and reliable matching performance is explored between different views of these objects, each in its own category. All of these feature detectors provide good matching performance in infrared-band images compared with visual-band images. The matching performance for mid-wave and long-wave infrared images is also compared, and it is observed that long-wave infrared images provide good matching performance for objects at lower temperatures, whereas mid-wave infrared images provide good matching performance for objects at higher temperatures. The SURF detector and descriptor are found to outperform the other detectors and descriptors for human face images in the long-wave infrared band.
(A simplified matching-score sketch in this spirit appears after this list of dissertations.)
2

Hall, Daniela. "Viewpoint independent recognition of objects from local appearance." Grenoble INPG, 2001. http://www.theses.fr/2001INPG0086.

3

Kerr, Dermot. "Autonomous Scale Invariant Feature Extraction." Thesis, University of Ulster, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502896.

4

Saad, Elhusain Salem. "Defocus Blur-Invariant Scale-Space Feature Extractions." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1418907974.

5

Shen, Yao. "Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc84275/.

Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristic of the object, for either object recognition or object tracking. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image be detectable even under changes in image scale, noise and illumination. Scale-invariant features have wide applications in image processing, such as image classification, object recognition and object tracking. In this thesis, color features and SIFT (scale-invariant feature transform) are considered as scale-invariant features. The classification, recognition and tracking results were evaluated with a novel evaluation criterion and compared with existing methods. I also studied different types of scale-invariant features for solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios in images. In order to differentiate image content, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I demonstrate the performance of the developed algorithms on several scene analysis tasks, including object tracking, video stabilization, medical video segmentation and scene classification.
6

Zhang, Zheng, and 张政. "Passivity assessment and model order reduction for linear time-invariant descriptor systems in VLSI circuit simulation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44909056.

Abstract:
Awarded the Li Ka Shing Prize for the best MPhil thesis in the Faculties of Dentistry, Engineering, Medicine and Science (The University of Hong Kong), 2009-2010. Master of Philosophy, Electrical and Electronic Engineering.
7

Accordino, Andrea. "Studio e sviluppo di descrittori locali per nuvole di punti basati su proprietà geometriche." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17919/.

Abstract:
This work proposes two new descriptors for point clouds: ReSHOT and KPL-Descriptor. In addition, several ideas for improving the performance of the whole feature-matching pipeline are tested. The work includes a comparison with pre-existing descriptors.
8

Lindeberg, Tony. "Scale Selection Properties of Generalized Scale-Space Interest Point Detectors." KTH, Beräkningsbiologi, CB, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101220.

Abstract:
Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features presented in Lindeberg (Int. J. Comput. Vis. 2010, under revision) and comprising: an enriched set of differential interest operators at a fixed scale including the Laplacian operator, the determinant of the Hessian, the new Hessian feature strength measures I and II and the rescaled level curve curvature operator, as well as an enriched set of scale selection mechanisms including scale selection based on local extrema over scale, complementary post-smoothing after the computation of non-linear differential invariants and scale selection based on weighted averaging of scale values along feature trajectories over scale. A theoretical analysis of the sensitivity to affine image deformations is presented, and it is shown that the scale estimates obtained from the determinant of the Hessian operator are affine covariant for an anisotropic Gaussian blob model. Among the other purely second-order operators, the Hessian feature strength measure I has the lowest sensitivity to non-uniform scaling transformations, followed by the Laplacian operator and the Hessian feature strength measure II. The predictions from this theoretical analysis agree with experimental results of the repeatability properties of the different interest point detectors under affine and perspective transformations of real image data. A number of less complete results are derived for the level curve curvature operator.

9

May, Michael. "Data analytics and methods for improved feature selection and matching." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Abstract:
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on using the Scale Invariant Feature Transform (SIFT). The preliminary analysis of the SIFT investigates how this algorithm functions. Included is an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating between images which are likely to contain corresponding regions and images which do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types, utilising a GPGPU implementation of SIFT. The results demonstrate which parameters are more important when optimising the algorithm and which areas within the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR) fusion-features process has been developed in which SIFT image features are matched within high-contrast scenes. Bracketed exposure images are analysed, and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. They are shown to reduce the effects of noise and artefacts that are introduced when extracting features from HDR images directly, and to have superior image matching performance. The final area is the development of a novel, 3D-based SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and class matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster, in order to discriminate between correct and incorrect matches using the a contrario methodology. The results show that the technique provides a method for discriminating between correct and incorrect matches and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
10

Decombas, Marc. "Compression vidéo très bas débit par analyse du contenu." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.

Abstract:
The objective of this thesis is to find new methods of semantic video compression compatible with a traditional encoder such as H.264/AVC. The main objective is to maintain the semantics rather than the global quality. A target bitrate of 300 kb/s has been set for defense and security applications. To this end, a complete compression chain has been proposed. A study of, and new contributions to, spatio-temporal saliency models have been carried out to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. In addition, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used for testing saliency models as well as for video compression is proposed, containing sequences with manually extracted binary masks. All the different approaches have been thoroughly validated by various tests. An extension of this work to video summarization applications is also proposed.
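
Entry 1 in this list evaluates how reliably keypoints detected in one view can be matched in another view, across infrared and visual bands. As a simplified stand-in for such an evaluation (not the thesis's actual protocol), the sketch below computes a crude matching score: the fraction of SIFT keypoints in one view that find a confident ratio-test match in the other. The file names are placeholders.

```python
# Simplified cross-view matching score with SIFT (illustrative stand-in only).
# Assumes opencv-python; "view_a.png" and "view_b.png" are placeholder file names.
import cv2

def matching_score(path_a, path_b, ratio=0.8):
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(a, None)
    kp_b, des_b = sift.detectAndCompute(b, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    # Fraction of keypoints in the first view that found a confident match.
    return len(good) / max(len(kp_a), 1)

print(matching_score("view_a.png", "view_b.png"))
```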

Book chapters on the topic "Scale Invariant Feature Descriptor"

1

Yue, Sicong, Qing Wang, and Rongchun Zhao. "Robust Wide Baseline Feature Point Matching Based on Scale Invariant Feature Descriptor." In Lecture Notes in Computer Science, 329–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-87442-3_42.

2

Banerjee, Biplab, Tanusree Bhattacharjee, and Nirmalya Chowdhury. "Image Object Classification Using Scale Invariant Feature Transform Descriptor with Support Vector Machine Classifier with Histogram Intersection Kernel." In Information and Communication Technologies, 443–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15766-0_71.

3

Rahman, Md Mahmudur, Sameer K. Antani, and George R. Thoma. "Biomedical Image Retrieval in a Fuzzy Feature Space with Affine Region Detection and Vector Quantization of a Scale-Invariant Descriptor." In Advances in Visual Computing, 261–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17277-9_27.

4

Kim, So Yeon, Yenewondim Biadgie, and Kyung-Ah Sohn. "Investigating the Effectiveness of E-mail Spam Image Data for Phone Spam Image Detection Using Scale Invariant Feature Transform Image Descriptor." In Lecture Notes in Electrical Engineering, 591–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46578-3_69.

5

Guo, Xiaojie, Xiaochun Cao, Jiawan Zhang, and Xuewei Li. "MIFT: A Mirror Reflection Invariant Feature Descriptor." In Computer Vision – ACCV 2009, 536–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12304-7_50.

6

Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 609–64. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-6684-9_25.

7

Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 709–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05744-1_25.

8

Gupta, Raj, and Anurag Mittal. "SMD: A Locally Stable Monotonic Change Invariant Feature Descriptor." In Lecture Notes in Computer Science, 265–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88688-4_20.

9

Vardy, Andrew, and Franz Oppacher. "A Scale Invariant Local Image Descriptor for Visual Homing." In Biomimetic Neural Learning for Intelligent Robots, 362–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11521082_22.

10

El-Mashad, Shady Y., and Amin Shoukry. "Towards a Robust Scale Invariant Feature Correspondence." In Lecture Notes in Computer Science, 33–43. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19941-2_4.


Conference papers on the topic "Scale Invariant Feature Descriptor"

1

Chen, Qiu, Koji Kotani, Feifei Lee, and Tadahiro Ohmi. "Scale-Invariant Feature Extraction by VQ-Based Local Image Descriptor." In 2008 International Conference on Computational Intelligence for Modelling Control & Automation. IEEE, 2008. http://dx.doi.org/10.1109/cimca.2008.134.

2

Huang, Hui, Lizhong Lu, Bin Yan, and Jian Chen. "A new scale invariant feature detector and modified SURF descriptor." In 2010 Sixth International Conference on Natural Computation (ICNC). IEEE, 2010. http://dx.doi.org/10.1109/icnc.2010.5583377.

3

Miao, Jun, Jun Chu, Guimei Zhang, and Ruina Feng. "A wide baseline matching method based on scale invariant feature descriptor." In Sixth International Symposium on Multispectral Image Processing and Pattern Recognition, edited by Mingyue Ding, Bir Bhanu, Friedrich M. Wahl, and Jonathan Roberts. SPIE, 2009. http://dx.doi.org/10.1117/12.832419.

4

Hindmarsh, Samuel, Peter Andreae, and Mengjie Zhang. "Genetic programming for improving image descriptors generated using the scale-invariant feature transform." In the 27th Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2425836.2425855.

5

Giles, C. Lee, and Arthur D. Fisher. "Architecture for Optical Generation of Invariant Contour Features." In Machine Vision. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/mv.1985.thd6.

Abstract:
Pattern recognition features derived from a pattern's contour can be made invariant [1, 2] to the pattern's scale, position and rotation. Such features (for example, Fourier descriptors) usually characterize a pattern well with a reduced feature set (in comparison to the feature set generated directly from two-dimensional gray-level patterns) and, through smart segmentation, avoid the problems of pattern shading. Contour features are significantly more sensitive to small changes in pattern shape than two-dimensional gray-level features; however, contour feature calculation is easily disrupted by contour discontinuities. Although contour features have recently been described in a hybrid optical-digital image processing system [3], we describe an architecture which optically generates different classes of invariant contour features in parallel and is insensitive to small contour discontinuities. The normalization which produces the invariance properties of these features is discussed.
(An illustrative digital computation of such invariant Fourier contour descriptors appears after this list of conference papers.)
6

Olszewska, J. I., and D. Wilson. "Hausdorff-distance enhanced matching of Scale Invariant Feature Transform descriptors in context of image querying." In 2012 IEEE 16th International Conference on Intelligent Engineering Systems (INES). IEEE, 2012. http://dx.doi.org/10.1109/ines.2012.6249809.

7

Sheng, Y., and H. H. Arsenault. "Invariant recognition using circular Fourier radial-mellin descriptors." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.the3.

Abstract:
An invariant moment method is used to achieve image recognition and classification that is invariant under changes of position, rotation, contrast, and scale. The generalized image descriptors are calculated from circular Fourier radial Mellin transforms, which are radial moments of circular harmonic functions. The radial moments are redundant, and only one or a few orders need be used. The selection of the harmonic order depends on the geometrical properties of the image. The normalization procedure used to obtain scale and contrast invariance yields a weighting effect on the invariant features. Multiclass pattern recognition invariant under changes of position, orientation, scale, and contrast was achieved on a set of letters using the nearest-neighbor rule. Experimental results are shown, including the classification of images degraded by high levels of noise.
8

Hu, Shuaizhou, Xinyao Zhang, Hao-yu Liao, Xiao Liang, Minghui Zheng, and Sara Behdad. "Deep Learning and Machine Learning Techniques to Classify Electrical and Electronic Equipment." In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-71403.

Abstract:
Remanufacturing sites often receive products with different brands, models, conditions, and quality levels. Proper sorting and classification of the waste stream is a primary step in efficiently recovering and handling used products. Correct classification is particularly crucial in future electronic waste (e-waste) management sites equipped with Artificial Intelligence (AI) and robotic technologies. Robots should be equipped with proper algorithms to recognize and classify products with different features and prepare them for assembly and disassembly tasks. In this study, two categories of techniques, Machine Learning (ML) and Deep Learning (DL), are used to classify consumer electronics. The ML models include Naïve Bayes with Bernoulli, Gaussian, and Multinomial distributions, and Support Vector Machine (SVM) algorithms with four kernels: Linear, Radial Basis Function (RBF), Polynomial, and Sigmoid. The DL models include VGG-16, GoogLeNet, Inception-v3, Inception-v4, and ResNet-50. The above-mentioned models are used to classify three laptop brands: Apple, HP, and ThinkPad. First, the Edge Histogram Descriptor (EHD) and Scale Invariant Feature Transform (SIFT) are used to extract features as inputs to the ML models for classification. The DL models use laptop images directly, without a separate feature-extraction step. The trained models are slightly overfitting due to the limited dataset and the complexity of the model parameters. Despite this slight overfitting, the models can identify each brand. The findings show that the DL models outperform the ML models. Among the DL models, GoogLeNet has the highest performance in identifying the laptop brands.
(A minimal sketch of the classical classifier-comparison step appears after this list of conference papers.)
9

Moreno-Noguer, Francesc. "Deformation and illumination invariant feature point descriptor." In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011. http://dx.doi.org/10.1109/cvpr.2011.5995529.

10

Geng, Z. X., and Y. Q. Qiao. "An Improved Illumination Invariant SURF Image Feature Descriptor." In 2017 International Conference on Virtual Reality and Visualization (ICVRV). IEEE, 2017. http://dx.doi.org/10.1109/icvrv.2017.00090.

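
Entry 5 builds invariant features from a pattern's contour, of which Fourier descriptors are the classic example. The cited paper proposes an optical architecture; purely as a digital illustration of the underlying normalisation, the sketch below computes Fourier descriptors of a closed contour and normalises them for translation, scale, rotation and starting-point invariance. The file name shape.png is a placeholder.

```python
# Standard Fourier-descriptor construction for a closed contour (textbook illustration,
# not the optical architecture of the cited paper). Assumes opencv-python and numpy;
# "shape.png" is a placeholder binary shape image.
import cv2
import numpy as np

img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = max(contours, key=cv2.contourArea).squeeze()   # (N, 2) boundary points

z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal x + iy
F = np.fft.fft(z)

F[0] = 0                         # drop the DC term      -> translation invariance
F = F / np.abs(F[1])             # scale by 1st harmonic  -> scale invariance
descriptor = np.abs(F[1:11])     # keep magnitudes        -> rotation / start-point invariance
print(descriptor)
```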
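
Entry 8 compares classical classifiers, including SVMs with linear, RBF, polynomial and sigmoid kernels, on features extracted from laptop images. The sketch below is a minimal illustration of that classifier-comparison step with scikit-learn; the feature matrix X and labels y are random placeholders standing in for the EHD/SIFT-based features used in the paper.

```python
# Comparing SVM kernels (plus a Gaussian Naive Bayes baseline) on a prepared feature
# matrix, as a stand-in for the classifier-selection step described in entry 8.
# Assumes scikit-learn and numpy; X and y are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 64))      # placeholder feature vectors (e.g., EHD/SIFT-based)
y = rng.integers(0, 3, size=90)    # placeholder labels for three laptop brands

models = {kernel: SVC(kernel=kernel) for kernel in ("linear", "rbf", "poly", "sigmoid")}
models["gaussian_nb"] = GaussianNB()

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name:12s} mean accuracy: {scores.mean():.2f}")
```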

Reports of organizations on the topic "Scale Invariant Feature Descriptor"

1

Lei, Lydia. Three dimensional shape retrieval using scale invariant feature transform and spatial restrictions. Gaithersburg, MD: National Institute of Standards and Technology, 2009. http://dx.doi.org/10.6028/nist.ir.7625.
