A selection of scholarly literature on the topic "Invariant Feature Transform"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Invariant Feature Transform".

Next to every work in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Invariant Feature Transform"

1

Lindeberg, Tony. "Scale Invariant Feature Transform." Scholarpedia 7, no. 5 (2012): 10491. http://dx.doi.org/10.4249/scholarpedia.10491.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Diaz-Escobar, Julia, Vitaly Kober, and Jose A. Gonzalez-Fraga. "LUIFT: LUminance Invariant Feature Transform." Mathematical Problems in Engineering 2018 (October 28, 2018): 1–17. http://dx.doi.org/10.1155/2018/3758102.

Full text of the source
Abstract:
An illumination-invariant method for computing local feature points and descriptors, referred to as the LUminance Invariant Feature Transform (LUIFT), is proposed. The method extracts the most significant local features from images degraded by nonuniform illumination, geometric distortions, and heavy scene noise. Unlike most state-of-the-art descriptors, it relies on image phase information rather than intensity variations, and is therefore robust to nonuniform illumination and noise. The monogenic scale-space framework is first used to compute the local phase, orientation, energy, and phase congruency of the image at different scales. A modified Harris corner detector is then applied to the monogenic signal components to locate feature points, and the final descriptor is built from histograms of oriented gradients of phase congruency. Computer simulations show that the proposed method yields superior feature detection and matching performance under illumination change, noise degradation, and slight geometric distortions compared with state-of-the-art descriptors.
APA, Harvard, Vancouver, ISO, and other styles
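The monogenic scale-space step described in this abstract can be illustrated with a minimal single-scale sketch: the Riesz transform is applied in the frequency domain to obtain local energy, phase, and orientation. This is a generic illustration with assumed choices (DC removal standing in for bandpass filtering), not the authors' implementation, whose multi-scale analysis and phase congruency computation are omitted.

```python
import numpy as np

def monogenic_signal(im):
    """Riesz-transform (monogenic signal) components of a 2-D image."""
    rows, cols = im.shape
    u = np.fft.fftfreq(cols)[np.newaxis, :]     # horizontal frequencies
    v = np.fft.fftfreq(rows)[:, np.newaxis]     # vertical frequencies
    radius = np.hypot(u, v)
    radius[0, 0] = 1.0                          # avoid divide-by-zero at DC
    H1, H2 = 1j * u / radius, 1j * v / radius   # Riesz filters
    f = im - im.mean()                          # crude bandpass: remove DC
    F = np.fft.fft2(f)
    h1 = np.real(np.fft.ifft2(F * H1))
    h2 = np.real(np.fft.ifft2(F * H2))
    energy = np.sqrt(f**2 + h1**2 + h2**2)      # local energy
    phase = np.arctan2(np.hypot(h1, h2), f)     # local phase, in [0, pi]
    orient = np.arctan2(h2, h1)                 # local orientation
    return energy, phase, orient
```

In the full method, such phase and energy maps feed the modified Harris detector and the phase-congruency histograms.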
3

B.Daneshvar, M. "SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 27–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-27-2017.

Full text of the source
Abstract:
This paper presents an enhanced method for extracting invariant features from images based on the Scale Invariant Feature Transform (SIFT). Although SIFT features are invariant to image scale, rotation, additive noise, and changes in illumination, the algorithm produces an excess of keypoints. The proposed Scale Invariant Feature Transform plus Hue (SIFTH) removes excess keypoints based on their Euclidean distances and adds a hue feature, extracted from the hue and illumination values of the HSI colour-space version of the target image, to the feature vector to speed up the matching phase, which is the aim of feature extraction. The difference of hue features and the Mean Square Error (MSE) of orientation histograms are used to find the keypoint most similar to the one under consideration. The keypoint matching method identifies correct keypoints robustly amid clutter and occlusion while achieving real-time performance, and yields a similarity factor for each pair of keypoints; removing excess keypoints with SIFTH further helps the matching algorithm achieve this goal.
APA, Harvard, Vancouver, ISO, and other styles
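The matching criterion described above, combining a hue difference with the MSE of orientation histograms, can be sketched as a single score. The weighting and the circular hue normalisation below are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def sifth_similarity(kp_a, kp_b, w_hue=0.5):
    """Dissimilarity of two keypoints given (hue, orientation-histogram) pairs.

    Lower values mean more similar keypoints. The weight w_hue is a
    hypothetical parameter, not taken from the paper.
    """
    hue_a, hist_a = kp_a
    hue_b, hist_b = kp_b
    # Hue is treated as an angle in [0, 1); take the circular difference
    d_hue = abs(hue_a - hue_b)
    d_hue = min(d_hue, 1.0 - d_hue)
    # MSE between the two orientation histograms
    mse = float(np.mean((np.asarray(hist_a) - np.asarray(hist_b)) ** 2))
    return w_hue * d_hue + (1.0 - w_hue) * mse
```

A candidate match for a keypoint would be the keypoint in the other image minimising this score.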
4

Taha, Mohammed A., Hanaa M. Ahmed, and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)." Webology 19, no. 1 (January 20, 2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.

Full text of the source
Abstract:
Iris biometric authentication is considered one of the most dependable biometric modalities for identifying persons, since iris patterns are invariant, stable, and distinctive. Owing to this reliability, iris recognition has received increasing attention. Current iris recognition methods give good results, especially when NIR imaging and controlled capture conditions are used in cooperation with the user. Images captured in the visible wavelength (VW), by contrast, are affected by noise such as blur, eyelid skin, occlusion, and reflections, which degrades the overall performance of recognition systems. This article presents an effective iris feature extraction strategy based on the scale-invariant feature transform (SIFT) for both NIR and visible-spectrum iris images. The proposed method was tested on different databases: CASIA v1 and ITTD v1 as NIR images, and UBIRIS v1 as visible-light colour images. The proposed system gave good accuracy rates compared with existing systems, achieving 96.2% on CASIA v1 and 96.4% on ITTD v1, while accuracy dropped to 84.0% on UBIRIS v1.
APA, Harvard, Vancouver, ISO, and other styles
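Recognition pipelines built on SIFT descriptors typically match feature sets with Lowe's nearest-neighbour ratio test. The sketch below is a generic version of that step, not the authors' exact pipeline, and the ratio threshold is an assumed value.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match two descriptor sets with the nearest-neighbour ratio test.

    A match (i, j) is kept only when descriptor i's nearest neighbour j in
    desc_b is clearly better than its second-nearest neighbour.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The number of surviving matches between a probe and each enrolled iris template can then serve as the recognition score.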
5

Yu, Ying, Cai Lin Dong, Bo Wen Sheng, Wei Dan Zhong, and Xiang Lin Zou. "The New Approach to the Invariant Feature Extraction Using Ridgelet Transform." Applied Mechanics and Materials 651-653 (September 2014): 2241–44. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.2241.

Full text of the source
Abstract:
To meet the requirement of multi-directional selectivity, the paper proposes a new approach to invariant feature extraction for handwritten Chinese characters, built on the ridgelet transform. First, the Radon transform converts rotations of the original image into circular shifts in the Radon domain. Because the Fourier transform is shift invariant in magnitude, applying a one-dimensional Fourier transform in the Radon domain yields magnitude matrices whose rotation invariance is a typical feature, which is highly beneficial for rotation-invariant feature extraction. A one-dimensional wavelet transform is then carried out along the rows, providing good frequency selectivity and making it possible to extract stroke features in the appropriate frequency bands. Finally, the mean values, standard deviations, and energy values extracted from the ridgelet sub-bands form the feature vector. The approach satisfies the requirements of automatic form processing for the recognition of handwritten Chinese characters.
APA, Harvard, Vancouver, ISO, and other styles
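The rotation-invariance argument above rests on a simple fact: a rotation becomes a circular shift along the angle axis of the Radon sinogram, and the magnitude of a 1-D Fourier transform is unchanged by circular shifts. The underlying 1-D principle can be checked directly:

```python
import numpy as np

# A circular shift stands in for the effect of rotation in the Radon domain.
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
shifted = np.roll(x, 3)                   # circular shift of the signal

mag = np.abs(np.fft.fft(x))
mag_shifted = np.abs(np.fft.fft(shifted))

# The Fourier magnitudes are identical, so features built from them are
# invariant to the shift (and hence to the original rotation).
assert np.allclose(mag, mag_shifted)
```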
6

Chris, Lina Arlends, Bagus Mulyawan, and Agus Budi Dharmawan. "A Leukocyte Detection System Using Scale Invariant Feature Transform Method." International Journal of Computer Theory and Engineering 8, no. 1 (February 2016): 69–73. http://dx.doi.org/10.7763/ijcte.2016.v8.1022.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

CHEN, G. Y., and W. F. XIE. "CONTOUR-BASED FEATURE EXTRACTION USING DUAL-TREE COMPLEX WAVELETS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1233–45. http://dx.doi.org/10.1142/s0218001407005867.

Full text of the source
Abstract:
A contour-based feature extraction method is proposed using the dual-tree complex wavelet transform and the Fourier transform. Features are extracted from the 1D signals r and θ, which reduces processing memory and time. The approximate shift invariance of the dual-tree complex wavelet transform and of the Fourier transform guarantees that the method is invariant to translation, rotation, and scaling. The method is used to recognize aircraft across different rotation angles and scaling factors. Experimental results show that it achieves better recognition rates than approaches using only Fourier features or Granlund's method. Its success is due to the desirable shift invariance of the dual-tree complex wavelet transform, the translation invariance of the Fourier spectrum, and the new complete representation of the pattern's outer contour.
APA, Harvard, Vancouver, ISO, and other styles
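The role of the 1D signal r can be illustrated with the classic Fourier-descriptor recipe: the centroid-distance signal of a closed contour, Fourier transformed and scale-normalised, is invariant to translation, rotation, starting point, and scale. This is a simplification for illustration; the paper's dual-tree complex wavelet stage is omitted.

```python
import numpy as np

def contour_descriptor(points, n_coeffs=8):
    """Invariant descriptor of a closed contour via its r (centroid-distance) signal.

    Translation drops out through the centroid, rotation/starting point through
    the Fourier magnitude, and scale through normalisation by the DC term.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    r = np.linalg.norm(pts - centroid, axis=1)   # the 1D signal r
    mag = np.abs(np.fft.fft(r))
    return mag[1:n_coeffs + 1] / mag[0]          # scale normalisation
```

Two contours that differ only by similarity transforms and starting point produce the same descriptor.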
8

Hwang, Sung-Wook, Taekyeong Lee, Hyunbin Kim, Hyunwoo Chung, Jong Gyu Choi, and Hwanmyeong Yeo. "Classification of wood knots using artificial neural networks with texture and local feature-based image descriptors." Holzforschung 76, no. 1 (January 1, 2021): 1–13. http://dx.doi.org/10.1515/hf-2021-0051.

Full text of the source
Abstract:
This paper describes feature-based techniques for wood knot classification. For automated classification of macroscopic wood knot images, models were established using artificial neural networks with texture and local feature descriptors, and the performance of the feature extraction algorithms was compared. Classification models trained with the texture descriptors, the gray-level co-occurrence matrix and the local binary pattern, achieved better performance than those trained with the local feature descriptors, the scale-invariant feature transform and the dense scale-invariant feature transform. This confirms that wood knot classification is better suited to texture classification than to an approach based on morphological classification. The gray-level co-occurrence matrix produced the highest F1 score despite representing images with relatively low-dimensional feature vectors. The scale-invariant feature transform could not detect a sufficient number of features in the knot images; hence, the histogram of oriented gradients and the dense scale-invariant feature transform, which describe the entire image, were better suited to wood knot classification. The artificial neural network model provided better classification performance than the support vector machine and k-nearest neighbor models, which suggests the suitability of a nonlinear classification model for this task.
APA, Harvard, Vancouver, ISO, and other styles
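One of the texture descriptors compared above, the local binary pattern, is easy to sketch. This is a minimal 8-neighbour variant with assumed parameters (radius 1, no uniform-pattern mapping), not necessarily the configuration used in the study.

```python
import numpy as np

def lbp_histogram(im):
    """Normalised 256-bin histogram of 8-neighbour local binary patterns."""
    c = im[1:-1, 1:-1]                      # centre pixels (border excluded)
    # The eight neighbours, clockwise from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = im[1 + dy:im.shape[0] - 1 + dy, 1 + dx:im.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit   # set bit if neighbour >= centre
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Such histograms, like GLCM statistics, give a fixed-length texture representation that a neural network, SVM, or k-NN classifier can consume directly.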
9

Huang, Yongdong, Jianwei Yang, Sansan Li, and Wenzhen Du. "Polar radius integral transform for affine invariant feature extraction." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 01 (January 2017): 1750005. http://dx.doi.org/10.1142/s0219691317500059.

Full text of the source
Abstract:
An affine transform describes the same target viewed from different viewpoints, giving an approximate model of the relationship between the resulting images; affine-invariant feature extraction therefore plays an important role in object recognition and image registration. First, the polar radius integral transform (PRIT) is defined, building on the fact that an affine transform maps straight lines to straight lines: PRIT integrates along the polar radius direction and converts images into closed curves that undergo the same affine transform as the original images. Second, an affine-invariant feature extraction algorithm based on PRIT is given, which combines contour-based with region-based methods, requires comparatively little computation, and can extract features from objects with several components. Finally, the robustness of PRIT to noise (Gaussian as well as salt-and-pepper) is discussed. Simulation results show that PRIT effectively extracts affine-invariant features and that the low-order PRIT is highly robust to noise.
APA, Harvard, Vancouver, ISO, and other styles
10

Wu, Shu Guang, Shu He, and Xia Yang. "The Application of SIFT Method towards Image Registration." Advanced Materials Research 1044-1045 (October 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1392.

Full text of the source
Abstract:
The scale-invariant feature transform (SIFT) is commonly used in object recognition, but the standard algorithm suffers from large memory consumption and low computation speed. Among image registration methods based on point features, SIFT features are invariant to image scale and rotation and provide robust matching across a substantial range of affine distortion. Experiments show that, while keeping registration accuracy stable, the proposed algorithm greatly reduces the memory requirement and improves efficiency, making it applicable to registering remote sensing images of large areas.
APA, Harvard, Vancouver, ISO, and other styles
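Registration from matched point features, as described here, typically estimates a transform with RANSAC: minimal samples propose a model, and the model with the most inliers wins. The sketch below estimates a 2-D affine transform under assumed parameters; it illustrates the generic step, not the paper's specific optimisations.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, tol=1.0, seed=0):
    """RANSAC estimate of a 2-D affine transform from matched point pairs.

    src, dst: (N, 2) arrays of corresponding points. Returns the 3x2 affine
    matrix A (applied as [x, y, 1] @ A) and its inlier count.
    """
    rng = np.random.default_rng(seed)
    src_h = np.c_[src, np.ones(len(src))]        # homogeneous coordinates
    best_A, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        A = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)[0]
        err = np.linalg.norm(src_h @ A - dst, axis=1)  # reprojection error
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best_A, best_inliers = A, inliers
    return best_A, best_inliers
```

In practice the winning model is refit on all its inliers for a final least-squares estimate.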

Dissertations on the topic "Invariant Feature Transform"

1

May, Michael. "Data analytics and methods for improved feature selection and matching." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Full text of the source
Abstract:
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas are researched; these make up the main chapters of the thesis and focus on the Scale Invariant Feature Transform (SIFT). The preliminary analysis investigates how the SIFT functions, including an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows its success in discriminating images likely to contain corresponding regions from those that do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types, utilising a GPGPU implementation of SIFT; the results demonstrate which parameters matter most when optimising the algorithm and which areas of the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR) fusion-features process has been developed in which SIFT image features are matched within high-contrast scenes: bracketed exposure images are analysed, and features are extracted and combined across images to create a feature set describing a larger dynamic range. These features are shown to reduce the effects of the noise and artefacts introduced when extracting features from HDR images directly, and to have superior image matching performance. The final area is the development of a novel, 3D-based SIFT weighting technique that uses the 3D data from a pair of stereo images to cluster and class matched SIFT features; weightings are applied to the matches based on the 3D properties of the features and how they cluster, in order to discriminate between correct and incorrect matches using the a contrario methodology.
The results show that the technique provides a method for discriminating between correct and incorrect matches, and that the a contrario methodology merits future investigation as a method for correct feature match prediction.
APA, Harvard, Vancouver, ISO, and other styles
2

Decombas, Marc. "Compression vidéo très bas débit par analyse du contenu." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.

Full text of the source
Abstract:
The objective of this thesis is to find new methods for semantic video compression compatible with a traditional encoder such as H.264/AVC. The main objective is to maintain the semantics rather than the global quality; a target bitrate of 300 kb/s has been fixed for defense and security applications. To this end, a complete compression chain has been proposed. A study of, and new contributions to, spatio-temporal saliency models have been carried out to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. A metric combining SIFT points and SSIM has also been created to measure the quality of objects without being disturbed by less important areas, which contain most of the artifacts. A database that can be used for testing the saliency model as well as for video compression is proposed, containing sequences with manually extracted binary masks. All the approaches have been thoroughly validated by different tests, and an extension of this work to video summarization applications is proposed.
APA, Harvard, Vancouver, ISO, and other styles
3

Dardas, Nasser Hasan Abdel-Qader. "Real-time Hand Gesture Detection and Recognition for Human Computer Interaction." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23499.

Full text of the source
Abstract:
This thesis focuses on bare-hand gesture recognition, proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interacting with an application via hand gestures. The first stage of the system detects and tracks a bare hand in a cluttered background using face subtraction, skin detection, and contour comparison. The second stage recognizes hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. The recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints of every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of each image into a unified-dimension histogram vector (bag-of-words) after k-means clustering; this histogram is treated as the input vector for a multi-class SVM that builds the classifier. In the testing stage, the hand is detected in every frame captured from a webcam using the proposed algorithm; keypoints are then extracted from the small image containing the detected hand posture, mapped through the cluster model into a bag-of-words vector, and fed into the multi-class SVM classifier to recognize the gesture. A second hand gesture recognition system was proposed using Principal Component Analysis (PCA): the most significant eigenvectors and the weights of the training images are determined; at test time, the small image containing the detected hand is projected onto those eigenvectors to form its test weights, and the minimum Euclidean distance between the test weights and the training weights of each training image identifies the gesture.
Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame uses a stationary bicycle as one of the main inputs for game playing: the user controls left-right movement and shooting actions with a set of hand gesture commands, while in the second game the user steers a helicopter over the city with a set of hand gesture commands.
APA, Harvard, Vancouver, ISO, and other styles
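The bag-of-words step of the training stage can be sketched as nearest-word quantisation of local descriptors against a K-means codebook, assuming the codebook has already been learned. This generic sketch shows the representation an SVM would consume, not the thesis's full pipeline.

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Quantise local descriptors into a normalised bag-of-words histogram.

    descriptors: (N, D) local features from one image.
    codebook:    (K, D) cluster centres from k-means.
    """
    # Squared distances from every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                     # nearest word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                      # fixed-length image vector
```

Every image, whatever its keypoint count, is thereby reduced to one K-dimensional vector suitable for a multi-class SVM.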
4

Ljungberg, Malin. "Design of High Performance Computing Software for Genericity and Variability." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7768.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Sahin, Yavuz. "A Programming Framework To Implement Rule-based Target Detection In Images." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610213/index.pdf.

Full text of the source
Abstract:
An expert system is useful when conventional programming techniques fall short of capturing human expert knowledge and making decisions using this information. This study describes a framework for capturing expert knowledge in the form of a decision tree, which can then be used to make decisions based on the captured knowledge. The framework is generic and can be used to create domain-specific expert systems for different problems. Features are created or processed by the nodes of the decision tree, and a final conclusion is reached for each feature. The framework supplies three types of node for constructing a decision tree: decision nodes, which guide the search path with their answers; operator nodes, which create new features from their inputs; and end nodes, which correspond to a conclusion about a feature. Once the nodes of the tree are developed, the user can interactively create the decision tree and run the supplied inference engine to collect results on a specific problem. The framework is demonstrated in two case studies, "Airport Runway Detection in High Resolution Satellite Images" and "Urban Area Detection in High Resolution Satellite Images", in which linear features are used for structural decisions and Scale Invariant Feature Transform (SIFT) features are used to test for the existence of man-made structures.
APA, Harvard, Vancouver, ISO, and other styles
6

Murtin, Chloé Isabelle. "Traitement d’images de microscopie confocale 3D haute résolution du cerveau de la mouche Drosophile." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI081/document.

Full text of the source
Abstract:
Although laser scanning microscopy is a powerful tool for obtaining thin optical sections, the possible imaging depth is limited by the working distance of the microscope objective and by the image degradation caused by attenuation of both the excitation laser beam and the light emitted from the fluorescence-labeled objects. Several workaround techniques have been employed to overcome this problem, such as recording images from both sides of the sample or progressively cutting off the sample surface. The different views must then be combined into a single volume, but a straightforward concatenation is often not possible because of the small movements that occur during the acquisition procedure, not only translations along the x, y, and z axes but also rotations around those axes, which make the fusion difficult. To address this problem, a new algorithm called 2D-SIFT-in-3D-Space was implemented, using SIFT (Scale Invariant Feature Transform) to achieve robust registration of large image stacks. The method registers the images by correcting rotations and translations around the three axes separately, through the extraction and matching of stable features in 2D cross-sections. To evaluate registration quality, a simulator was created that generates artificial laser scanning image stacks: a mock pair of stacks, one of which is the other rotated by known angles and filtered with known noise. For a precise, natural-looking concatenation of two images, a module was also developed that progressively corrects sample brightness and contrast depending on the distance to the sample surface.
These tools were successfully used to generate three-dimensional high-resolution images of the Drosophila melanogaster fly brain, in particular its octopaminergic and dopaminergic neurons and their synapses. These monoamine neurons appear to be determinant in the correct operation of the central nervous system, and a precise, systematic analysis of their evolution and interaction is necessary to understand its mechanisms. While an evolution of their connectivity over time could not be demonstrated through analysis of the presynaptic sites, the study suggests that inactivating one of these neuron types triggers drastic changes in the neural network.
APA, Harvard, Vancouver, ISO, and other styles
7

Eskizara, Omer. "3d Geometric Hashing Using Transform Invariant Features." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610546/index.pdf.

Full text of the source
Abstract:
3D object recognition is performed by geometric hashing with transformation- and scale-invariant 3D surface features. Features are extracted from object surfaces through a scale-space search in which the size of each feature is also estimated; the scale space is constructed from orientation-invariant surface curvature values that classify the shape of each surface point. Extracted features are grouped into triplets, and orientation-invariant descriptors are defined for each triplet. Each pose of each object is indexed in a hash table using these triplets, and for scale-invariant matching, cosine similarity is applied to the scale-variant triplet variables. Tests were performed on the Stuttgart database, with 66 poses of 42 objects stored in the hash table during training and 258 poses of the 42 objects used during testing; a recognition rate of 90.97% was achieved.
APA, Harvard, Vancouver, ISO, and other styles
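The core of geometric hashing is an invariant index computed from feature triplets. As a simplified 2-D stand-in for the thesis's 3-D surface-feature triplets, the quantised side ratios of the triangle a triplet forms are invariant to translation, rotation, and scale; the bin count is an assumed parameter.

```python
import numpy as np

def triplet_key(p1, p2, p3, n_bins=10):
    """Translation/rotation/scale-invariant hash key for a point triplet.

    The two smaller side lengths are divided by the largest, and the
    resulting ratios in (0, 1] are quantised into a hash-table index.
    """
    sides = sorted([np.linalg.norm(p1 - p2),
                    np.linalg.norm(p2 - p3),
                    np.linalg.norm(p3 - p1)])
    r1, r2 = sides[0] / sides[2], sides[1] / sides[2]
    return (int(r1 * n_bins), int(r2 * n_bins))
```

During training, each triplet's key maps to the (object, pose) entries that produced it; at test time, keys computed from the scene vote for the stored poses.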
8

Dellinger, Flora. "Descripteurs locaux pour l'imagerie radar et applications." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0037/document.

Повний текст джерела
Анотація:
Nous étudions ici l’intérêt des descripteurs locaux pour les images satellites optiques et radar. Ces descripteurs, par leurs invariances et leur représentation compacte, offrent un intérêt pour la comparaison d’images acquises dans des conditions différentes. Facilement applicables aux images optiques, ils offrent des performances limitées sur les images radar, en raison de leur fort bruit multiplicatif. Nous proposons ici un descripteur original pour la comparaison d’images radar. Cet algorithme, appelé SAR-SIFT, repose sur la même structure que l’algorithme SIFT (détection de points-clés et extraction de descripteurs) et offre des performances supérieures pour les images radar. Pour adapter ces étapes au bruit multiplicatif, nous avons développé un opérateur différentiel, le Gradient par Ratio, permettant de calculer une norme et une orientation du gradient robustes à ce type de bruit. Cet opérateur nous a permis de modifier les étapes de l’algorithme SIFT. Nous présentons aussi deux applications pour la télédétection basées sur les descripteurs. En premier, nous estimons une transformation globale entre deux images radar à l’aide de SAR-SIFT. L’estimation est réalisée à l’aide d’un algorithme RANSAC et en utilisant comme points homologues les points-clés mis en correspondance. Enfin nous avons mené une étude prospective sur l’utilisation des descripteurs pour la détection de changements en télédétection. La méthode proposée compare les densités de points-clés mis en correspondance aux densités de points-clés détectés pour mettre en évidence les zones de changement
We study here the interest of local features for optical and SAR images. These features, because of their invariances and their dense representation, offer a real interest for the comparison of satellite images acquired under different conditions. While it is easy to apply them to optical images, they offer limited performances on SAR images, because of their multiplicative noise. We propose here an original feature for the comparison of SAR images. This algorithm, called SAR-SIFT, relies on the same structure as the SIFT algorithm (detection of keypoints and extraction of features) and offers better performances for SAR images. To adapt these steps to multiplicative noise, we have developed a differential operator, the Gradient by Ratio, allowing to compute a magnitude and an orientation of the gradient robust to this type of noise. This operator allows us to modify the steps of the SIFT algorithm. We present also two applications for remote sensing based on local features. First, we estimate a global transformation between two SAR images with help of SAR-SIFT. The estimation is realized with help of a RANSAC algorithm and by using the matched keypoints as tie points. Finally, we have led a prospective study on the use of local features for change detection in remote sensing. The proposed method consists in comparing the densities of matched keypoints to the densities of detected keypoints, in order to point out changed areas
9

Hejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.

Abstract:
This thesis describes methods for reconstructing 3D scenes from photographs and videos using the Structure from Motion approach. Based on these methods, new software capable of automatically reconstructing point clouds and polygonal models from ordinary images and videos was implemented. The software combines a variety of existing and custom solutions and links them clearly into one easily executable application. The reconstruction consists of feature-point detection, pairwise matching, bundle adjustment, stereoscopic algorithms, and polygonal-model creation from the point cloud using the PCL library. The program is based on Bundler and PMVS. The Poisson surface reconstruction algorithm, simple triangulation, and a custom reconstruction method based on plane segmentation were used for polygonal-model creation.
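The pairwise-matching step in such Structure-from-Motion pipelines is typically Lowe's nearest-neighbour ratio test applied to feature descriptors. A minimal numpy version of that test (our sketch, not the thesis code, which relies on Bundler):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    A candidate is accepted only if its best match in desc_b is clearly
    closer than the second best; ambiguous matches are discarded before
    they can pollute bundle adjustment.
    """
    desc_b = np.asarray(desc_b, dtype=np.float64)
    matches = []
    for i, d in enumerate(np.asarray(desc_a, dtype=np.float64)):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, j2 = np.argsort(dists)[:2]        # best and second-best neighbour
        if dists[j] < ratio * dists[j2]:
            matches.append((i, int(j)))
    return matches
```

In a full pipeline the surviving matches feed the bundle-adjustment stage, which refines camera poses and the sparse point cloud jointly.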
10

Saravi, Sara. "Use of Coherent Point Drift in computer vision applications." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.

Abstract:
This thesis presents the novel use of Coherent Point Drift (CPD) to improve the robustness of a number of computer vision applications. The CPD approach includes two methods for registering two images, rigid and non-rigid point-set registration, distinguished by the transformation model used. The key characteristic of a rigid transformation is that distances between points are preserved, so it can be used in the presence of translation, rotation, and scaling. Non-rigid transformations, such as affine transforms, allow registration under non-uniform scaling and skew. The idea is to move one point set coherently to align with the second point set. The CPD method finds both the transformation and the correspondence between the two point sets at the same time, without requiring an a priori choice of transformation model. The first part of this thesis focuses on speaker identification in video conferencing. A real-time, audio-coupled, video-based approach is presented that concentrates on the video analysis side rather than on audio analysis, which is known to be prone to errors. CPD is used for lip-movement detection, and a temporal face-detection approach minimises false positives when the face-detection algorithm fails. The second part of the thesis focuses on multi-exposure and multi-focus image fusion with compensation for camera shake. Scale Invariant Feature Transform (SIFT) keypoints are first detected in the images being fused. This point set is then reduced to remove outliers using RANSAC (RANdom SAmple Consensus), and finally the point sets are registered using CPD with non-rigid transformations. The registered images are then fused with a Contourlet-based image fusion algorithm that uses a novel alpha blending and filtering technique to minimise artefacts.
The thesis evaluates the performance of the algorithm against a number of state-of-the-art approaches, including the key commercial products currently on the market, showing significantly improved subjective quality in the fused images. The final part of the thesis presents a novel approach to vehicle make and model recognition (VMMR) in CCTV footage. CPD is used to remove the skew of detected vehicles, since CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approach angles. A LESH (Local Energy Shape Histogram) feature-based approach is used for make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms maximise the reliability of the final outcome. Experimental results show that the proposed system achieves an accuracy in excess of 95% when tested on real CCTV footage with no prior camera calibration.
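To make the rigid transformation model concrete: when correspondences are already known, the classic Kabsch/Procrustes solution recovers the distance-preserving rotation and translation in closed form; CPD's contribution is to estimate the correspondences and the transformation jointly. A minimal numpy sketch of the known-correspondence case (ours, not from the thesis):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch): find R, t with dst ~ src @ R.T + t.

    Assumes row i of src corresponds to row i of dst, unlike CPD, which
    infers the correspondences as part of the optimisation.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Because R is orthogonal, pairwise distances are preserved exactly, which is the defining property of the rigid model the thesis contrasts with affine and non-rigid registration.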

Books on the topic "Invariant Feature Transform"

1

Huybrechts, D. Fourier-Mukai Transforms in Algebraic Geometry. Oxford University Press, 2007. http://dx.doi.org/10.1093/acprof:oso/9780199296866.001.0001.

Abstract:
This book provides a systematic exposition of the theory of Fourier-Mukai transforms from an algebro-geometric point of view. Assuming a basic knowledge of algebraic geometry, the book centres on the derived category of coherent sheaves on a smooth projective variety. The derived category is a subtle invariant of the isomorphism type of a variety, and its group of autoequivalences often shows a rich structure. As it turns out, and this feature is pursued throughout the book, the behaviour of the derived category is determined by the geometric properties of the canonical bundle of the variety. Notions from other areas (e.g., singular cohomology, Hodge theory, abelian varieties, K3 surfaces) are included, and full proofs and exercises are provided. The final chapter summarizes recent research directions, such as connections to orbifolds and the representation theory of finite groups via the McKay correspondence, stability conditions on triangulated categories, and the notion of the derived category of sheaves twisted by a gerbe.
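For orientation, the book's central object can be stated in one line. Given smooth projective varieties X and Y and a kernel P in the derived category of X × Y, the Fourier-Mukai transform is (standard definition, our addition, not quoted from the book):

```latex
\Phi_{\mathcal{P}}^{X \to Y} \colon D^{b}(X) \longrightarrow D^{b}(Y),
\qquad
\Phi_{\mathcal{P}}(\mathcal{E}) \;=\; \mathbf{R}q_{*}\bigl(p^{*}\mathcal{E} \otimes^{\mathbf{L}} \mathcal{P}\bigr),
```

where p and q are the projections from X × Y onto X and Y, respectively.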
2

Swendsen, Robert H. An Introduction to Statistical Mechanics and Thermodynamics. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198853237.001.0001.

Abstract:
This is a textbook on statistical mechanics and thermodynamics. It begins with the molecular nature of matter and the fact that we want to describe systems containing many (~10^20) particles. The first part of the book derives the entropy of the classical ideal gas using only classical statistical mechanics and Boltzmann’s analysis of multiple systems. The properties of this entropy are then expressed as postulates of thermodynamics in the second part of the book. From these postulates, the structure of thermodynamics is developed. Special features are systematic methods for deriving thermodynamic identities using Jacobians, the use of Legendre transforms as a basis for thermodynamic potentials, the introduction of Massieu functions to investigate negative temperatures, and an analysis of the consequences of the Nernst postulate. The third part of the book introduces the canonical and grand canonical ensembles, which are shown to facilitate calculations for many models. An explanation of irreversible phenomena that is consistent with time-reversal invariance in a closed system is presented. The fourth part of the book is devoted to quantum statistical mechanics, including black-body radiation, the harmonic solid, Bose–Einstein and Fermi–Dirac statistics, and an introduction to band theory, including metals, insulators, and semiconductors. The final chapter gives a brief introduction to the theory of phase transitions. Throughout the book, there is a strong emphasis on computational methods to make abstract concepts more concrete.
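As a one-line illustration of the Legendre-transform machinery used for thermodynamic potentials: trading the entropy S for the temperature T in the internal energy U(S, V, N) yields the Helmholtz free energy (standard textbook relation, our addition):

```latex
F(T, V, N) \;=\; U - TS,
\qquad
T = \left(\frac{\partial U}{\partial S}\right)_{V,N},
\qquad
dF = -S\,dT - P\,dV + \mu\,dN .
```

Each further potential (enthalpy, Gibbs free energy, the Massieu functions) arises from an analogous exchange of a conjugate variable pair.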

Book chapters on the topic "Invariant Feature Transform"

1

Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 609–64. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-6684-9_25.

2

Yi, Kwang Moo, Eduard Trulls, Vincent Lepetit, and Pascal Fua. "LIFT: Learned Invariant Feature Transform." In Computer Vision – ECCV 2016, 467–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46466-4_28.

3

Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 709–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05744-1_25.

4

Lim, Naeun, Daejune Ko, Kun Ha Suh, and Eui Chul Lee. "Thumb Biometric Using Scale Invariant Feature Transform." In Lecture Notes in Electrical Engineering, 85–90. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5041-1_15.

5

Nguyen, Thao, Eun-Ae Park, Jiho Han, Dong-Chul Park, and Soo-Young Min. "Object Detection Using Scale Invariant Feature Transform." In Advances in Intelligent Systems and Computing, 65–72. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-01796-9_7.

6

Jundang, Nattapong, and Sanun Srisuk. "Rotation Invariant Texture Recognition Using Discriminant Feature Transform." In Advances in Visual Computing, 440–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33191-6_43.

7

Ma, Kun, and Xiaoou Tang. "Translation-Invariant Face Feature Estimation Using Discrete Wavelet Transform." In Wavelet Analysis and Its Applications, 200–210. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45333-4_25.

8

Cui, Yan, Nils Hasler, Thorsten Thormählen, and Hans-Peter Seidel. "Scale Invariant Feature Transform with Irregular Orientation Histogram Binning." In Lecture Notes in Computer Science, 258–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02611-9_26.

9

Das, Bandita, Debabala Swain, Bunil Kumar Balabantaray, Raimoni Hansda, and Vishal Shukla. "Copy-Move Forgery Detection Using Scale Invariant Feature Transform." In Machine Learning and Information Processing, 521–32. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4859-2_51.

10

Kumar, Raman, and Uffe Kock Wiil. "Enhancing Gadgets for Blinds Through Scale Invariant Feature Transform." In Recent Advances in Computational Intelligence, 149–59. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12500-4_9.


Conference papers on the topic "Invariant Feature Transform"

1

Mohtaram, Noureddine, Amina Radgui, Guillaume Caron, and El Mustapha Mouaddib. "Amift: Affine-Mirror Invariant Feature Transform." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451720.

2

Daneshvar, Mohammad Baghery, Massoud Babaie-Zadeh, and Seyed Ghorshi. "Scale Invariant Feature Transform using oriented pattern." In 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE, 2014. http://dx.doi.org/10.1109/ccece.2014.6900952.

3

Turan, J., L' Ovsenik, and J. Turan. "Architecture of Transform Based Invariant Feature Memory." In 2007 14th International Workshop in Systems, Signals and Image Processing and 6th EURASIP Conference focused on Speech and Image Processing, Multimedia Communications and Services - EC-SIPMCS 2007. IEEE, 2007. http://dx.doi.org/10.1109/iwssip.2007.4381205.

4

Rekha, S. S., Y. J. Pavitra, and Prabhakar Mishra. "FPGA implementation of scale invariant feature transform." In 2016 International Conference on Microelectronics, Computing and Communications (MicroCom). IEEE, 2016. http://dx.doi.org/10.1109/microcom.2016.7522483.

5

Chao, Ming-Te, and Yung-Sheng Chen. "Keyboard recognition from scale-invariant feature transform." In 2017 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW). IEEE, 2017. http://dx.doi.org/10.1109/icce-china.2017.7991067.

6

Zhou, Chenglin, and Ye Yuan. "Human body features recognition using 3D scale invariant feature transform." In International Conference on Artificial Intelligence, Virtual Reality, and Visualization (AIVRV 2022), edited by Yuanchang Zhong and Chuanjun Zhao. SPIE, 2023. http://dx.doi.org/10.1117/12.2667373.

7

Yuan, Zhi, Peimin Yan, and Sheng Li. "Super resolution based on scale invariant feature transform." In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590265.

8

Hassan, Aeyman. "Scale invariant feature transform evaluation in small dataset." In 2015 16th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA). IEEE, 2015. http://dx.doi.org/10.1109/sta.2015.7505105.

9

Wang, Yuquan, Guihua Xia, Qidan Zhu, and Tong Wang. "Modified Scale Invariant Feature Transform in omnidirectional images." In 2009 International Conference on Mechatronics and Automation (ICMA). IEEE, 2009. http://dx.doi.org/10.1109/icma.2009.5246708.

10

Cruz, Jennifer C. Dela, Ramon G. Garcia, Mikko Ivan D. Avilledo, John Christopher M. Buera, Rom Vincent S. Chan, and Paul Gian T. Espana. "Automated Urine Microscopy Using Scale Invariant Feature Transform." In the 2019 9th International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3326172.3326186.


Reports of organizations on the topic "Invariant Feature Transform"

1

Lei, Lydia. Three dimensional shape retrieval using scale invariant feature transform and spatial restrictions. Gaithersburg, MD: National Institute of Standards and Technology, 2009. http://dx.doi.org/10.6028/nist.ir.7625.

2

Perdigão, Rui A. P., and Julia Hall. Spatiotemporal Causality and Predictability Beyond Recurrence Collapse in Complex Coevolutionary Systems. Meteoceanics, November 2020. http://dx.doi.org/10.46337/201111.

Abstract:
Causality and Predictability of Complex Systems pose fundamental challenges even under well-defined structural stochastic-dynamic conditions where the laws of motion and system symmetries are known. However, the edifice of complexity can be profoundly transformed by structural-functional coevolution and non-recurrent elusive mechanisms changing the very same invariants of motion that had been taken for granted. This leads to recurrence collapse and memory loss, precluding the ability of traditional stochastic-dynamic and information-theoretic metrics to provide reliable information about the non-recurrent emergence of fundamental new properties absent from the a priori kinematic geometric and statistical features. Unveiling causal mechanisms and eliciting system dynamic predictability under such challenging conditions is not only a fundamental problem in mathematical and statistical physics, but also one of critical importance to dynamic modelling, risk assessment and decision support e.g. regarding non-recurrent critical transitions and extreme events. In order to address these challenges, generalized metrics in non-ergodic information physics are hereby introduced for unveiling elusive dynamics, causality and predictability of complex dynamical systems undergoing far-from-equilibrium structural-functional coevolution. With these methodological developments at hand, hidden dynamic information is hereby brought out and explicitly quantified even beyond post-critical regime collapse, long after statistical information is lost. The added causal insights and operational predictive value are further highlighted by evaluating the new information metrics among statistically independent variables, where traditional techniques therefore find no information links. 
Notwithstanding the factorability of the distributions associated to the aforementioned independent variables, synergistic and redundant information are found to emerge from microphysical, event-scale codependencies in far-from-equilibrium nonlinear statistical mechanics. The findings are illustrated to shed light onto fundamental causal mechanisms and unveil elusive dynamic predictability of non-recurrent critical transitions and extreme events across multiscale hydro-climatic problems.