A selection of scholarly literature on the topic "Invariant Feature Transform"
Format your source in the APA, MLA, Chicago, Harvard, and other citation styles
Consult lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Invariant Feature Transform".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read the online abstract, if these are available in the metadata.
Journal articles on the topic "Invariant Feature Transform"
Lindeberg, Tony. "Scale Invariant Feature Transform." Scholarpedia 7, no. 5 (2012): 10491. http://dx.doi.org/10.4249/scholarpedia.10491.
Diaz-Escobar, Julia, Vitaly Kober, and Jose A. Gonzalez-Fraga. "LUIFT: LUminance Invariant Feature Transform." Mathematical Problems in Engineering 2018 (October 28, 2018): 1–17. http://dx.doi.org/10.1155/2018/3758102.
B.Daneshvar, M. "SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 27–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-27-2017.
Taha, Mohammed A., Hanaa M. Ahmed, and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)." Webology 19, no. 1 (January 20, 2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.
Yu, Ying, Cai Lin Dong, Bo Wen Sheng, Wei Dan Zhong, and Xiang Lin Zou. "The New Approach to the Invariant Feature Extraction Using Ridgelet Transform." Applied Mechanics and Materials 651-653 (September 2014): 2241–44. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.2241.
Chris, Lina Arlends, Bagus Mulyawan, and Agus Budi Dharmawan. "A Leukocyte Detection System Using Scale Invariant Feature Transform Method." International Journal of Computer Theory and Engineering 8, no. 1 (February 2016): 69–73. http://dx.doi.org/10.7763/ijcte.2016.v8.1022.
CHEN, G. Y., and W. F. XIE. "CONTOUR-BASED FEATURE EXTRACTION USING DUAL-TREE COMPLEX WAVELETS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1233–45. http://dx.doi.org/10.1142/s0218001407005867.
Hwang, Sung-Wook, Taekyeong Lee, Hyunbin Kim, Hyunwoo Chung, Jong Gyu Choi, and Hwanmyeong Yeo. "Classification of wood knots using artificial neural networks with texture and local feature-based image descriptors." Holzforschung 76, no. 1 (January 1, 2021): 1–13. http://dx.doi.org/10.1515/hf-2021-0051.
Huang, Yongdong, Jianwei Yang, Sansan Li, and Wenzhen Du. "Polar radius integral transform for affine invariant feature extraction." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 01 (January 2017): 1750005. http://dx.doi.org/10.1142/s0219691317500059.
Wu, Shu Guang, Shu He, and Xia Yang. "The Application of SIFT Method towards Image Registration." Advanced Materials Research 1044-1045 (October 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1392.
Повний текст джерелаДисертації з теми "Invariant Feature Transform"
May, Michael. "Data analytics and methods for improved feature selection and matching." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.
Decombas, Marc. "Compression vidéo très bas débit par analyse du contenu." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.
Повний текст джерелаThe objective of this thesis is to find new methods for semantic video compatible with a traditional encoder like H.264/AVC. The main objective is to maintain the semantic and not the global quality. A target bitrate of 300 Kb/s has been fixed for defense and security applications. To do that, a complete chain of compression has been proposed. A study and new contributions on a spatio-temporal saliency model have been done to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. Also, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used for testing the saliency model but also for video compression has been proposed, containing sequences with their manually extracted binary masks. All the different approaches have been thoroughly validated by different tests. An extension of this work on video summary application has also been proposed
Dardas, Nasser Hasan Abdel-Qader. "Real-time Hand Gesture Detection and Recognition for Human Computer Interaction." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23499.
Ljungberg, Malin. "Design of High Performance Computing Software for Genericity and Variability." Doctoral thesis, Uppsala: Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7768.
Sahin, Yavuz. "A Programming Framework To Implement Rule-based Target Detection In Images." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610213/index.pdf.
"Airport Runway Detection in High Resolution Satellite Images" and "Urban Area Detection in High Resolution Satellite Images". In these studies, linear features are used for structural decisions, and Scale Invariant Feature Transform (SIFT) features are used for testing the existence of man-made structures.
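The SIFT detection stage that this and several other entries build on starts from scale-space extrema of a difference-of-Gaussians (DoG) stack. The toy NumPy sketch below illustrates only that first step; the sigma values, threshold, and function names are illustrative assumptions, and it omits octaves, subpixel refinement, orientation assignment, and descriptor extraction.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding (toy implementation)."""
    rad = max(1, int(3 * sigma))
    x = np.arange(-rad, rad + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, rad, mode="reflect")
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.5):
    """Local extrema of the DoG stack over space and scale (no refinement)."""
    blurred = [gaussian_blur(img.astype(float), s) for s in sigmas]
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    kps = []
    for s in range(1, dogs.shape[0] - 1):
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                patch = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = dogs[s, y, x]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    kps.append((y, x, sigmas[s]))
    return kps
```

A blob of scale roughly matching one of the inner sigmas produces an extremum at that scale, which is what makes the detected keypoints scale-covariant.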
Murtin, Chloé Isabelle. "Traitement d’images de microscopie confocale 3D haute résolution du cerveau de la mouche Drosophile." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI081/document.
Although laser scanning microscopy is a powerful tool for obtaining thin optical sections, the achievable imaging depth is limited by the working distance of the microscope objective and by the image degradation caused by attenuation of both the excitation laser beam and the light emitted from the fluorescence-labeled objects. Several workaround techniques have been employed to overcome this problem, such as recording images from both sides of the sample, or progressively cutting away the sample surface. The different views must then be combined into a single volume. However, a straightforward concatenation is often not possible, because small displacements occur during the acquisition procedure, in translation along the x, y, and z axes as well as in rotation around those axes, which makes the fusion difficult. To address this problem, we implemented a new algorithm called 2D-SIFT-in-3D-Space, which uses SIFT (Scale Invariant Feature Transform) to achieve robust registration of large image stacks. Our method registers the images by correcting, separately, the rotations and translations around the three axes, using the extraction and matching of stable features in 2D cross-sections. To evaluate registration quality, we created a simulator that generates artificial image stacks mimicking laser scanning acquisitions: a mock pair of stacks in which one is derived from the other by rotation through known angles and filtering with known noise. For a precise and natural-looking concatenation of the two images, we also developed a module that progressively corrects the sample brightness and contrast depending on the sample surface. We successfully used these tools to generate three-dimensional high-resolution images of the brain of the fly Drosophila melanogaster, in particular of its octopaminergic and dopaminergic neurons and their synapses.
These monoamine neurons appear to be determinant in the correct operation of the central nervous system, and a precise, systematic analysis of their evolution and interactions is necessary to understand its mechanisms. Although no evolution over time could be highlighted through the analysis of pre-synaptic sites, our study suggests that the inactivation of one of these neuron types triggers drastic changes in the neural network.
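Once stable features have been matched between two cross-sections, the per-axis correction described in the abstract reduces to estimating a rigid 2D transform (rotation plus translation) from point correspondences. A standard least-squares solution is the Kabsch/Procrustes method, sketched here under the assumption that correspondences are already given; the function name is illustrative, and the actual thesis pipeline wraps this in per-axis iteration and outlier handling.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares R, t such that dst ≈ src @ R.T + t (Kabsch/Procrustes)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Because the rotation is recovered in closed form from an SVD, the estimate is exact for noise-free matches and least-squares optimal otherwise.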
Eskizara, Omer. "3d Geometric Hashing Using Transform Invariant Features." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610546/index.pdf.
Extracted features are grouped into triplets, and orientation-invariant descriptors are defined for each triplet. Each pose of each object is indexed in a hash table using these triplets. For scale-invariant matching, cosine similarity is applied to the scale-variant triplet variables. Tests were performed on the Stuttgart database, where 66 poses of 42 objects are stored in the hash table during training and 258 poses of 42 objects are used during testing. A recognition rate of 90.97% is achieved.
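The triplet indexing with a cosine-based scale tolerance can be illustrated schematically. In this sketch the names are hypothetical and a linear scan stands in for the actual hash-table lookup; the descriptor is simply the sorted triangle side lengths of a point triplet, which are rotation- and translation-invariant but scale-variant, so proportional vectors are matched via cosine similarity, exactly the role cosine similarity plays in the abstract.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def triplet_descriptor(tri):
    """Sorted side lengths: invariant to rotation/translation, variant to scale."""
    a, b, c = tri
    return np.sort([np.linalg.norm(a - b), np.linalg.norm(b - c),
                    np.linalg.norm(a - c)])

def build_table(objects):
    """Index every point triplet of every object (flat list as a stand-in table)."""
    table = []
    for name, pts in objects.items():
        for tri in combinations(pts, 3):
            table.append((name, triplet_descriptor(tri)))
    return table

def recognize(query_pts, table, accept=0.999):
    """Vote for objects whose stored triplets are proportional to query triplets."""
    votes = defaultdict(int)
    for tri in combinations(query_pts, 3):
        d = triplet_descriptor(tri)
        for name, stored in table:
            if cosine(d, stored) > accept:   # scale-invariant comparison
                votes[name] += 1
    return max(votes, key=votes.get) if votes else None
```

Scaling all points by a constant scales every side length by the same factor, so the cosine between descriptor vectors is unchanged, which is why the comparison tolerates unknown scale.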
Dellinger, Flora. "Descripteurs locaux pour l'imagerie radar et applications." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0037/document.
We study here the interest of local features for optical and SAR images. Because of their invariances and their dense representation, these features are of real interest for comparing satellite images acquired under different conditions. While they are easy to apply to optical images, they offer limited performance on SAR images because of the multiplicative noise. We propose here an original feature for the comparison of SAR images. This algorithm, called SAR-SIFT, relies on the same structure as the SIFT algorithm (detection of keypoints and extraction of features) and offers better performance on SAR images. To adapt these steps to multiplicative noise, we developed a differential operator, the Gradient by Ratio, which computes a gradient magnitude and orientation robust to this type of noise. This operator allows us to modify the steps of the SIFT algorithm. We also present two remote sensing applications based on local features. First, we estimate a global transformation between two SAR images with the help of SAR-SIFT; the estimation is carried out with a RANSAC algorithm, using the matched keypoints as tie points. Finally, we conducted a prospective study on the use of local features for change detection in remote sensing. The proposed method compares the densities of matched keypoints with the densities of detected keypoints in order to point out changed areas.
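The Gradient by Ratio idea, replacing intensity differences by ratios of local means so that multiplicative speckle cancels out, can be sketched in a few lines. This toy version uses simple box windows rather than the exponentially weighted means of SAR-SIFT, and the function name and window radius are illustrative assumptions.

```python
import numpy as np

def gradient_by_ratio(img, r=2):
    """Toy Gradient by Ratio: log-ratio of mean intensities on opposite
    sides of each pixel (box windows stand in for exponential weights)."""
    img = img.astype(float) + 1e-6          # guard against division by zero
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for y in range(r, H - r):
        for x in range(r, W - r):
            m_left = img[y - r:y + r + 1, x - r:x].mean()
            m_right = img[y - r:y + r + 1, x + 1:x + r + 1].mean()
            m_up = img[y - r:y, x - r:x + r + 1].mean()
            m_down = img[y + 1:y + r + 1, x - r:x + r + 1].mean()
            gx[y, x] = np.log(m_right / m_left)
            gy[y, x] = np.log(m_down / m_up)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

Because a multiplicative gain applied to the whole image cancels in each ratio, the resulting magnitude and orientation are essentially unchanged, which is the robustness property exploited to rebuild the SIFT steps for SAR data.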
Hejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.
Saravi, Sara. "Use of Coherent Point Drift in computer vision applications." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.
Books on the topic "Invariant Feature Transform"
Huybrechts, D. Fourier-Mukai Transforms in Algebraic Geometry. Oxford University Press, 2007. http://dx.doi.org/10.1093/acprof:oso/9780199296866.001.0001.
Swendsen, Robert H. An Introduction to Statistical Mechanics and Thermodynamics. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198853237.001.0001.
Book chapters on the topic "Invariant Feature Transform"
Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 609–64. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-6684-9_25.
Yi, Kwang Moo, Eduard Trulls, Vincent Lepetit, and Pascal Fua. "LIFT: Learned Invariant Feature Transform." In Computer Vision – ECCV 2016, 467–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46466-4_28.
Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 709–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05744-1_25.
Lim, Naeun, Daejune Ko, Kun Ha Suh, and Eui Chul Lee. "Thumb Biometric Using Scale Invariant Feature Transform." In Lecture Notes in Electrical Engineering, 85–90. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5041-1_15.
Nguyen, Thao, Eun-Ae Park, Jiho Han, Dong-Chul Park, and Soo-Young Min. "Object Detection Using Scale Invariant Feature Transform." In Advances in Intelligent Systems and Computing, 65–72. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-01796-9_7.
Jundang, Nattapong, and Sanun Srisuk. "Rotation Invariant Texture Recognition Using Discriminant Feature Transform." In Advances in Visual Computing, 440–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33191-6_43.
Ma, Kun, and Xiaoou Tang. "Translation-Invariant Face Feature Estimation Using Discrete Wavelet Transform." In Wavelet Analysis and Its Applications, 200–210. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45333-4_25.
Cui, Yan, Nils Hasler, Thorsten Thormählen, and Hans-Peter Seidel. "Scale Invariant Feature Transform with Irregular Orientation Histogram Binning." In Lecture Notes in Computer Science, 258–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02611-9_26.
Das, Bandita, Debabala Swain, Bunil Kumar Balabantaray, Raimoni Hansda, and Vishal Shukla. "Copy-Move Forgery Detection Using Scale Invariant Feature Transform." In Machine Learning and Information Processing, 521–32. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4859-2_51.
Kumar, Raman, and Uffe Kock Wiil. "Enhancing Gadgets for Blinds Through Scale Invariant Feature Transform." In Recent Advances in Computational Intelligence, 149–59. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12500-4_9.
Conference papers on the topic "Invariant Feature Transform"
Mohtaram, Noureddine, Amina Radgui, Guillaume Caron, and El Mustapha Mouaddib. "Amift: Affine-Mirror Invariant Feature Transform." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451720.
Daneshvar, Mohammad Baghery, Massoud Babaie-Zadeh, and Seyed Ghorshi. "Scale Invariant Feature Transform using oriented pattern." In 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE, 2014. http://dx.doi.org/10.1109/ccece.2014.6900952.
Turan, J., L' Ovsenik, and J. Turan. "Architecture of Transform Based Invariant Feature Memory." In 2007 14th International Workshop in Systems, Signals and Image Processing and 6th EURASIP Conference focused on Speech and Image Processing, Multimedia Communications and Services - EC-SIPMCS 2007. IEEE, 2007. http://dx.doi.org/10.1109/iwssip.2007.4381205.
Rekha, S. S., Y. J. Pavitra, and Prabhakar Mishra. "FPGA implementation of scale invariant feature transform." In 2016 International Conference on Microelectronics, Computing and Communications (MicroCom). IEEE, 2016. http://dx.doi.org/10.1109/microcom.2016.7522483.
Chao, Ming-Te, and Yung-Sheng Chen. "Keyboard recognition from scale-invariant feature transform." In 2017 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW). IEEE, 2017. http://dx.doi.org/10.1109/icce-china.2017.7991067.
Zhou, Chenglin, and Ye Yuan. "Human body features recognition using 3D scale invariant feature transform." In International Conference on Artificial Intelligence, Virtual Reality, and Visualization (AIVRV 2022), edited by Yuanchang Zhong and Chuanjun Zhao. SPIE, 2023. http://dx.doi.org/10.1117/12.2667373.
Zhi Yuan, Peimin Yan, and Sheng Li. "Super resolution based on scale invariant feature transform." In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590265.
Hassan, Aeyman. "Scale invariant feature transform evaluation in small dataset." In 2015 16th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA). IEEE, 2015. http://dx.doi.org/10.1109/sta.2015.7505105.
Yuquan Wang, Guihua Xia, Qidan Zhu, and Tong Wang. "Modified Scale Invariant Feature Transform in omnidirectional images." In 2009 International Conference on Mechatronics and Automation (ICMA). IEEE, 2009. http://dx.doi.org/10.1109/icma.2009.5246708.
Cruz, Jennifer C. Dela, Ramon G. Garcia, Mikko Ivan D. Avilledo, John Christopher M. Buera, Rom Vincent S. Chan, and Paul Gian T. Espana. "Automated Urine Microscopy Using Scale Invariant Feature Transform." In the 2019 9th International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3326172.3326186.
Reports of organizations on the topic "Invariant Feature Transform"
Lei, Lydia. Three dimensional shape retrieval using scale invariant feature transform and spatial restrictions. Gaithersburg, MD: National Institute of Standards and Technology, 2009. http://dx.doi.org/10.6028/nist.ir.7625.
Perdigão, Rui A. P., and Julia Hall. Spatiotemporal Causality and Predictability Beyond Recurrence Collapse in Complex Coevolutionary Systems. Meteoceanics, November 2020. http://dx.doi.org/10.46337/201111.