Ready-made bibliography on the topic "Invariant Feature Transform"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Invariant Feature Transform".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read the online annotation of the work, if the relevant parameters are available in its metadata.
Journal articles on the topic "Invariant Feature Transform"
Lindeberg, Tony. "Scale Invariant Feature Transform". Scholarpedia 7, no. 5 (2012): 10491. http://dx.doi.org/10.4249/scholarpedia.10491.
Diaz-Escobar, Julia, Vitaly Kober and Jose A. Gonzalez-Fraga. "LUIFT: LUminance Invariant Feature Transform". Mathematical Problems in Engineering 2018 (28.10.2018): 1–17. http://dx.doi.org/10.1155/2018/3758102.
B.Daneshvar, M. "SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (23.08.2017): 27–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-27-2017.
Taha, Mohammed A., Hanaa M. Ahmed and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)". Webology 19, no. 1 (20.01.2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.
Yu, Ying, Cai Lin Dong, Bo Wen Sheng, Wei Dan Zhong and Xiang Lin Zou. "The New Approach to the Invariant Feature Extraction Using Ridgelet Transform". Applied Mechanics and Materials 651-653 (September 2014): 2241–44. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.2241.
Chris, Lina Arlends, Bagus Mulyawan and Agus Budi Dharmawan. "A Leukocyte Detection System Using Scale Invariant Feature Transform Method". International Journal of Computer Theory and Engineering 8, no. 1 (February 2016): 69–73. http://dx.doi.org/10.7763/ijcte.2016.v8.1022.
CHEN, G. Y., and W. F. XIE. "CONTOUR-BASED FEATURE EXTRACTION USING DUAL-TREE COMPLEX WAVELETS". International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1233–45. http://dx.doi.org/10.1142/s0218001407005867.
Hwang, Sung-Wook, Taekyeong Lee, Hyunbin Kim, Hyunwoo Chung, Jong Gyu Choi and Hwanmyeong Yeo. "Classification of wood knots using artificial neural networks with texture and local feature-based image descriptors". Holzforschung 76, no. 1 (1.01.2021): 1–13. http://dx.doi.org/10.1515/hf-2021-0051.
Huang, Yongdong, Jianwei Yang, Sansan Li and Wenzhen Du. "Polar radius integral transform for affine invariant feature extraction". International Journal of Wavelets, Multiresolution and Information Processing 15, no. 01 (January 2017): 1750005. http://dx.doi.org/10.1142/s0219691317500059.
Wu, Shu Guang, Shu He and Xia Yang. "The Application of SIFT Method towards Image Registration". Advanced Materials Research 1044-1045 (October 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1392.
Pełny tekst źródłaRozprawy doktorskie na temat "Invariant Feature Transform"
May, Michael. "Data analytics and methods for improved feature selection and matching". Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.
Pełny tekst źródłaDecombas, Marc. "Compression vidéo très bas débit par analyse du contenu". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.
The objective of this thesis is to find new methods for semantic video compression compatible with a traditional encoder such as H.264/AVC. The main goal is to preserve the semantics rather than the global quality. A target bitrate of 300 kb/s has been set for defense and security applications. To this end, a complete compression chain has been proposed. A study of, and new contributions to, a spatio-temporal saliency model have been carried out to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. In addition, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used both for testing the saliency model and for video compression has been proposed, containing sequences with their manually extracted binary masks. All the different approaches have been thoroughly validated by various tests. An extension of this work to video summarization has also been proposed.
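For readers curious how a metric of that kind might be assembled, here is a minimal sketch that combines a SIFT keypoint survival ratio with SSIM using OpenCV and scikit-image. It is not the metric defined in the thesis; the equal weighting, the 0.75 ratio-test threshold, and the function name are illustrative assumptions.

```python
# Hedged sketch: object-quality score mixing SIFT keypoint matching with SSIM.
# Not the thesis metric; the 0.5/0.5 weighting and the 0.75 ratio-test
# threshold are illustrative assumptions.
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def object_quality(ref_gray: np.ndarray, dec_gray: np.ndarray) -> float:
    """Compare an 8-bit grayscale object region of a decoded frame (dec_gray)
    against the same region in the reference frame (ref_gray)."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_dec, des_dec = sift.detectAndCompute(dec_gray, None)

    # Fraction of reference keypoints that survive compression,
    # judged with Lowe's ratio test on 2-nearest-neighbour matches.
    sift_score = 0.0
    if des_ref is not None and des_dec is not None and len(des_dec) >= 2:
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_ref, des_dec, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        sift_score = min(len(good) / len(kp_ref), 1.0)

    # Structural similarity over the same region.
    ssim_score = structural_similarity(ref_gray, dec_gray, data_range=255)

    return 0.5 * sift_score + 0.5 * ssim_score
```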
Dardas, Nasser Hasan Abdel-Qader. "Real-time Hand Gesture Detection and Recognition for Human Computer Interaction". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23499.
Ljungberg, Malin. "Design of High Performance Computing Software for Genericity and Variability". Doctoral thesis, Uppsala: Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7768.
Pełny tekst źródłaSahin, Yavuz. "A Programming Framework To Implement Rule-based Target Detection In Images". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610213/index.pdf.
… "Airport Runway Detection in High Resolution Satellite Images" and "Urban Area Detection in High Resolution Satellite Images". In these studies, linear features are used for structural decisions, and Scale Invariant Feature Transform (SIFT) features are used for testing the existence of man-made structures.
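As a rough illustration of how line segments and SIFT keypoints can be gathered as evidence of man-made structures, the sketch below uses OpenCV on a grayscale satellite tile. The thresholds and the toy decision rule are assumptions for illustration, not the rule-based framework developed in the thesis.

```python
# Hedged sketch: gather line-segment and SIFT-keypoint evidence for the
# presence of man-made structures in a grayscale satellite image tile.
# Thresholds and the decision rule below are illustrative assumptions.
import cv2
import numpy as np

def man_made_evidence(tile_gray: np.ndarray) -> dict:
    # Linear features: Canny edges followed by a probabilistic Hough transform.
    edges = cv2.Canny(tile_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    n_lines = 0 if lines is None else len(lines)

    # Local features: SIFT keypoints, whose density tends to be higher
    # over built-up areas than over homogeneous terrain.
    keypoints = cv2.SIFT_create().detect(tile_gray, None)

    return {
        "line_segments": n_lines,
        "sift_keypoints": len(keypoints),
        # Toy decision rule, purely for illustration.
        "likely_man_made": n_lines > 10 and len(keypoints) > 100,
    }
```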
Murtin, Chloé Isabelle. "Traitement d’images de microscopie confocale 3D haute résolution du cerveau de la mouche Drosophile". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI081/document.
Although laser scanning microscopy is a powerful tool for obtaining thin optical sections, the achievable imaging depth is limited by the working distance of the microscope objective and by the image degradation caused by the attenuation of both the excitation laser beam and the light emitted from the fluorescence-labeled objects. Several workaround techniques have been employed to overcome this problem, such as recording images from both sides of the sample or progressively cutting off the sample surface. The different views must then be combined into a single volume. However, a straightforward concatenation is often not possible because of the small misalignments that occur during the acquisition procedure, not only in translation along the x, y and z axes but also in rotation around those axes, which makes the fusion difficult. To address this problem we implemented a new algorithm called 2D-SIFT-in-3D-Space, which uses SIFT (Scale Invariant Feature Transform) to achieve a robust registration of large image stacks. Our method registers the images by correcting rotations and translations around the three axes separately, using the extraction and matching of stable features in 2D cross-sections. In order to evaluate the registration quality, we created a simulator that generates artificial images mimicking laser scanning image stacks: it produces a mock pair of stacks in which one is derived from the other by a rotation with known angles and filtering with a known noise. For a precise and natural-looking concatenation of the two images, we also developed a module that progressively corrects the sample brightness and contrast depending on the sample surface. We successfully used these tools to generate three-dimensional high-resolution images of the brain of the fly Drosophila melanogaster, in particular its octopaminergic and dopaminergic neurons and their synapses. These monoamine neurons appear to be decisive for the correct operation of the central nervous system, and a precise and systematic analysis of their evolution and interaction is necessary to understand its mechanisms. While an evolution over time could not be highlighted through the analysis of pre-synaptic sites, our study suggests that the inactivation of one of these neuron types triggers drastic changes in the neural network.
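A minimal sketch of the registration idea described in this abstract, matching SIFT features between two corresponding 2D cross-sections and estimating an in-plane rigid transform with RANSAC, is given below using OpenCV. It is a simplified stand-in for 2D-SIFT-in-3D-Space, not the actual implementation; the ratio-test threshold and reprojection tolerance are assumptions.

```python
# Hedged sketch: estimate the in-plane rotation and translation between two
# corresponding 2D cross-sections of two image stacks via SIFT + RANSAC.
# Simplified stand-in for the 2D-SIFT-in-3D-Space approach, not its code.
import cv2
import numpy as np

def register_slices(slice_a: np.ndarray, slice_b: np.ndarray):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(slice_a, None)
    kp_b, des_b = sift.detectAndCompute(slice_b, None)

    # Match descriptors and keep the stable ones (Lowe's ratio test).
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 3:
        raise RuntimeError("not enough stable matches between the slices")

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # Similarity transform (rotation, translation, uniform scale) with RANSAC.
    M, _inliers = cv2.estimateAffinePartial2D(pts_a, pts_b,
                                              method=cv2.RANSAC,
                                              ransacReprojThreshold=3.0)
    if M is None:
        raise RuntimeError("RANSAC failed to estimate a transform")

    angle = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))  # in-plane rotation
    tx, ty = float(M[0, 2]), float(M[1, 2])                  # in-plane translation
    return M, angle, (tx, ty)
```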
Eskizara, Omer. "3d Geometric Hashing Using Transform Invariant Features". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610546/index.pdf.
… shape. Extracted features are grouped into triplets, and orientation-invariant descriptors are defined for each triplet. Each pose of each object is indexed in a hash table using these triplets. For scale-invariant matching, cosine similarity is applied to the scale-variant triplet variables. Tests were performed on the Stuttgart database, where 66 poses of 42 objects are stored in the hash table during training and 258 poses of 42 objects are used during testing. A recognition rate of 90.97% is achieved.
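To make the triplet-indexing idea in this abstract concrete, here is a toy sketch in NumPy: triplets of 3D feature points are described by their sorted pairwise distances, indexed in a hash table under a scale-normalized key, and compared with cosine similarity at query time. The descriptor, binning, and thresholds are assumptions, not the scheme evaluated on the Stuttgart database.

```python
# Toy sketch of triplet-based 3D geometric hashing with cosine-similarity
# matching. Descriptor, binning, and thresholds are illustrative assumptions.
from collections import defaultdict
from itertools import combinations
import numpy as np

def triplet_descriptor(p, q, r):
    """Orientation-invariant description of a 3-point configuration:
    its sorted pairwise distances."""
    return np.sort([np.linalg.norm(p - q),
                    np.linalg.norm(q - r),
                    np.linalg.norm(p - r)])

def hash_key(desc, bins=10):
    """Scale-invariant bin: distances normalized by the largest one."""
    return tuple(np.round(desc / (desc[-1] + 1e-12) * bins).astype(int))

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def build_table(models):
    """models: dict of object name -> (N, 3) array of 3D feature points."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j, k in combinations(range(len(pts)), 3):
            desc = triplet_descriptor(pts[i], pts[j], pts[k])
            table[hash_key(desc)].append((name, desc))
    return table

def recognize(query_pts, table, sim_threshold=0.999):
    """Vote for stored objects whose triplets fall in the same bin and whose
    raw (scale-variant) distance vectors are nearly collinear with the
    query's, which is what the cosine comparison checks."""
    votes = defaultdict(int)
    for i, j, k in combinations(range(len(query_pts)), 3):
        desc = triplet_descriptor(query_pts[i], query_pts[j], query_pts[k])
        for name, stored in table.get(hash_key(desc), []):
            if cosine_similarity(desc, stored) >= sim_threshold:
                votes[name] += 1
    return max(votes, key=votes.get) if votes else None
```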
Dellinger, Flora. "Descripteurs locaux pour l'imagerie radar et applications". Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0037/document.
We study here the interest of local features for optical and SAR images. These features, thanks to their invariances and their dense representation, are of real interest for the comparison of satellite images acquired under different conditions. While it is easy to apply them to optical images, they offer limited performance on SAR images because of their multiplicative noise. We propose here an original feature for the comparison of SAR images. This algorithm, called SAR-SIFT, relies on the same structure as the SIFT algorithm (detection of keypoints and extraction of features) and offers better performance on SAR images. To adapt these steps to multiplicative noise, we have developed a differential operator, the Gradient by Ratio, allowing the computation of a gradient magnitude and orientation that are robust to this type of noise. This operator allows us to modify the steps of the SIFT algorithm. We also present two remote sensing applications based on local features. First, we estimate a global transformation between two SAR images with the help of SAR-SIFT. The estimation is carried out with a RANSAC algorithm, using the matched keypoints as tie points. Finally, we have conducted a prospective study on the use of local features for change detection in remote sensing. The proposed method consists in comparing the densities of matched keypoints to the densities of detected keypoints, in order to point out changed areas.
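The central ingredient described above is the Gradient by Ratio. The sketch below shows the general idea (gradients derived from log-ratios of local means on opposite sides of each pixel, so that multiplicative speckle cancels) using NumPy and SciPy. It uses plain box averages for brevity; the published SAR-SIFT operator uses exponentially weighted means, so treat this as an approximation.

```python
# Hedged sketch of a ratio-based gradient for SAR amplitude images.
# The published Gradient by Ratio uses exponentially weighted local means;
# plain box averages (and wrap-around borders) are used here for brevity.
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_by_ratio(img: np.ndarray, half_window: int = 4):
    """Return a gradient magnitude and orientation that are robust to the
    multiplicative (speckle) noise of SAR images."""
    img = img.astype(np.float64) + 1e-6      # avoid division by zero
    w = half_window

    # Local means, then shifted so each pixel sees the mean of a window
    # lying on either side of it along x and along y.
    mean = uniform_filter(img, size=2 * w + 1)
    right, left = np.roll(mean, -w, axis=1), np.roll(mean, w, axis=1)
    down, up = np.roll(mean, -w, axis=0), np.roll(mean, w, axis=0)

    # Log-ratios act like derivatives, but a common multiplicative factor
    # cancels in the ratio, which is what makes them speckle-robust.
    gx = np.log(right / left)
    gy = np.log(down / up)

    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return magnitude, orientation
```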
Hejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.
Saravi, Sara. "Use of Coherent Point Drift in computer vision applications". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.
Pełny tekst źródłaKsiążki na temat "Invariant Feature Transform"
Huybrechts, D. Fourier-Mukai Transforms in Algebraic Geometry. Oxford University Press, 2007. http://dx.doi.org/10.1093/acprof:oso/9780199296866.001.0001.
Swendsen, Robert H. An Introduction to Statistical Mechanics and Thermodynamics. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198853237.001.0001.
Book chapters on the topic "Invariant Feature Transform"
Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)". In Texts in Computer Science, 609–64. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-6684-9_25.
Yi, Kwang Moo, Eduard Trulls, Vincent Lepetit and Pascal Fua. "LIFT: Learned Invariant Feature Transform". In Computer Vision – ECCV 2016, 467–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46466-4_28.
Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)". In Texts in Computer Science, 709–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05744-1_25.
Lim, Naeun, Daejune Ko, Kun Ha Suh and Eui Chul Lee. "Thumb Biometric Using Scale Invariant Feature Transform". In Lecture Notes in Electrical Engineering, 85–90. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5041-1_15.
Nguyen, Thao, Eun-Ae Park, Jiho Han, Dong-Chul Park and Soo-Young Min. "Object Detection Using Scale Invariant Feature Transform". In Advances in Intelligent Systems and Computing, 65–72. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-01796-9_7.
Jundang, Nattapong, and Sanun Srisuk. "Rotation Invariant Texture Recognition Using Discriminant Feature Transform". In Advances in Visual Computing, 440–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33191-6_43.
Ma, Kun, and Xiaoou Tang. "Translation-Invariant Face Feature Estimation Using Discrete Wavelet Transform". In Wavelet Analysis and Its Applications, 200–210. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45333-4_25.
Cui, Yan, Nils Hasler, Thorsten Thormählen and Hans-Peter Seidel. "Scale Invariant Feature Transform with Irregular Orientation Histogram Binning". In Lecture Notes in Computer Science, 258–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02611-9_26.
Das, Bandita, Debabala Swain, Bunil Kumar Balabantaray, Raimoni Hansda and Vishal Shukla. "Copy-Move Forgery Detection Using Scale Invariant Feature Transform". In Machine Learning and Information Processing, 521–32. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4859-2_51.
Kumar, Raman, and Uffe Kock Wiil. "Enhancing Gadgets for Blinds Through Scale Invariant Feature Transform". In Recent Advances in Computational Intelligence, 149–59. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12500-4_9.
Conference abstracts on the topic "Invariant Feature Transform"
Mohtaram, Noureddine, Amina Radgui, Guillaume Caron i El Mustapha Mouaddib. "Amift: Affine-Mirror Invariant Feature Transform". W 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451720.
Daneshvar, Mohammad Baghery, Massoud Babaie-Zadeh and Seyed Ghorshi. "Scale Invariant Feature Transform using oriented pattern". In 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE, 2014. http://dx.doi.org/10.1109/ccece.2014.6900952.
Turan, J., L' Ovsenik and J. Turan. "Architecture of Transform Based Invariant Feature Memory". In 2007 14th International Workshop in Systems, Signals and Image Processing and 6th EURASIP Conference focused on Speech and Image Processing, Multimedia Communications and Services - EC-SIPMCS 2007. IEEE, 2007. http://dx.doi.org/10.1109/iwssip.2007.4381205.
Rekha, S. S., Y. J. Pavitra and Prabhakar Mishra. "FPGA implementation of scale invariant feature transform". In 2016 International Conference on Microelectronics, Computing and Communications (MicroCom). IEEE, 2016. http://dx.doi.org/10.1109/microcom.2016.7522483.
Chao, Ming-Te, and Yung-Sheng Chen. "Keyboard recognition from scale-invariant feature transform". In 2017 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW). IEEE, 2017. http://dx.doi.org/10.1109/icce-china.2017.7991067.
Zhou, Chenglin, and Ye Yuan. "Human body features recognition using 3D scale invariant feature transform". In International Conference on Artificial Intelligence, Virtual Reality, and Visualization (AIVRV 2022), edited by Yuanchang Zhong and Chuanjun Zhao. SPIE, 2023. http://dx.doi.org/10.1117/12.2667373.
Zhi Yuan, Peimin Yan and Sheng Li. "Super resolution based on scale invariant feature transform". In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590265.
Hassan, Aeyman. "Scale invariant feature transform evaluation in small dataset". In 2015 16th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA). IEEE, 2015. http://dx.doi.org/10.1109/sta.2015.7505105.
Yuquan Wang, Guihua Xia, Qidan Zhu and Tong Wang. "Modified Scale Invariant Feature Transform in omnidirectional images". In 2009 International Conference on Mechatronics and Automation (ICMA). IEEE, 2009. http://dx.doi.org/10.1109/icma.2009.5246708.
Cruz, Jennifer C. Dela, Ramon G. Garcia, Mikko Ivan D. Avilledo, John Christopher M. Buera, Rom Vincent S. Chan and Paul Gian T. Espana. "Automated Urine Microscopy Using Scale Invariant Feature Transform". In the 2019 9th International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3326172.3326186.
Organizational reports on the topic "Invariant Feature Transform"
Lei, Lydia. Three dimensional shape retrieval using scale invariant feature transform and spatial restrictions. Gaithersburg, MD: National Institute of Standards and Technology, 2009. http://dx.doi.org/10.6028/nist.ir.7625.
Pełny tekst źródłaPerdigão, Rui A. P., i Julia Hall. Spatiotemporal Causality and Predictability Beyond Recurrence Collapse in Complex Coevolutionary Systems. Meteoceanics, listopad 2020. http://dx.doi.org/10.46337/201111.