Academic literature on the topic 'Sketch-based Image retrieval'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sketch-based Image retrieval.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Sketch-based Image retrieval"
Sivasankaran, Deepika, Sai Seena P, Rajesh R, and Madheswari Kanmani. "Sketch Based Image Retrieval using Deep Learning Based Machine Learning." International Journal of Engineering and Advanced Technology 10, no. 5 (June 30, 2021): 79–86. http://dx.doi.org/10.35940/ijeat.e2622.0610521.
Reddy, N. Raghu Ram, Gundreddy Suresh Reddy, and M. Narayana. "Color Sketch Based Image Retrieval." International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 03, no. 09 (September 20, 2014): 12179–85. http://dx.doi.org/10.15662/ijareeie.2014.0309054.
Abdul Baqi, Huda Abdulaali, Ghazali Sulong, Siti Zaiton Mohd Hashim, and Zinah S. Abdul Jabar. "Innovative Sketch Board Mining for Online Image Retrieval." Modern Applied Science 11, no. 3 (November 22, 2016): 13. http://dx.doi.org/10.5539/mas.v11n3p13.
Lei, Haopeng, Simin Chen, Mingwen Wang, Xiangjian He, Wenjing Jia, and Sibo Li. "A New Algorithm for Sketch-Based Fashion Image Retrieval Based on Cross-Domain Transformation." Wireless Communications and Mobile Computing 2021 (May 25, 2021): 1–14. http://dx.doi.org/10.1155/2021/5577735.
Saavedra, Jose M., and Benjamin Bustos. "Sketch-based image retrieval using keyshapes." Multimedia Tools and Applications 73, no. 3 (September 7, 2013): 2033–62. http://dx.doi.org/10.1007/s11042-013-1689-0.
Lei, Haopeng, Yugen Yi, Yuhua Li, Guoliang Luo, and Mingwen Wang. "A new clothing image retrieval algorithm based on sketch component segmentation in mobile visual sensors." International Journal of Distributed Sensor Networks 14, no. 11 (November 2018): 155014771881562. http://dx.doi.org/10.1177/1550147718815627.
Ikeda, Takashi, and Masafumi Hagiwara. "Content-Based Image Retrieval System Using Neural Networks." International Journal of Neural Systems 10, no. 05 (October 2000): 417–24. http://dx.doi.org/10.1142/s0129065700000326.
Adimas, Adimas, and Suhendro Y. Irianto. "Image Sketch Based Criminal Face Recognition Using Content Based Image Retrieval." Scientific Journal of Informatics 8, no. 2 (November 30, 2021): 176–82. http://dx.doi.org/10.15294/sji.v8i2.27865.
Zhang, Xianlin, Xueming Li, Xuewei Li, and Mengling Shen. "Better freehand sketch synthesis for sketch-based image retrieval: Beyond image edges." Neurocomputing 322 (December 2018): 38–46. http://dx.doi.org/10.1016/j.neucom.2018.09.047.
Christanti Mawardi, Viny, Yoferen Yoferen, and Stéphane Bressan. "Sketch-Based Image Retrieval with Histogram of Oriented Gradients and Hierarchical Centroid Methods." E3S Web of Conferences 188 (2020): 00026. http://dx.doi.org/10.1051/e3sconf/202018800026.
Full textDissertations / Theses on the topic "Sketch-based Image retrieval"
Saavedra Rondo, José Manuel. "Image Descriptions for Sketch Based Image Retrieval." Thesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/112670.
Owing to the massive use of the Internet and the proliferation of devices capable of generating multimedia information, content-based image search and retrieval have become active research areas in computer science. However, content-based search requires an example image as the query, which can often be a serious problem that undermines the usability of the application. Indeed, users commonly turn to an image search engine precisely because they do not have the desired image. An alternative way of expressing what the user intends to find is a hand drawing composed simply of strokes, a sketch, which leads to sketch-based image retrieval. This kind of query is further supported by the increased accessibility of touch devices, which makes such queries easy to formulate. This work proposes two methods for sketch-based image retrieval. The first is a global method that computes a histogram of orientations using squared gradients; it outperforms other global methods. At present, no methods exploit the main characteristic of sketches: structural information. Sketches lack color and texture and mainly represent the structure of the objects being sought. Accordingly, a second method is proposed that represents images structurally through a set of primitive shapes called keyshapes. The results of our proposal have been compared with those of current methods, showing a significant increase in retrieval effectiveness. Moreover, since the keyshape-based proposal exploits a novel feature, it can be combined with other techniques to further increase effectiveness; combining it with the Bag of Words method of Eitz et al. yields an effectiveness gain of almost 22%. Finally, to demonstrate the potential of the proposal, two applications are shown. The first targets 3D model retrieval using a hand drawing as the query, where our results are competitive with the state of the art. The second exploits structure-based object search to improve segmentation; in particular, we show a hand segmentation application in semi-controlled environments.
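The first method in the abstract above, a global histogram of orientations computed from squared gradients, can be illustrated with a short sketch. This is a minimal approximation under stated assumptions (the 36-bin quantization, magnitude weighting, and L2 normalization are choices made here for illustration, not details taken from the thesis):

import numpy as np

def squared_gradient_orientation_histogram(gray, bins=36):
    """Global descriptor: histogram of local orientations derived from
    squared (double-angle) gradients, weighted by gradient magnitude.
    `gray` is a 2-D float array; sketches are typically binary strokes."""
    gy, gx = np.gradient(gray.astype(np.float64))
    # Squaring the complex gradient doubles the angle, so orientations that
    # differ by 180 degrees (the two sides of a thin stroke) vote together.
    gxx = gx * gx - gy * gy
    gxy = 2.0 * gx * gy
    angle = 0.5 * np.arctan2(gxy, gxx)            # orientation in [-pi/2, pi/2]
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(angle, bins=bins,
                           range=(-np.pi / 2, np.pi / 2), weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

Descriptors computed this way for a query sketch and for edge maps of database images can then be ranked by Euclidean or cosine distance.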
Dey, Sounak. "Mapping between Images and Conceptual Spaces: Sketch-based Image Retrieval." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671082.
The deluge of visual content on the Internet – from user-generated content to commercial image collections – motivates intuitive new methods for searching digital image content: how can we find certain images in a database of millions? Sketch-based image retrieval (SBIR) is an emerging research topic in which a free-hand drawing can be used to visually query photographic images. SBIR is aligned to emerging trends for visual content consumption on mobile touch-screen based devices, for which gestural interactions such as sketch are a natural alternative to textual input. This thesis presents several contributions to the literature of SBIR. First, we propose a cross-modal learning framework that maps both sketches and text into a joint embedding space invariant to depictive style, while preserving semantics. The resulting embedding enables direct comparison and search between sketches/text and images and is based upon a multi-branch convolutional neural network (CNN) trained using unique training schemes. The deeply learned embedding is shown to yield state-of-the-art retrieval performance on several SBIR benchmarks. Second, we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities into a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian Algorithm using different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of the availability of the co-occurrence of different objects in the training set. We validate the performance of our approach on standard single/multi-object datasets, showing state-of-the-art performance on every SBIR dataset. Third, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to conduct retrieval of photos from unseen categories. We importantly advance prior art by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap between amateur sketch and photo, and (ii) the necessity of moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, that consists of 330,000 sketches and 204,000 photos spanning 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, instead of ones included in existing datasets that can often be semi-photorealistic. We then formulate a ZS-SBIR framework to jointly model sketches and photos into a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, state-of-the-art retrieval performance on existing datasets can already be achieved using a reduced version of our model.
We further demonstrate the superior performance of our full model by comparing with a number of alternatives on the newly proposed dataset.
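The recurring idea in this abstract, mapping sketches and photos into a common embedding space, is often realized with a multi-branch network trained with a ranking loss. The following PyTorch sketch is only a toy illustration under that assumption; the tiny encoders, the 128-dimensional embedding, and the triplet margin are placeholders, not the architecture of the thesis:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Tiny convolutional branch mapping a 3x64x64 input to an L2-normalized embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)

# Separate branches for the sketch and photo domains, sharing one output space.
sketch_enc, photo_enc = Encoder(), Encoder()
triplet = nn.TripletMarginLoss(margin=0.2)

# One hypothetical training step: anchor sketch, matching photo, non-matching photo.
sketch = torch.randn(8, 3, 64, 64)
photo_pos = torch.randn(8, 3, 64, 64)
photo_neg = torch.randn(8, 3, 64, 64)
loss = triplet(sketch_enc(sketch), photo_enc(photo_pos), photo_enc(photo_neg))
loss.backward()

At retrieval time, all photos are embedded once with the photo branch, a query sketch is embedded with the sketch branch, and nearest neighbours in the shared space are returned.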
Bui, Tu. "Sketch based image retrieval on big visual data." Thesis, University of Surrey, 2019. http://epubs.surrey.ac.uk/850099/.
Kamvysselis, Manolis, and Ovidiu Marina. "Imagina: a cognitive abstraction approach to sketch-based image retrieval." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/16724.
Includes bibliographical references (leaves 151-157).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
As digital media become more popular, corporations and individuals gather an increasingly large number of digital images. As a collection grows to more than a few hundred images, the need for search becomes crucial. This thesis addresses the problem of retrieving from a small database a particular image previously seen by the user. It combines current findings in cognitive science with the knowledge of previous image retrieval systems to present a novel approach to content-based image retrieval and indexing. We focus on algorithms which abstract away information from images in the same terms that a viewer abstracts information from an image. The focus in Imagina is on the matching of regions, instead of the matching of global measures. Multiple representations, focusing on shape and color, are used for every region. The matches of individual regions are combined using a saliency metric that accounts for differences in the distributions of metrics. Region matching, along with configuration, determines the overall match between a query and an image.
by Manolis Kamvysselis and Ovidiu Marina.
S.B. and M.Eng.
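The Imagina abstract above combines per-region matches with a saliency metric that accounts for differences in the distributions of the individual metrics. A toy numpy sketch of that general idea (using z-score normalization and a plain average, which are illustrative choices rather than the thesis's actual formulation) might look like this:

import numpy as np

def combine_region_matches(scores):
    """scores[i, j]: similarity of the query to database image j under region
    metric i (e.g. one row for shape, one for color). Each metric is z-scored
    across the database so metrics with very different score distributions
    contribute comparably, then the normalized scores are averaged."""
    scores = np.asarray(scores, dtype=np.float64)
    mu = scores.mean(axis=1, keepdims=True)
    sigma = scores.std(axis=1, keepdims=True) + 1e-9
    salient = (scores - mu) / sigma        # how much each match stands out
    return salient.mean(axis=0)            # one overall score per database image

# Example: two region metrics over five database images.
overall = combine_region_matches([[0.9, 0.2, 0.1, 0.3, 0.2],
                                  [0.5, 0.4, 0.6, 0.5, 0.4]])
best = int(np.argmax(overall))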
Tseng, Kai-Yu (曾開瑜). "Sketch-based Image Retrieval on Mobile Devices Using Compact Hash Bits." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/86812733175523374451.
Full text國立臺灣大學
資訊網路與多媒體研究所
100
With the advance of science and technology, touch panels on mobile devices have provided a good platform for mobile sketch search. Moreover, the demand for real-time applications on mobile devices is becoming increasingly urgent, and most applications rely on large datasets, so these datasets should be indexed for efficiency. However, most previous sketch-based image retrieval systems run on the server side and simply adopt an inverted index structure over the image database, which is formidable to operate within the limited memory of mobile devices. In this paper, we propose a novel approach to address these challenges. First, we effectively utilize distance transform (DT) features and their deformation formula to bridge the gap between manual sketches and natural images. Then these high-dimensional features are further projected to more compact binary hash bits, which effectively reduces memory usage, and we compare the performance with different sketch-based image retrieval techniques. The experimental results show that our method achieves retrieval performance competitive with other state-of-the-art approaches while requiring much less memory. Owing to its low memory consumption, the whole system can operate independently on mobile devices.
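A rough sketch of the pipeline described above, a distance-transform feature followed by compact binary hashing and Hamming-distance ranking, is given below. The sign-of-random-projection hashing and the fixed 32x32 downsampling are stand-in assumptions; the thesis's deformation formula and its exact hashing scheme are not reproduced here:

import numpy as np
from scipy.ndimage import distance_transform_edt

def dt_feature(edge_map, size=32):
    """Distance-transform feature: distance of every pixel to the nearest
    edge pixel, crudely downsampled to a fixed-length vector."""
    dt = distance_transform_edt(edge_map == 0)       # edge pixels act as zeros
    step_r = max(1, dt.shape[0] // size)
    step_c = max(1, dt.shape[1] // size)
    return dt[::step_r, ::step_c][:size, :size].ravel()

def hash_bits(features, n_bits=64, seed=0):
    """Compact binary codes via the sign of random projections (LSH-style)."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((features.shape[1], n_bits))
    centered = features - features.mean(axis=0)      # store the mean for queries
    return (centered @ proj > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Hypothetical usage: hash database edge maps once, then rank by Hamming distance.
db = np.stack([dt_feature(np.random.rand(128, 128) > 0.98) for _ in range(10)])
codes = hash_bits(db)
query_code = codes[0]
ranking = sorted(range(len(codes)), key=lambda i: hamming(query_code, codes[i]))

Because each image is reduced to 64 bits, a database of a million images fits in roughly 8 MB, which is what makes on-device retrieval feasible.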
Liu, Ching-Hsuan (劉璟萱). "Exploiting Word and Visual Word Co-occurrence for Sketch-based Image Retrieval." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/20273244983524928076.
Full text國立臺灣大學
資訊工程學研究所
103
With the increasing popularity of touch-screen devices, retrieving images by hand-drawn sketch has become a trend. A human sketch can easily express complex user intentions such as object shape. However, sketches are sometimes ambiguous due to different drawing styles and inter-class object shape ambiguity. Although adding text queries as semantic information can help remove the ambiguity of a sketch, it requires a huge amount of effort to annotate text tags on all database images. We propose a method that directly models the relationship between text and images through the co-occurrence relationship between words and visual words, which improves traditional sketch-based image retrieval (SBIR), provides a baseline performance, and obtains more relevant results even when no image in the database has any text tag. Experimental results show that our method helps SBIR achieve better retrieval results, since it learns semantic meaning from the "word-visual word" (W-VW) co-occurrence relationship.
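A minimal sketch of the word-visual word co-occurrence idea follows. The counting scheme, smoothing, and scoring rule are illustrative assumptions, not the thesis's exact model:

import numpy as np

def build_cooccurrence(tagged_images, n_words, n_visual_words):
    """Count how often text word w and visual word v appear in the same image.
    `tagged_images` is a list of (word_ids, visual_word_ids) pairs from the
    small tagged subset used to learn the W-VW relationship."""
    C = np.zeros((n_words, n_visual_words))
    for word_ids, vw_ids in tagged_images:
        for w in set(word_ids):
            for v in set(vw_ids):
                C[w, v] += 1
    # Row-normalize with smoothing: roughly P(visual word | text word).
    return (C + 1e-6) / (C + 1e-6).sum(axis=1, keepdims=True)

def semantic_score(query_word_ids, image_vw_ids, C):
    """Score an untagged database image by how well its visual words
    co-occur with the query's text words."""
    return sum(C[w, v] for w in query_word_ids for v in set(image_vw_ids))

# Hypothetical toy vocabulary: 3 text words, 5 visual words, 2 tagged images.
C = build_cooccurrence([([0, 1], [0, 2, 3]), ([2], [1, 4])], 3, 5)
score = semantic_score([0], [0, 3], C)

Such a semantic score can be added to the shape-based SBIR score so that text queries help even when database images carry no tags.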
Dutta, Titir. "Generalizing Cross-domain Retrieval Algorithms." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5869.
Book chapters on the topic "Sketch-based Image retrieval"
Xia, Yu, Shuangbu Wang, Yanran Li, Lihua You, Xiaosong Yang, and Jian Jun Zhang. "Fine-Grained Color Sketch-Based Image Retrieval." In Advances in Computer Graphics, 424–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22514-8_40.
Bhunia, Ayan Kumar, Aneeshan Sain, Parth Hiren Shah, Animesh Gupta, Pinaki Nath Chowdhury, Tao Xiang, and Yi-Zhe Song. "Adaptive Fine-Grained Sketch-Based Image Retrieval." In Lecture Notes in Computer Science, 163–81. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19836-6_10.
Sharath Kumar, Y. H., and N. Pavithra. "KD-Tree Approach in Sketch Based Image Retrieval." In Mining Intelligence and Knowledge Exploration, 247–58. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26832-3_24.
Birari, Dipika, Dilendra Hiran, and Vaibhav Narawade. "Survey on Sketch Based Image and Data Retrieval." In Lecture Notes in Electrical Engineering, 285–90. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8715-9_34.
Zhang, Xiao, and Xuejin Chen. "Robust Sketch-Based Image Retrieval by Saliency Detection." In MultiMedia Modeling, 515–26. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27671-7_43.
Shi, Yufeng, Xinge You, Wenjie Wang, Feng Zheng, Qinmu Peng, and Shuo Wang. "Retrieval by Classification: Discriminative Binary Embedding for Sketch-Based Image Retrieval." In Pattern Recognition and Computer Vision, 15–26. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31726-3_2.
Bozas, Konstantinos, and Ebroul Izquierdo. "Large Scale Sketch Based Image Retrieval Using Patch Hashing." In Advances in Visual Computing, 210–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33179-4_21.
Parui, Sarthak, and Anurag Mittal. "Similarity-Invariant Sketch-Based Image Retrieval in Large Databases." In Computer Vision – ECCV 2014, 398–414. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10599-4_26.
Wang, Tianqi, Liyan Zhang, and Jinhui Tang. "Sketch-Based Image Retrieval with Multiple Binary HoG Descriptor." In Communications in Computer and Information Science, 32–42. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8530-7_4.
Wu, Xinhui, and Shuangjiu Xiao. "Sketch-Based Image Retrieval via Compact Binary Codes Learning." In Neural Information Processing, 294–306. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04224-0_25.
Full textConference papers on the topic "Sketch-based Image retrieval"
Wang, Shu, and Zhenjiang Miao. "Sketch-based image retrieval using sketch tokens." In 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2015. http://dx.doi.org/10.1109/acpr.2015.7486533.
Wang, Zhipeng, Hao Wang, Jiexi Yan, Aming Wu, and Cheng Deng. "Domain-Smoothing Network for Zero-Shot Sketch-Based Image Retrieval." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/158.
Li, Yunfei, and Xiaojing Liu. "Sketch Based Thangka Image Retrieval." In 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). IEEE, 2021. http://dx.doi.org/10.1109/iaeac50856.2021.9390657.
Kondo, Shin-ichiro, Masahiro Toyoura, and Xiaoyang Mao. "Sketch based skirt image retrieval." In the 4th Joint Symposium. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2630407.2630410.
Jiang, Tianbi, Gui-Song Xia, and Qikai Lu. "Sketch-based aerial image retrieval." In 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017. http://dx.doi.org/10.1109/icip.2017.8296971.
Matsui, Yusuke, Kiyoharu Aizawa, and Yushi Jing. "Sketch2Manga: Sketch-based manga retrieval." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025626.
Chaudhuri, Abhra, Ayan Kumar Bhunia, Yi-Zhe Song, and Anjan Dutta. "Data-Free Sketch-Based Image Retrieval." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01163.
Liu, Li, Fumin Shen, Yuming Shen, Xianglong Liu, and Ling Shao. "Deep Sketch Hashing: Fast Free-Hand Sketch-Based Image Retrieval." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.247.
Seddati, Omar, Stéphane Dupont, and Saïd Mahmoudi. "Quadruplet Networks for Sketch-Based Image Retrieval." In ICMR '17: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078971.3078985.
Xiao, Changcheng, Changhu Wang, Liqing Zhang, and Lei Zhang. "Sketch-based Image Retrieval via Shape Words." In ICMR '15: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2671188.2749360.