Ready-made bibliography on the topic "Sketch-based Image retrieval"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Sketch-based Image retrieval".

An "Add to bibliography" button is available next to every work in the bibliography. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.

Journal articles on the topic "Sketch-based Image retrieval"

1. Sivasankaran, Deepika, Sai Seena P, Rajesh R, and Madheswari Kanmani. "Sketch Based Image Retrieval using Deep Learning Based Machine Learning". International Journal of Engineering and Advanced Technology 10, no. 5 (June 30, 2021): 79–86. http://dx.doi.org/10.35940/ijeat.e2622.0610521.

Abstract:
Sketch-based image retrieval (SBIR) is a sub-domain of content-based image retrieval (CBIR) in which the user provides a drawing as input and retrieves images relevant to that drawing. The main challenge in SBIR is the subjectivity of the drawings, since retrieval relies entirely on the user's ability to express information in hand-drawn form. Whereas most existing SBIR models accept a single sketch as input and retrieve photos based on that sketch alone, our project enables the detection and extraction of multiple sketches submitted together as a single input image. Features are extracted from the individual sketches using deep learning architectures such as VGG16 and classified by type with supervised machine learning using Support Vector Machines. Based on the predicted class, photos are retrieved from the database using CVLib, an OpenCV-based library that detects the objects present in a photo. A ranking function then orders the retrieved photos by the number of matching components, and the results are displayed to the user from the highest rank to the lowest. The system combining VGG16 and SVM achieves 89% accuracy.
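As a rough illustration of the pipeline this abstract describes, here is a minimal sketch assuming a torchvision VGG16 backbone and scikit-learn's SVC; the tensors and labels below are dummy placeholders, not the authors' data or code:

```python
import torch
from torchvision.models import vgg16
from sklearn.svm import SVC

backbone = vgg16(weights="IMAGENET1K_V1")
backbone.classifier = backbone.classifier[:-1]   # drop the last FC layer: 4096-d features
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """Run a batch of (N, 3, 224, 224) images through VGG16 minus its classifier head."""
    return backbone(batch).numpy()

# Dummy stand-ins for preprocessed sketch crops and their object-class labels.
train_sketches = torch.rand(8, 3, 224, 224)
train_labels = [0, 0, 1, 1, 2, 2, 3, 3]

clf = SVC(kernel="rbf").fit(extract_features(train_sketches), train_labels)
query = torch.rand(1, 3, 224, 224)
print(clf.predict(extract_features(query)))      # predicted class of the query sketch
```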
2. Reddy, N. Raghu Ram, Gundreddy Suresh Reddy, and Dr. M. Narayana. "Color Sketch Based Image Retrieval". International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 03, no. 09 (September 20, 2014): 12179–85. http://dx.doi.org/10.15662/ijareeie.2014.0309054.

3. Abdul Baqi, Huda Abdulaali, Ghazali Sulong, Siti Zaiton Mohd Hashim, and Zinah S. Abdul Jabar. "Innovative Sketch Board Mining for Online image Retrieval". Modern Applied Science 11, no. 3 (November 22, 2016): 13. http://dx.doi.org/10.5539/mas.v11n3p13.

Abstract:
Developing an accurate and efficient sketch-based image retrieval (SBIR) method for determining the resemblance between a user's query and an image stream has been a never-ending quest in the digital data communication era. The main challenge is to overcome the asymmetry between a binary sketch and a full-color image. We introduce a unique sketch board mining method to retrieve online web images. This conceptual image retrieval is performed by matching the sketch query with the relevant terminology of selected images. A systematic sequence is followed: the user's sketch is interpreted geometrically into a conceptual form, matched against annotation metadata obtained automatically from Google engines, and the selected images are indexed and clustered via data mining. The sketch mining board, built in a dynamic drawing state, uses a set of features to generalize the sketch board conceptualization at the semantic level. Images from the global repository are retrieved via a semantic match with the user's sketch query. Retrieval of hand-drawn sketches achieves recall rates within 0.1 to 0.8 and precision rates from 0.7 to 0.98. The proposed technique solves many problems that state-of-the-art SBIR suffers from (e.g., scaled, translated, or imperfect sketches). Furthermore, the proposed technique is shown to exploit high-level features to search the web effectively and may form the basis of an efficient and precise image retrieval tool.
4. Lei, Haopeng, Simin Chen, Mingwen Wang, Xiangjian He, Wenjing Jia, and Sibo Li. "A New Algorithm for Sketch-Based Fashion Image Retrieval Based on Cross-Domain Transformation". Wireless Communications and Mobile Computing 2021 (May 25, 2021): 1–14. http://dx.doi.org/10.1155/2021/5577735.

Abstract:
Due to the rise of e-commerce platforms, online shopping has become a trend. However, current mainstream retrieval methods are still limited to text or exemplar images as input, and for huge commodity databases it remains a long-standing unsolved problem for users to find products of interest quickly. Unlike traditional text-based and exemplar-based image retrieval techniques, sketch-based image retrieval (SBIR) offers a more intuitive and natural way for users to specify their search needs. Because of the large cross-domain discrepancy between free-hand sketches and fashion images, retrieving fashion images by sketch is a highly challenging task. In this work, we propose a new algorithm for sketch-based fashion image retrieval based on cross-domain transformation. In our approach, the sketch and the photo are first transformed into the same domain; the sketch-domain similarity and the photo-domain similarity are then calculated separately and fused to improve retrieval accuracy. Moreover, existing fashion image datasets mostly contain photos only and rarely include sketch-photo pairs, so we contribute a fine-grained sketch-based fashion image retrieval dataset of 36,074 sketch-photo pairs. On our Fashion Image dataset, our model ranks the correct match at top-1 with accuracy of 96.6%, 92.1%, 91.0%, and 90.5% for clothes, pants, skirts, and shoes, respectively. Extensive experiments on our dataset and two fine-grained instance-level datasets, QMUL-shoes and QMUL-chairs, show that our model outperforms existing methods.
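The two-domain similarity fusion described above reduces to a weighted combination of per-domain similarities. A schematic sketch, assuming cosine similarity and an illustrative fusion weight alpha (the embeddings are random placeholders, not the paper's networks):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(q_sketch, g_sketch, q_photo, g_photo, alpha=0.5):
    """Fuse sketch-domain and photo-domain similarities; alpha is illustrative."""
    return alpha * cosine(q_sketch, g_sketch) + (1 - alpha) * cosine(q_photo, g_photo)

# Random placeholder embeddings: each gallery photo has a representation in
# both domains after the cross-domain transformation.
rng = np.random.default_rng(0)
q_s, q_p = rng.random(128), rng.random(128)
gallery = [(rng.random(128), rng.random(128)) for _ in range(100)]
ranking = sorted(range(len(gallery)),
                 key=lambda i: -fused_score(q_s, gallery[i][0], q_p, gallery[i][1]))
print(ranking[:5])  # indices of the five best-matching gallery photos
```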
5. Saavedra, Jose M., and Benjamin Bustos. "Sketch-based image retrieval using keyshapes". Multimedia Tools and Applications 73, no. 3 (September 7, 2013): 2033–62. http://dx.doi.org/10.1007/s11042-013-1689-0.

6. Lei, Haopeng, Yugen Yi, Yuhua Li, Guoliang Luo, and Mingwen Wang. "A new clothing image retrieval algorithm based on sketch component segmentation in mobile visual sensors". International Journal of Distributed Sensor Networks 14, no. 11 (November 2018): 155014771881562. http://dx.doi.org/10.1177/1550147718815627.

Abstract:
Nowadays, state-of-the-art mobile visual sensor technology makes it easy to collect large numbers of clothing images, so there is increasing demand for an efficient method of retrieving clothing images via mobile visual sensors. Unlike traditional keyword-based and content-based image retrieval techniques, sketch-based image retrieval offers a more intuitive and natural way for users to clarify their search needs. However, this is a challenging problem due to the large discrepancy between sketches and images. To tackle it, we present a new sketch-based clothing image retrieval algorithm based on sketch component segmentation. Our strategy is to first collect a large set of clothing sketches and images and tag them with semantic component labels to form a training dataset; we then train a conditional random field classifier that segments the query sketch into components. Several feature descriptors are fused to describe each component and capture its topological information. Finally, a dynamic component-weighting strategy boosts the effect of important components when measuring similarity. The approach is evaluated on a large, real-world clothing image dataset, and experimental results demonstrate its effectiveness and good performance.
7. Ikeda, Takashi, and Masafumi Hagiwara. "Content-Based Image Retrieval System Using Neural Networks". International Journal of Neural Systems 10, no. 05 (October 2000): 417–24. http://dx.doi.org/10.1142/s0129065700000326.

Abstract:
An effective image retrieval system is developed based on neural networks (NNs). It takes advantage of the association ability of multilayer NNs as matching engines that calculate similarities between a user's drawn sketch and the stored images. The NNs memorize pixel information of every size-reduced image (thumbnail) in the learning phase. In the retrieval phase, pixel information from a user's rough sketch is input to the trained NNs, which estimate the candidates. The system can thus retrieve candidates quickly and correctly by exploiting the parallelism and association ability of NNs. In addition, the system has learning capability: it automatically extracts features of a user's sketch during the retrieval phase and stores them as additional information to improve performance. The querying software, including efficient graphical user interfaces, has been implemented and tested, and the effectiveness of the proposed system has been investigated through various experiments.
8. Adimas, Adimas, and Suhendro Y. Irianto. "Image Sketch Based Criminal Face Recognition Using Content Based Image Retrieval". Scientific Journal of Informatics 8, no. 2 (November 30, 2021): 176–82. http://dx.doi.org/10.15294/sji.v8i2.27865.

Abstract:
Purpose: Face recognition records facial geometry so that it can be used to distinguish the features of a face. Facial recognition can therefore be used to verify ID cards and ATM card PINs, and to search for perpetrators, terrorists, and other criminals whose faces were not caught on closed-circuit television (CCTV). Using a face image database and the content-based image retrieval (CBIR) method, a perpetrator can be recognized from his or her face; image segmentation was carried out before CBIR was applied. This work attempts to recognize individuals who committed crimes from their faces, using facial sketch images as queries. Methods: We use a sketch as the query because CCTV may not have captured the face image. No fewer than 1,000 facial images were used, covering both normal and abnormal faces (with occlusions). Findings: Experiments demonstrate adequate precision and recall of 0.8 and 0.3, respectively, better than at least two previous works; the precision of 80% indicates good retrieval effectiveness. Seventy-five queries were run to compute the precision and recall of image retrieval. Novelty: Most face recognition research using CBIR employs a photographic image as the query, and previous work has rarely combined image segmentation with CBIR.
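The reported figures follow directly from the standard retrieval definitions of precision and recall; a minimal sketch with hypothetical counts chosen to reproduce the 0.8/0.3 values:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query, given collections of image ids."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical query: 8 of the 10 returned faces are correct and the database
# holds 27 relevant faces in total -> precision 0.8, recall ~0.3 as reported.
p, r = precision_recall(retrieved=range(10),
                        relevant=list(range(8)) + list(range(100, 119)))
print(round(p, 2), round(r, 2))   # 0.8 0.3
```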
9. Zhang, Xianlin, Xueming Li, Xuewei Li, and Mengling Shen. "Better freehand sketch synthesis for sketch-based image retrieval: Beyond image edges". Neurocomputing 322 (December 2018): 38–46. http://dx.doi.org/10.1016/j.neucom.2018.09.047.

10. Christanti Mawardi, Viny, Yoferen Yoferen, and Stéphane Bressan. "Sketch-Based Image Retrieval with Histogram of Oriented Gradients and Hierarchical Centroid Methods". E3S Web of Conferences 188 (2020): 00026. http://dx.doi.org/10.1051/e3sconf/202018800026.

Abstract:
Images in a digital image dataset can be searched with sketch-based image retrieval, which ranks dataset images by their similarity to an input sketch. Preprocessing uses Canny edge detection to detect the edges of the dataset images. Features are then extracted with Histogram of Oriented Gradients and Hierarchical Centroid from the sketch and all preprocessed dataset images, and the feature distances between the sketch and all dataset images are computed with the Euclidean distance. The test dataset consists of 10 classes. With low and high Canny thresholds of 0.05 and 0.5, Histogram of Oriented Gradients, Hierarchical Centroid, and the combination of both methods achieve average precision and recall of 90.8% and 13.45%, 70% and 10.64%, and 91.4% and 13.58%, respectively. With thresholds of 0.01 and 0.1, and of 0.3 and 0.7, the average precision and recall are 87.2% and 13.19%, and 86.7% and 12.57%. The combination of Histogram of Oriented Gradients and Hierarchical Centroid with thresholds of 0.05 and 0.5 produces better retrieval results than either method individually or other threshold settings.
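The Canny-plus-HOG pipeline in this abstract maps onto scikit-image primitives almost one-to-one; a minimal sketch assuming skimage, with random placeholder images and the hierarchical-centroid descriptor omitted:

```python
import numpy as np
from skimage.feature import canny, hog

def descriptor(gray):
    """Canny edge map followed by a HOG descriptor, mirroring the pipeline above."""
    edges = canny(gray, low_threshold=0.05, high_threshold=0.5)
    return hog(edges.astype(float), orientations=9,
               pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# Placeholder data: a real run would load the 10-class image dataset.
rng = np.random.default_rng(1)
dataset = rng.random((50, 128, 128))
index = np.stack([descriptor(im) for im in dataset])

sketch = rng.random((128, 128))
q = descriptor(sketch)
ranking = np.argsort(np.linalg.norm(index - q, axis=1))  # Euclidean distance, best first
print(ranking[:5])
```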

Doctoral dissertations on the topic "Sketch-based Image retrieval"

1. Saavedra Rondo, José Manuel. "Image Descriptions for Sketch Based Image Retrieval". Thesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/112670.

Abstract:
Doctor of Science, Computer Science (Doctor en Ciencias, Mención Computación)
Owing to the massive use of the Internet and the proliferation of devices capable of generating multimedia information, content-based image search and retrieval have become active research areas in computer science. However, content-based search requires an example image as the query, which can often be a serious problem that undermines the usability of the application. Indeed, users commonly turn to an image search engine precisely because they do not have the desired image at hand. An alternative way to express what the user is looking for is a hand drawing composed simply of strokes, a sketch, which leads to sketch-based image retrieval. Queries of this kind are further supported by the growing accessibility of touch devices, which makes them easy to formulate. In this work, we propose two methods for sketch-based image retrieval. The first is a global method that computes an orientation histogram using squared gradients; this proposal shows outstanding behavior with respect to other global methods. At present, no methods exploit the main characteristic of sketches: structural information. Sketches lack color and texture and mainly represent the structure of the objects being searched for. We therefore propose a second method based on a structural representation of images using a set of primitive shapes called keyshapes. The results of our proposal have been compared with those of current methods, showing a significant increase in retrieval effectiveness. Moreover, since our keyshape-based approach exploits a novel feature, it can be combined with other techniques to further increase effectiveness; evaluating its combination with the Bag-of-Words method of Eitz et al. yields an improvement of almost 22%. Finally, to show the potential of our proposal, we present two applications. The first addresses 3D model retrieval using a hand drawing as the query, where our results are competitive with the state of the art. The second exploits structure-based object search to improve segmentation; in particular, we show a hand-segmentation application in semi-controlled environments.
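The first method the abstract mentions, a global orientation histogram from squared gradients, can be sketched in a few lines of NumPy; the bin count and normalization below are illustrative choices, not the thesis's exact parameters:

```python
import numpy as np

def squared_gradient_histogram(gray, bins=36):
    """Global orientation histogram from squared gradients: doubling the
    gradient angle makes opposite directions vote for the same orientation."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    doubled = np.arctan2(2 * gx * gy, gx**2 - gy**2)   # angle of the squared gradient
    hist, _ = np.histogram(doubled, bins=bins, range=(-np.pi, np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-9)                  # normalized global descriptor

# Toy usage: compare a sketch and an image by histogram intersection.
rng = np.random.default_rng(0)
h1 = squared_gradient_histogram(rng.random((64, 64)))
h2 = squared_gradient_histogram(rng.random((64, 64)))
print(np.minimum(h1, h2).sum())
```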
2. Dey, Sounak. "Mapping between Images and Conceptual Spaces: Sketch-based Image Retrieval". Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671082.

Abstract:
The deluge of visual content on the Internet – from user-generated content to commercial image collections – motivates intuitive new methods for searching digital image content: how can we find certain images in a database of millions? Sketch-based image retrieval (SBIR) is an emerging research topic in which a free-hand drawing can be used to visually query photographic images. SBIR is aligned to emerging trends for visual content consumption on mobile touch-screen based devices, for which gestural interactions such as sketch are a natural alternative to textual input. This thesis presents several contributions to the literature of SBIR. First, we propose a cross-modal learning framework that maps both sketches and text into a joint embedding space invariant to depictive style, while preserving semantics. The resulting embedding enables direct comparison and search between sketches/text and images and is based upon a multi-branch convolutional neural network (CNN) trained using unique training schemes. The deeply learned embedding is shown to yield state-of-the-art retrieval performance on several SBIR benchmarks. Second, we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities into a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian Algorithm using different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of the availability of the co-occurrence of different objects in the training set. We validate the performance of our approach on standard single/multi-object datasets, showing state-of-the-art performance in every SBIR dataset. Third, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to conduct retrieval of photos from unseen categories. We importantly advance prior arts by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap between amateur sketch and photo, and (ii) the necessity for moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, that consists of 330,000 sketches and 204,000 photos spanning 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, instead of ones included in existing datasets that can often be semi-photorealistic. We then formulate a ZS-SBIR framework to jointly model sketches and photos into a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, retrieval performance that significantly outperforms the state of the art on existing datasets can already be achieved using a reduced version of our model. We further demonstrate the superior performance of our full model by comparing with a number of alternatives on the newly proposed dataset.
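The Hungarian-algorithm alignment at the heart of the second contribution is available off the shelf; a toy sketch using scipy's linear_sum_assignment on a random cost matrix (the matrix shape and values are placeholders):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: rows are query elements (sketched objects / text words),
# columns are detected image regions; entries are embedding distances.
rng = np.random.default_rng(0)
cost = rng.random((3, 5))

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
alignment_cost = cost[rows, cols].sum()    # the quantity a training loss would penalize
print(list(zip(rows.tolist(), cols.tolist())), round(float(alignment_cost), 3))
```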
3. Bui, Tu. "Sketch based image retrieval on big visual data". Thesis, University of Surrey, 2019. http://epubs.surrey.ac.uk/850099/.

Abstract:
The deluge of visual content on the Internet - from user-generated content to commercial image collections - motivates intuitive new methods for searching digital image content: how can we find certain images in a database of millions? Sketch-based image retrieval (SBIR) is an emerging research topic in which a free-hand drawing can be used to visually query photographic images. SBIR is aligned to emerging trends for visual content consumption on mobile touch-screen based devices, for which gestural interactions such as sketch are a natural alternative to textual input. This thesis presents several contributions to the literature of SBIR. First, we propose a cross-domain learning framework that maps both sketches and images into a joint embedding space invariant to depictive style, while preserving semantics. The resulting embedding enables direct comparison and search between sketches and images and is based upon a multi-branch convolutional neural network (CNN) trained using unique parameter sharing and training schemes. The deeply learned embedding is shown to yield state-of-the-art retrieval performance on several SBIR benchmarks. Second, under two separate works we propose to disambiguate sketched queries by combining sketched shape with a secondary modality: SBIR with colour and with aesthetic context. The former enables querying with coloured line-art sketches. Colour and shape features are extracted locally using a modified version of gradient field orientation histogram (GF-HoG) before being globally pooled using dictionary learning. Various colour-shape fusion strategies are explored, coupled with an efficient indexing scheme for fast retrieval performance. The latter supports querying using a sketched shape accompanied by one or several images serving as an aesthetic constraint governing the visual style of search results. We propose to model structure and style separately, disentangling one modality from the other, and then learn structure-style fusion using a hierarchical triplet network. This method enables further studies beyond SBIR such as style blending, style analogy, and retrieval with alternative-modal queries. Third, we explore mid-grain SBIR -- a novel field requiring retrieved images to match both the category and the key visual characteristics of the sketch without demanding fine-grain, instance-level matching of a specific object instance. We study a semi-supervised approach that requires mainly class-labelled sketches and images plus a small number of instance-labelled sketch-image pairs. This approach involves aligning sketch and image embeddings before pooling them into clusters from which mid-grain similarity may be measured. Our learned model demonstrates not only intra-category discrimination (mid-grain) but also improved inter-category discrimination (coarse-grain) on a newly created MidGrain65c dataset.
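The triplet objective underlying the hierarchical triplet network can be sketched as the standard triplet ranking loss; the margin and embedding sizes below are illustrative, not the thesis's settings:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet ranking loss: pull the sketch embedding toward its matching photo
    and push it away from a non-matching one. The margin is illustrative."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Placeholder embeddings standing in for the sketch and image branch outputs.
a, p, n = (torch.randn(16, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```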
4. Kamvysselis, Manolis, and Ovidiu Marina. "Imagina: a cognitive abstraction approach to sketch-based image retrieval". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/16724.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (leaves 151-157).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
As digital media become more popular, corporations and individuals gather an increasingly large number of digital images. As a collection grows beyond a few hundred images, the need for search becomes crucial. This thesis addresses the problem of retrieving from a small database a particular image previously seen by the user. It combines current findings in cognitive science with knowledge of previous image retrieval systems to present a novel approach to content-based image retrieval and indexing. We focus on algorithms that abstract away information from images in the same terms that a viewer abstracts information from an image. The focus in Imagina is on the matching of regions instead of the matching of global measures. Multiple representations, focusing on shape and color, are used for every region. The matches of individual regions are combined using a saliency metric that accounts for differences in the distributions of metrics. Region matching along with configuration determines the overall match between a query and an image.
by Manolis Kamvysselis and Ovidiu Marina.
S.B. and M.Eng.
5. Tseng, Kai-Yu (曾開瑜). "Sketch-based Image Retrieval on Mobile Devices Using Compact Hash Bits". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/86812733175523374451.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Networking and Multimedia, academic year 100 (2011–12).
With the advance of science and technology, touch panels in mobile devices have provided a good platform for mobile sketch search. Moreover, the demand for real-time applications on mobile devices is becoming increasingly urgent, and most applications rely on large datasets that must be indexed for efficiency. However, most previous sketch-based image retrieval systems run on the server side and simply adopt an inverted index over the image database, which is impractical within the limited memory of a mobile device. In this thesis, we propose a novel approach to address these challenges. First, we effectively utilize distance transform (DT) features and their deformation formula to bridge the gap between manual sketches and natural images. These high-dimensional features are then projected to more compact binary hash bits, which effectively reduces memory usage; we compare the performance with different sketch-based image retrieval techniques. The experimental results show that our method achieves retrieval performance that is very competitive with other state-of-the-art approaches while requiring much less memory, so the whole system can operate independently on mobile devices.
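A rough sketch of the distance-transform-plus-hashing idea, using scipy's Euclidean distance transform and a random-projection sign hash as an LSH-style stand-in for the thesis's projection (all data below are toy placeholders):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dt_feature(edge_map):
    """Distance-transform feature: each pixel's distance to the nearest edge pixel."""
    return distance_transform_edt(~edge_map).ravel()

def binary_code(feature, projection):
    """Compact hash bits via random projection and sign thresholding."""
    return (feature @ projection > 0).astype(np.uint8)

rng = np.random.default_rng(0)
edges = rng.random((64, 64)) > 0.95              # toy binary edge map
projection = rng.standard_normal((64 * 64, 64))  # 64-bit codes
code = binary_code(dt_feature(edges), projection)

# Database codes are ranked by Hamming distance, which is cheap on-device.
db_codes = rng.integers(0, 2, (1000, 64), dtype=np.uint8)
ranking = np.argsort((db_codes ^ code).sum(axis=1))
print(ranking[:5])
```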
6. Liu, Ching-Hsuan (劉璟萱). "Exploiting Word and Visual Word Co-occurrence for Sketch-based Image Retrieval". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/20273244983524928076.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 103 (2014–15).
With the increasing popularity of touch-screen devices, retrieving images with hand-drawn sketches has become a trend. A sketch can easily express complex user intent such as object shape. However, sketches are sometimes ambiguous due to differing drawing styles and inter-class shape ambiguity. Although adding text queries as semantic information can help remove this ambiguity, annotating all database images with text tags requires enormous effort. We propose a method that directly models the relationship between text and images through the co-occurrence of words and visual words, which improves traditional sketch-based image retrieval (SBIR), provides a baseline performance, and obtains more relevant results even when no image in the database carries a text tag. Experimental results show that our method helps SBIR achieve better retrieval results because it learns semantic meaning from the "word–visual word" (W-VW) co-occurrence relationship.
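The word/visual-word co-occurrence model can be sketched as a count matrix estimated from a small tagged subset; everything below (vocabulary sizes, ids, the conditional-probability scoring) is a hypothetical placeholder, not the thesis's exact formulation:

```python
import numpy as np

# Toy corpus: each tagged image contributes (text-word ids, visual-word ids).
# Real ids would come from a tag vocabulary and a bag-of-visual-words codebook.
tagged_images = [([0, 2], [1, 1, 3]), ([1], [0, 3, 5]), ([2], [1, 4])]
n_words, n_visual_words = 3, 6

cooc = np.zeros((n_words, n_visual_words))
for words, vwords in tagged_images:
    for w in words:
        for v in vwords:
            cooc[w, v] += 1

# Row-normalize to estimate P(visual word | word); a text query can then score
# an untagged image through its visual-word histogram.
p_v_given_w = cooc / cooc.sum(axis=1, keepdims=True)
image_hist = np.array([0, 2, 0, 1, 0, 0])        # visual-word counts of one image
print(p_v_given_w[0] @ image_hist)               # relevance of word 0 to the image
```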
7. Dutta, Titir. "Generalizing Cross-domain Retrieval Algorithms". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5869.

Abstract:
Cross-domain retrieval is an important research topic due to its wide range of applications in e-commerce, forensics, etc. It addresses the problem of retrieving data from a search set when the query belongs to one domain and the search database contains samples from another. Several algorithms have been proposed for this task in recent literature. In this thesis, we address some of the challenges in cross-domain retrieval, specifically for the application of sketch-based image retrieval. Traditionally, cross-domain algorithms assume that both training and test data belong to the same set of seen classes, which is quite restrictive: such models can only retrieve data from the two specific domains they were trained on and cannot generalize to new domains or new classes during retrieval. In the real world, however, new object classes are continuously discovered over time, so it is necessary to design algorithms that generalize to previously unseen classes. In addition, a practically useful retrieval model should handle retrieval between any two data domains, whether or not those domains were used for training. We observe a significant decrease in the performance of existing approaches in these generalized retrieval scenarios once such simplifying assumptions are removed. In this thesis, we aim to address these and related challenges to make cross-domain retrieval models better suited to real-life applications. We first consider a class-wise generalized protocol where the query data during retrieval may belong to unseen classes. Following the nomenclature of classification problems, we refer to this as zero-shot cross-modal retrieval and propose an add-on ranking module to improve the performance of existing cross-modal methods. This work applies to different modalities (e.g., text-image) in addition to different domains (e.g., image and RGB-D data). Next, we develop an end-to-end framework, named StyleGuide, which addresses sketch-based image retrieval under this zero-shot condition. The thesis also explores the effect of class imbalance in training data, a challenging aspect of designing any machine learning algorithm: the problem is inherently present in all real-world datasets, and we show that it adversely affects the performance of existing sketch-based image retrieval approaches. A robust adaptive margin-based regularizer is proposed as a potential solution, and a style-augmented SBIR system is proposed as an extended use case for SBIR problems. Finally, we introduce a novel protocol termed Universal Cross-Domain Retrieval (UCDR), an extension of zero-shot cross-modal retrieval across generalized query domains: the query may belong to an unseen domain as well as an unseen class, further generalizing the retrieval model. A mix-up based class-neighbourhood-aware network, SnMpNet, is proposed to address it. We conclude the thesis by summarizing the research findings and discussing future research directions.

Book chapters on the topic "Sketch-based Image retrieval"

1. Xia, Yu, Shuangbu Wang, Yanran Li, Lihua You, Xiaosong Yang, and Jian Jun Zhang. "Fine-Grained Color Sketch-Based Image Retrieval". In Advances in Computer Graphics, 424–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22514-8_40.

2. Bhunia, Ayan Kumar, Aneeshan Sain, Parth Hiren Shah, Animesh Gupta, Pinaki Nath Chowdhury, Tao Xiang, and Yi-Zhe Song. "Adaptive Fine-Grained Sketch-Based Image Retrieval". In Lecture Notes in Computer Science, 163–81. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19836-6_10.

3. Sharath Kumar, Y. H., and N. Pavithra. "KD-Tree Approach in Sketch Based Image Retrieval". In Mining Intelligence and Knowledge Exploration, 247–58. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26832-3_24.

4. Birari, Dipika, Dilendra Hiran, and Vaibhav Narawade. "Survey on Sketch Based Image and Data Retrieval". In Lecture Notes in Electrical Engineering, 285–90. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8715-9_34.

5. Zhang, Xiao, and Xuejin Chen. "Robust Sketch-Based Image Retrieval by Saliency Detection". In MultiMedia Modeling, 515–26. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27671-7_43.

6. Shi, Yufeng, Xinge You, Wenjie Wang, Feng Zheng, Qinmu Peng, and Shuo Wang. "Retrieval by Classification: Discriminative Binary Embedding for Sketch-Based Image Retrieval". In Pattern Recognition and Computer Vision, 15–26. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31726-3_2.

7. Bozas, Konstantinos, and Ebroul Izquierdo. "Large Scale Sketch Based Image Retrieval Using Patch Hashing". In Advances in Visual Computing, 210–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33179-4_21.

8. Parui, Sarthak, and Anurag Mittal. "Similarity-Invariant Sketch-Based Image Retrieval in Large Databases". In Computer Vision – ECCV 2014, 398–414. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10599-4_26.

9. Wang, Tianqi, Liyan Zhang, and Jinhui Tang. "Sketch-Based Image Retrieval with Multiple Binary HoG Descriptor". In Communications in Computer and Information Science, 32–42. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8530-7_4.

10. Wu, Xinhui, and Shuangjiu Xiao. "Sketch-Based Image Retrieval via Compact Binary Codes Learning". In Neural Information Processing, 294–306. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04224-0_25.


Conference papers on the topic "Sketch-based Image retrieval"

1. Wang, Shu, and Zhenjiang Miao. "Sketch-based image retrieval using sketch tokens". In 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2015. http://dx.doi.org/10.1109/acpr.2015.7486533.

2. Wang, Zhipeng, Hao Wang, Jiexi Yan, Aming Wu, and Cheng Deng. "Domain-Smoothing Network for Zero-Shot Sketch-Based Image Retrieval". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/158.

Abstract:
Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) is a novel cross-modal retrieval task in which abstract sketches are used as queries to retrieve natural images under a zero-shot scenario. Most existing methods regard ZS-SBIR as a traditional classification problem and employ a cross-entropy or triplet-based loss to achieve retrieval, neglecting both the domain gap between sketches and natural images and the large intra-class diversity of sketches. To this end, we propose a novel Domain-Smoothing Network (DSN) for ZS-SBIR. Specifically, a cross-modal contrastive method is proposed to learn generalized representations that smooth the domain gap by mining relations with additional augmented samples. Furthermore, a category-specific memory bank with sketch features is explored to reduce intra-class diversity in the sketch domain. Extensive experiments demonstrate that our approach notably outperforms state-of-the-art methods on both the Sketchy and TU-Berlin datasets.
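The cross-modal contrastive idea can be sketched as a simplified NT-Xent-style loss between paired sketch and photo embeddings; this is a stand-in under stated assumptions, not DSN's exact objective:

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive(sketch_emb, photo_emb, temperature=0.1):
    """Simplified NT-Xent-style objective: the i-th sketch in the batch should
    match the i-th photo against all others. Temperature is illustrative."""
    s = F.normalize(sketch_emb, dim=1)
    p = F.normalize(photo_emb, dim=1)
    logits = s @ p.t() / temperature          # (B, B) cross-modal similarities
    targets = torch.arange(s.size(0))
    return F.cross_entropy(logits, targets)

loss = cross_modal_contrastive(torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
```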
3. Li, Yunfei, and Xiaojing Liu. "Sketch Based Thangka Image Retrieval". In 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). IEEE, 2021. http://dx.doi.org/10.1109/iaeac50856.2021.9390657.

4. Kondo, Shin-ichiro, Masahiro Toyoura, and Xiaoyang Mao. "Sketch based skirt image retrieval". In the 4th Joint Symposium. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2630407.2630410.

5. Jiang, Tianbi, Gui-Song Xia, and Qikai Lu. "Sketch-based aerial image retrieval". In 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017. http://dx.doi.org/10.1109/icip.2017.8296971.

6. Matsui, Yusuke, Kiyoharu Aizawa, and Yushi Jing. "Sketch2Manga: Sketch-based manga retrieval". In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025626.

7. Chaudhuri, Abhra, Ayan Kumar Bhunia, Yi-Zhe Song, and Anjan Dutta. "Data-Free Sketch-Based Image Retrieval". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01163.

8. Liu, Li, Fumin Shen, Yuming Shen, Xianglong Liu, and Ling Shao. "Deep Sketch Hashing: Fast Free-Hand Sketch-Based Image Retrieval". In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.247.

9. Seddati, Omar, Stéphane Dupont, and Saïd Mahmoudi. "Quadruplet Networks for Sketch-Based Image Retrieval". In ICMR '17: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078971.3078985.

10. Xiao, Changcheng, Changhu Wang, Liqing Zhang, and Lei Zhang. "Sketch-based Image Retrieval via Shape Words". In ICMR '15: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2671188.2749360.
