Dissertations / Theses on the topic 'Image retrieval'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Image retrieval.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Ahmad, Fauzi Mohammad Faizal. "Content-based image retrieval of museum images." Thesis, University of Southampton, 2004. https://eprints.soton.ac.uk/261546/.
Gibson, Stuart Edward. "Sieves for image retrieval." Thesis, University of East Anglia, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405401.
Nahar, Vikas. "Content based image retrieval for bio-medical images." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Nahar_09007dcc80721e0b.pdf.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed Dec. 23, 2009). Includes bibliographical references (p. 82-83).
Saavedra Rondo, José Manuel. "Image Descriptions for Sketch Based Image Retrieval." Tesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/112670.
Owing to the massive use of the Internet and the proliferation of devices capable of generating multimedia content, content-based image search and retrieval have become active research areas in computer science. However, content-based search requires an example image as the query, which can be a serious problem that undermines the usability of the application: users typically turn to an image search engine precisely because they do not have the desired image. An alternative way for users to express what they are looking for is a free-hand drawing composed simply of strokes, a sketch, which leads to sketch-based image retrieval. Queries of this kind are further supported by the increasing availability of touch devices, which makes them easy to pose. This work proposes two methods for sketch-based image retrieval. The first is a global method that computes an orientation histogram using squared gradients; it shows outstanding behaviour compared with other global methods. To date, no methods exploit the main characteristic of sketches, their structural information: sketches lack colour and texture and mainly represent the structure of the objects being sought. We therefore propose a second method based on representing the structure of images by a set of primitive shapes called keyshapes. The results of our proposal have been compared with those of current methods, showing a significant increase in retrieval effectiveness.
Moreover, since our keyshape-based proposal exploits a novel feature, it can be combined with other techniques to further increase effectiveness. In this work we evaluated the combination of the proposed method with the Bag-of-Words-based method of Eitz et al., achieving an improvement in effectiveness of almost 22%. Finally, to show the potential of our proposal, two applications are presented. The first addresses the retrieval of 3D models using a hand drawing as the query; here our results are competitive with the state of the art. The second exploits the idea of structure-based object search to improve a segmentation process; in particular, we show an application to hand segmentation in semi-controlled environments.
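The global method described in this abstract can be illustrated compactly. Below is a minimal numpy sketch of an orientation histogram built from squared gradients (doubling the gradient angle so opposite edge directions vote together); the function names and bin count are our own choices, not the thesis's:

```python
import numpy as np

def orientation_histogram(img, bins=36):
    """Global histogram of edge orientations using squared gradients.

    Squaring the gradient (doubling its angle) makes opposite edge
    directions vote for the same bin, which suits stroke-like sketches.
    """
    gy, gx = np.gradient(img.astype(float))
    gxx = gx * gx - gy * gy              # real part of (gx + i*gy)^2
    gxy = 2.0 * gx * gy                  # imaginary part
    angle = 0.5 * np.arctan2(gxy, gxx)   # orientation folded into [-pi/2, pi/2]
    mag = np.hypot(gx, gy)               # weight votes by edge strength
    hist, _ = np.histogram(angle, bins=bins,
                           range=(-np.pi / 2, np.pi / 2), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Two sketches can then be compared by, e.g., the L1 distance between their histograms.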
Ingratta, Donato. "Texture image retrieval using fuzzy image subdivision." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0012/MQ52743.pdf.
Ren, Feng Hui. "Multi-image query content-based image retrieval." Access electronically, 2006. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20070103.143624/index.html.
Nanayakkara, Wasam Uluwitige Dinesha Chathurani. "Content based image retrieval with image signatures." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/104286/1/Dinesha_Chathurani_Nanayakkara_Thesis.pdf.
Larsson, Jimmy. "Taxonomy Based Image Retrieval : Taxonomy Based Image Retrieval using Data from Multiple Sources." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180574.
With the number of images now available on the Internet, how can we still find what we are looking for? This thesis investigates how much image precision and recall can be increased by applying a word taxonomy to traditional text-based and content-based image search. By applying a word taxonomy to different data sources, a strong word filter and a module that extends word lists can be built and tested. The results indicate that, depending on the implementation, either precision or recall can be improved. Using a similar approach in a real-world scenario therefore makes it possible to move high-precision images further up the result list while retaining high recall, thereby increasing the perceived relevance of image search.
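As a rough illustration of how a word taxonomy can trade recall against precision in keyword-based image search: expanding a query term with its descendants raises recall, while filtering tags to those under a concept raises precision. The tiny taxonomy below is invented for the example; a real system would draw these relations from a resource such as WordNet:

```python
# Hypothetical mini-taxonomy mapping a term to its direct hyponyms.
TAXONOMY = {
    "animal": ["dog", "cat", "bird"],
    "dog": ["poodle", "beagle"],
    "vehicle": ["car", "bicycle"],
}

def expand(term):
    """Return the term plus all of its descendants (recall-oriented)."""
    out = [term]
    for child in TAXONOMY.get(term, []):
        out.extend(expand(child))
    return out

def filter_tags(tags, concept):
    """Keep only tags that fall under `concept` (precision-oriented)."""
    allowed = set(expand(concept))
    return [tag for tag in tags if tag in allowed]
```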
U, Leong Hou. "Web image clustering and retrieval." Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1445902.
Manja, Philip. "Image Retrieval within Augmented Reality." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-229922.
The present work investigates the potential of augmented reality for improving the image retrieval process. Design and usability challenges were identified for both fields of research in order to formulate design goals for the development of concepts. A taxonomy for image retrieval within augmented reality was elaborated based on research work and used to structure related work and basic ideas for interaction. Based on the taxonomy, application scenarios were formulated as further requirements for concepts. Using the basic interaction ideas and the requirements, two comprehensive concepts for image retrieval within augmented reality were elaborated. One of the concepts was implemented using a Microsoft HoloLens and evaluated in a user study. The study showed that the concept was rated generally positively by the users and provided insight into different spatial behaviour and search strategies when practising image retrieval in augmented reality.
Zhang, Dengsheng 1963. "Image retrieval based on shape." Monash University, School of Computing and Information Technology, 2002. http://arrow.monash.edu.au/hdl/1959.1/8688.
Torres, Jose Roberto Perez. "Image retrieval using semantic trees." Thesis, University of East Anglia, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493013.
Mohamed, Aamer S. S. "From content-based to semantic image retrieval. Low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4438.
Dey, Sounak. "Mapping between Images and Conceptual Spaces: Sketch-based Image Retrieval." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671082.
The deluge of visual content on the Internet – from user-generated content to commercial image collections – motivates intuitive new methods for searching digital image content: how can we find certain images in a database of millions? Sketch-based image retrieval (SBIR) is an emerging research topic in which a free-hand drawing can be used to visually query photographic images. SBIR is aligned to emerging trends for visual content consumption on mobile touch-screen devices, for which gestural interactions such as sketch are a natural alternative to textual input. This thesis presents several contributions to the SBIR literature. First, we propose a cross-modal learning framework that maps both sketches and text into a joint embedding space invariant to depictive style, while preserving semantics. The resulting embedding enables direct comparison and search between sketches/text and images and is based upon a multi-branch convolutional neural network (CNN) trained using unique training schemes. The deeply learned embedding is shown to yield state-of-the-art retrieval performance on several SBIR benchmarks. Second, we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities in a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian algorithm using different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of the co-occurrence of different objects in the training set.
We validate the performance of our approach on standard single/multi-object datasets, showing state-of-the-art performance on every SBIR dataset. Third, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to retrieve photos from unseen categories. We importantly advance prior art by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap between amateur sketch and photo, and (ii) the necessity of moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, which consists of 330,000 sketches and 204,000 photos spanning 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, instead of the often semi-photorealistic sketches included in existing datasets. We then formulate a ZS-SBIR framework to jointly model sketches and photos in a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, retrieval performance that significantly outperforms the state of the art on existing datasets can already be achieved with a reduced version of our model. We further demonstrate the superior performance of our full model by comparing with a number of alternatives on the newly proposed dataset.
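Once sketches, text, and images are embedded in a common space, retrieval itself reduces to nearest-neighbour ranking. A minimal sketch of that final step, assuming embeddings are already computed (the thesis's CNN training is not reproduced here, and the function name is ours):

```python
import numpy as np

def cosine_rank(query, gallery):
    """Rank gallery rows by cosine similarity to a query embedding.

    `query` is a (d,) vector; `gallery` is an (n, d) matrix of image
    embeddings assumed to live in the same joint space.
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                     # cosine similarity per gallery item
    order = np.argsort(-sims)        # best match first
    return order, sims[order]
```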
Palma, Alberto de Jesus Pastrana. "Feature Extraction, Correspondence Regions and Image Retrieval using Structured Images." Thesis, University of East Anglia, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502556.
Elliott, Desmond. "Structured representation of images for language generation and image retrieval." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10524.
Ozcanli-ozbay, Ozge Can. "Image Retrieval Based On Region Classification." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605082/index.pdf.
Yan, Bin. "Web Recommendation System with Image Retrieval." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156430.
Su, Hongjiang. "Shoeprint image noise reduction and retrieval." Thesis, Queen's University Belfast, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.486207.
Full textAwg, Iskandar Dayang Nurfatimah, and dnfaiz@fit unimas my. "Image Retrieval using Automatic Region Tagging." RMIT University. Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090302.155704.
Li, Fang. "Content-based retrieval for image databases." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0019/MQ48276.pdf.
Kochakornjarupong, Paijit. "Trademark image retrieval by local features." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2677/.
Stathopoulos, Vassilios. "Generative probabilistic models for image retrieval." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3360/.
Shao, Ling. "Invariant salient regions based image retrieval." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497094.
Rickman, Richard Matthew. "Image database retrieval using neural networks." Thesis, Brunel University, 1993. http://bura.brunel.ac.uk/handle/2438/4319.
Lai, Ting-Sheng. "CHROMA : a photographic image retrieval system." Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301314.
Kesorn, Kraisak. "Multi modal multi-semantic image retrieval." Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/438.
Tieu, Kinh H. (Kinh Han) 1976. "Boosting sparse representations for image retrieval." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86431.
Mensink, Thomas. "Learning Image Classification and Retrieval Models." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM113/document.
We are currently experiencing an exceptional growth of visual data; for example, millions of photos are shared daily on social networks. Image understanding methods aim to facilitate access to this visual data in a semantically meaningful manner. In this dissertation, we define several detailed goals of interest for the image understanding tasks of image classification and retrieval, which we address in three main chapters. First, we aim to exploit the multi-modal nature of many databases, wherein documents consist of images with a form of textual description. To do so, we define similarities between the visual content of one document and the textual description of another. These similarities are computed in two steps: first we find the visually similar neighbours in the multi-modal database, and then we use the textual descriptions of these neighbours to define a similarity to the textual description of any document. Second, we introduce a series of structured image classification models, which explicitly encode pairwise label interactions. These models are more expressive than independent label predictors and lead to more accurate predictions, especially in an interactive prediction scenario where a user provides the values of some of the image labels. Such an interactive scenario offers an interesting trade-off between accuracy and manual labelling effort. We explore structured models for multi-label image classification, for attribute-based image classification, and for optimizing specific ranking measures. Finally, we explore k-nearest neighbour and nearest class mean classifiers for large-scale image classification. We propose efficient metric learning methods to improve classification performance, and use these methods to learn on a data set of more than one million training images from one thousand classes.
Since both classification methods allow classes not seen during training to be incorporated at near-zero cost, we study their generalization performance. We show that the nearest class mean classifier can generalize from one thousand to ten thousand classes at negligible cost, and still perform competitively with the state of the art.
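The nearest class mean (NCM) classifier at the heart of this abstract is simple to state; a minimal numpy sketch (our own function names) also shows why new classes come at near-zero cost, since adding a class only appends one mean vector:

```python
import numpy as np

def fit_ncm(X, y):
    """Nearest class mean: one mean vector per class."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def predict_ncm(X, classes, means):
    """Assign each row of X to the class with the nearest mean (Euclidean)."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

The thesis additionally learns a metric in which these distances are computed; the sketch above uses the plain Euclidean metric.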
Hare, Jonathon S. "Saliency for image description and retrieval." Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/262437/.
Rodhetbhai, Wasara. "Preprocessing for content-based image retrieval." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/66393/.
Full textTorres, Fernandez Sara. "Designing Variational Autoencoders for Image Retrieval." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234931.
The explosive growth of visual data acquired on the Internet has increased interest in developing advanced image retrieval systems. The core problem is searching for a specific image among large collections or databases, a problem shared by users from many domains such as crime prevention, medicine, and journalism. To handle this situation, this project focuses on variational autoencoders for image retrieval. Variational autoencoders (VAEs) are neural networks used for unsupervised learning of complicated distributions through stochastic variational inference. Traditionally they have been used for image reconstruction or generation; the goal of this thesis, however, is to test different autoencoders for classifying and retrieving images from a database. The thesis investigates several methods for achieving the best image retrieval performance. We use the latent variables at the bottleneck stage of the VAE as the learned features for the retrieval task. To achieve fast retrieval we focus on discrete latent features; specifically, the sigmoid function for binarization and the Gumbel-Softmax method for discretization are investigated. The tests show that using the means of the latent variables as features generally gives better performance than their stochastic representations. Furthermore, discrete features obtained with the Gumbel-Softmax method in the latent space show good performance, close to the maximum performance achieved with a continuous latent space.
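The Gumbel-Softmax sampling step mentioned in this abstract can be illustrated in a few lines of numpy. This is a generic sketch of the relaxation, not the thesis's implementation; lower temperatures push samples towards discrete one-hot codes, which is what makes fast hash-like retrieval possible:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, seed=None):
    """Sample a relaxed one-hot vector from logits via Gumbel-Softmax."""
    rng = np.random.default_rng(seed)
    # Gumbel(0, 1) noise via the inverse-CDF trick
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=np.shape(logits))))
    y = (np.asarray(logits) + gumbel) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)
```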
Ozendi, Mustafa. "Viewpoint Independent Image Classification and Retrieval." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1285011830.
Wong, Chun Fan. "Automatic semantic image annotation and retrieval." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1188.
Janicki, James H. "Retrieval from an image knowledge base." Online version of thesis, 1993. http://hdl.handle.net/1850/12196.
Alaei, Fahimeh. "Texture Feature-based Document Image Retrieval." Thesis, Griffith University, 2019. http://hdl.handle.net/10072/385939.
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), School of Info & Comm Tech, Science, Environment, Engineering and Technology.
Zhu, Bin, and Hsinchun Chen. "Validating a Geographic Image Retrieval System." Wiley Periodicals, Inc, 2000. http://hdl.handle.net/10150/105934.
This paper summarizes a prototype geographic image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. By using an image as its interface, the prototype system addresses a troublesome aspect of traditional retrieval models, which require users to have complete knowledge of the low-level features of an image. In addition, we describe an experiment that validates the performance of this image retrieval system against that of human subjects, in an effort to address the scarcity of research evaluating the performance of an algorithm against that of human beings. The results indicate that the system could do as well as human subjects in accomplishing the tasks of similarity analysis and image categorization. We also found that under some circumstances texture features alone are insufficient to represent a geographic image. We believe, however, that our image retrieval system provides a promising approach to integrating image processing techniques and information retrieval algorithms.
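Texture features of the kind discussed in this abstract are commonly derived from gray-level co-occurrence matrices. The following is a generic illustration of that family of features (not the system's actual feature set; function names and the two statistics chosen are ours):

```python
import numpy as np

def glcm(img, levels=4, dy=0, dx=1):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx)."""
    q = (img.astype(float) * levels / (img.max() + 1e-9)).astype(int)
    q = q.clip(0, levels - 1)                 # quantize to `levels` gray levels
    M = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            M[q[i, j], q[i + dy, j + dx]] += 1
    return M / M.sum()

def texture_stats(M):
    """Two classic Haralick-style statistics: contrast and energy."""
    i, j = np.indices(M.shape)
    contrast = ((i - j) ** 2 * M).sum()       # high for rough texture
    energy = (M ** 2).sum()                   # high for uniform texture
    return contrast, energy
```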
Reddy, Vishwanath Reddy Keshi, and Praveen Bandikolla. "Image Retrieval Using a Combination of Keywords and Image Features." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3372.
姚景岳. "Pistol image Retrieval." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/88x78j.
Tieu, Kinh, and Paul Viola. "Boosting Image Database Retrieval." 1999. http://hdl.handle.net/1721.1/5927.
Cheng, Hsiang-Fen (鄭翔芬). "Image Clustering and Retrieval." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/59657803465700576860.
National Taiwan University of Science and Technology
Department of Information Management
Academic year 97 (ROC calendar)
Nowadays, owing to the rapid growth of the World Wide Web (WWW), a large amount of multimedia data is generated on the Internet, usually compressed in JPEG format for efficient transmission and storage. However, current approaches to content-based image retrieval mostly operate on uncompressed images: they must first decode images to the spatial domain, which consumes a great deal of computation and search time. Extracting features and retrieving images directly in the compressed domain therefore saves considerable time, and values obtained by only partial decoding can still represent an image's characteristics explicitly. Nevertheless, most approaches in the compressed domain still select many coefficients to represent image features, or require additional processing steps to obtain them, so search time increases dramatically with the size of the image database. The purpose of this thesis is therefore to extract only a few representative features from the compressed domain and use them effectively in an image retrieval system, so that the images requested by users can be retrieved efficiently. This thesis proposes efficient image clustering and retrieval approaches that improve search time while effectively retrieving similar images. Using the bisecting K-means algorithm, the images in a compressed image database are first partitioned according to their content, so the retrieval stage need not search all images in the database. Moreover, DC (direct current) coefficients are extracted directly from the DCT (discrete cosine transform) domain without fully decoding the compressed images, which reduces the time for similarity measurement and makes the extracted features easy to manage.
In addition, by using DC features in both the clustering and similarity-computation stages, the proposed approach can efficiently retrieve images that match the user's demand. Experimental results reveal that the proposed approach has a highly efficient response time and improves the image retrieval results.
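The key observation behind this abstract, that the DC coefficient of each 8x8 DCT block is just a scaled block mean, can be verified with a short sketch. This toy version applies a blockwise DCT to raw pixels; a real system would instead read the DC coefficients straight from the partially decoded JPEG stream:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dc_features(img, block=8):
    """DC coefficient of every block of a grayscale image.

    For the orthonormal 2D DCT, the DC term equals block_mean * block,
    so these few values summarize content without a full decode.
    """
    C = dct_matrix(block)
    h, w = img.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            coeffs = C @ img[i:i + block, j:j + block] @ C.T
            feats.append(coeffs[0, 0])
    return np.array(feats)
```

Feature vectors of this kind can then feed a clustering step such as bisecting K-means, as the thesis proposes.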
傅德瑋. "Region-Based Image Retrieval." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/09033606120251965326.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 91 (ROC calendar)
A content-based image region retrieval (CBRR) system can retrieve the desired image regions for a user from a large database based on region content. There are some differences between CBRR and a traditional content-based image retrieval (CBIR) system: CBIR focuses on global image similarity over classified category images, whereas CBRR focuses on local region similarity. To obtain the image regions, each image must be segmented into regions; in this thesis we apply watershed segmentation.
Wu, Jui-Chien, and 吳瑞千. "Template-based Image Retrieval." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/64925670330552846623.
National Tsing Hua University
Department of Computer Science
Academic year 86 (ROC calendar)
A template-based image retrieval system is proposed. The user can specify a small template and let the system find which image(s) in the database contain this template. The system stores projections of edges as features and uses different similarity measures, such as sum of absolute differences, variance, and elastic distance, to deal with templates with different distortions. Experiments with a distributed implementation show that the proposed method can retrieve the desired image(s) in minutes from a database of thousands of images and can tolerate minor distortions.
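The edge-projection features and the sum-of-absolute-differences measure can be sketched as follows (variance and elastic distance, also mentioned in the abstract, are omitted; function names are ours):

```python
import numpy as np

def projections(edges):
    """Row and column projections (sums) of a binary edge map."""
    e = np.asarray(edges, dtype=float)
    return e.sum(axis=1), e.sum(axis=0)

def sad(a, b):
    """Sum of absolute differences between two 1-D signatures."""
    return np.abs(np.asarray(a, float) - np.asarray(b, float)).sum()

def match_score(template_edges, window_edges):
    """Lower score = better match between a template and an image window."""
    tr, tc = projections(template_edges)
    wr, wc = projections(window_edges)
    return sad(tr, wr) + sad(tc, wc)
```

In use, the score would be evaluated over sliding windows of each database image, keeping the windows with the lowest scores.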
Wu, Wei-Liang, and 吳韋良. "Semantics-based Image Retrieval." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/52979211332423742224.
National Chiao Tung University
Institute of Computer Science and Engineering
Academic year 105 (ROC calendar)
Image search is an important technique in multimedia applications, and image retrieval is a common technology for it: given a query image, the goal is to retrieve relevant images from an image database. Most previous studies rely on important features extracted from images to calculate the similarity of two images. One drawback of this approach is that it focuses on image-specific features without considering the semantics of the images; images that are semantically related to the query but differ greatly in image features will therefore not be candidates for retrieval. Additionally, many photo websites allow users to provide descriptions or tags for the photos they upload, inspiring us to use the image itself together with its description in a semantics-based image retrieval framework built with machine learning techniques. The key idea behind the proposed method is to extract important objects from the query image and classify the extracted objects into predefined labels for the query. We then project the labels and the descriptions in the data set into the same latent space using deep neural networks, and calculate semantic similarity in that latent space. This thesis conducts experiments on a Flickr data set and evaluates the results by the average number of irrelevant images in the search results. The experimental results indicate that, when only an image is used as the query, the retrieved results are much more acceptable than those of other methods.
Jueng, Cheng-Yuan, and 鄭承淵. "Texture Image Retrieval System." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/rv59ng.
Kun Shan University
Graduate Institute of Computer and Communication
Academic year 102 (ROC calendar)
When designers look for leathers or cloths, the most common way to search for materials is to browse vendor catalogues. Thanks to advances in web technologies, they can also use Internet search engines to browse image databases given either keywords or sample pictures. Sometimes, however, what a designer has in mind is just a style or a concept that is difficult to describe literally and to match against the annotations of images in a database. Using sample pictures for retrieval, known as content-based image retrieval, may better overcome the limits of literal description. Current public search engines that support content-based image retrieval, such as Google or TinEye, retrieve images mainly by the similarity of colour histograms between the uploaded query image and each image in the database. For fabric materials, however, items with the same texture may appear in different colours, so colour alone is not a suitable feature for comparing them. In this study, we develop a content-based image retrieval system for leather and fabric materials based not only on the colour histogram but also on features derived from LBP (local binary patterns), FAST (features from accelerated segment test), and the Haar wavelet. These features can discriminate both textures and pattern styles. Forty-five categories of fabrics with various textures, colours, and patterns were tested in our experiments, with the top nine retrieved images considered as candidates. When the search scope is set to ten categories, our system reaches an 87.37% retrieval rate; over all 45 categories it reaches 41.27%. In the future, we will continue to improve the algorithms to enhance the overall retrieval rate.
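A basic LBP histogram of the kind used as one of the features here can be written compactly. This is the textbook 8-neighbour variant, not necessarily the exact configuration used in the thesis:

```python
import numpy as np

def lbp_histogram(img):
    """Normalized histogram of basic 8-neighbour LBP codes (256 bins)."""
    img = np.asarray(img)
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=int)
    # clockwise neighbour offsets, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Because the codes depend only on local intensity ordering, the histogram is largely insensitive to the colour shifts that defeat plain colour-histogram matching.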
Kakde, Bhavana. "Content-Based Image Retrieval." Thesis, 2018. http://ethesis.nitrkl.ac.in/9891/1/2018_MT_216EC6250_B_Kakde_Content.pdf.
Full text
Tan, Hsiao-lan, and 譚曉蘭. "Similarity Retrieval for Rotated Images in Image Database." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/13849315357878412082.
Full text
淡江大學
資訊管理學系
87
The spatial relationship between objects is one of the important characteristics of an image. In an image database system, spatial reasoning and similarity retrieval are often performed based on these spatial relationships, so how to use a spatial data structure to represent the spatial relationships within an image has been widely discussed in related research. The 2D string and the strings extended from it have been used as data structures to represent the spatial relationships between objects. However, these strings are defined in the Cartesian coordinate system, and the query image must have the same orientation as the images in the database. Consequently, a database image will not be retrieved by a query image that has the same spatial relationships between objects but a different orientation. The RS-string has been proposed to address this problem, but it is based on the polar coordinate system, in which the description of spatial relationships differs from that in the Cartesian coordinate system. In this thesis, we propose an approach to retrieve database images when the query image provided by the user is in a different orientation, according to the spatial relationships between objects in the Cartesian coordinate system. In the proposed approach, the change of the spatial relationship between each pair of objects as the query image is rotated from 0° to 360° is recorded in a relation table. The table is then compared against the spatial relationships between objects of the images in the database. A database image is considered similar if, for some orientation or range of orientations of the rotated query image, it has the same spatial relationship between every pair of objects. Hence, similarity retrieval can be performed in Cartesian coordinates even when the images are in different orientations. Finally, a prototype is presented.
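A toy version of the relation-table idea above can be sketched as follows; the four-direction quantization and the 90° rotation steps are simplifying assumptions chosen for illustration, not the thesis's actual representation:

```python
import math

def relation(ax, ay, bx, by):
    """Quantize the direction from object A to object B into one of four
    Cartesian spatial relations: east, north, west, south."""
    angle = math.degrees(math.atan2(by - ay, bx - ax)) % 360
    return ["east", "north", "west", "south"][int(((angle + 45) % 360) // 90)]

def rotated(x, y, theta_deg):
    """Rotate a point counter-clockwise about the origin by theta degrees."""
    t = math.radians(theta_deg)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# Relation table for one object pair: how its spatial relation changes
# as the query image is rotated through a set of orientations.
table = {theta: relation(*rotated(0, 0, theta), *rotated(1, 0, theta))
         for theta in range(0, 360, 90)}
print(table)  # {0: 'east', 90: 'north', 180: 'west', 270: 'south'}
```

A database image then matches if some rotation angle (or contiguous range of angles) makes every pair's tabulated relation agree with the database image's relations.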
Chen, Bo-Rong, and 陳柏融. "Content-Based Image Retrieval System for Real Images." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/77260057315471754913.
Full text
國立雲林科技大學
資訊工程系
104
With the rapid progress of network technologies and multimedia data, information retrieval techniques have gradually become content-based rather than merely text-based. In this thesis, we propose a content-based image retrieval system to query similar images in a real image database. First, we employ segmentation and main-object detection to separate the main object from an image. Then, we extract MPEG-7 features from the object and select relevant features using the SAHS algorithm. Next, two approaches, "one-against-all" and "one-against-one", are proposed to build the classifiers based on SVM. To further reduce indexing complexity, K-means clustering is used to generate MPEG-7 signatures. We then combine the classes predicted by the classifiers with the results based on the MPEG-7 signatures to find the images similar to a query image. Finally, the experimental results show that our method is feasible for image searching in a real image database and more effective than the other methods.
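The "one-against-all" and "one-against-one" SVM schemes named in this abstract can be sketched with scikit-learn's generic multiclass wrappers; the toy Gaussian features below merely stand in for the MPEG-7 descriptors and SAHS-selected features the thesis actually uses:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier

# Toy stand-in for per-object feature vectors: three well-separated
# 4-dimensional Gaussian clusters, 20 samples each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 4)) for c in range(3)])
y = np.repeat(np.arange(3), 20)

# One-against-all: one binary SVM per class vs. the rest.
ova = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)
# One-against-one: one binary SVM per pair of classes, decided by voting.
ovo = OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)

query = rng.normal(loc=1, scale=0.3, size=(1, 4))
print(ova.predict(query), ovo.predict(query))
```

One-against-one trains more (but smaller) binary problems, k(k-1)/2 for k classes versus k for one-against-all, which is the usual trade-off when choosing between the two schemes.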
Chang, Yih-Cheng, and 張亦塵. "Image Sense Disambiguation in Web Image Retrieval." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/07023748217771211093.
Full text
國立臺灣大學
資訊工程學研究所
96
In recent years, the number of images on the web has increased explosively, and image retrieval for web images has become more and more important. Image sense disambiguation/discrimination (ISD) is the task of disambiguating or discriminating the senses of retrieved web images. This technology can be used to improve the performance of web image retrieval, or applied in image annotation and object recognition tasks to help collect training samples. ISD is a new task that has not been well studied but may become important in the future. In this thesis, we analyze and discuss the ISD problem and propose a method to find the senses of web images; many senses found on the web may not be included in a dictionary. For each sense, we collect sample images and pages without human annotation. Unlike previous approaches that use clustering methods for ISD, we use classification methods instead. Four kinds of classifiers and a merge method are proposed in this thesis. The steps of our method are evaluated and discussed, and at the end of the thesis we summarize our work and discuss some interesting directions for future work.