Dissertations / Theses on the topic 'Fast retrieval'
Consult the top 32 dissertations / theses for your research on the topic 'Fast retrieval.'
Pesavento, Marius. "Fast algorithms for multidimensional harmonic retrieval." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975415328.
Cao, Hui, Noboru Ohnishi, Yoshinori Takeuchi, Tetsuya Matsumoto, and Hiroaki Kudo. "FAST HUMAN POSE RETRIEVAL USING APPROXIMATE CHAMFER DISTANCE." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2006. http://hdl.handle.net/2237/10437.
Perry, S. T. "Fast interactive object delineation in images for content based retrieval and navigation." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286748.
Kuan, Joseph. "Image texture analysis and fast similarity search for content based retrieval and navigation." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287321.
Lardeux, Florian. "Robust Modelling and Efficient Recognition of Quasi-Flat Objects — Application to Ancient Coins." Thesis, La Rochelle, 2022. http://www.theses.fr/2022LAROS002.
Quasi-flat objects are obtained from a matrix which defines specific features observable in their engraving; examples include dry stamps, amphora stamps and ancient coins. Quasi-flat objects are thus understood as very flat shapes onto which a characteristic relief is inscribed. Recognizing such objects is no easy feat, as many barriers come into play. The relief of quasi-flat objects is prone to non-rigid deformations, and illumination conditions influence how the object's relief is perceived. Furthermore, these items may have undergone various deteriorations, leading to the occlusion of parts of their relief. In this thesis, we tackle the problem of recognizing quasi-flat objects. The work is articulated around three major axes. The first aims at creating a model that represents the objects by highlighting their main characteristics while taking the aforementioned barriers into account; to this end, the concept of the multi-light energy map is introduced. The second and third axes introduce recognition strategies. On the one hand, we propose the use of contours as the main features. Contours are described via a signature model from which specific descriptors are calculated. To store, retrieve and match those features, a data structure based on associative arrays, the LACS system, is introduced, enabling fast retrieval of similar contours. On the other hand, the use of textures is investigated; the focus here is on the description of specific 2D regions in order to perform recognition. A similar angle is taken to store and retrieve the information, via a similar yet more complex data structure.
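The contour-signature-plus-associative-array idea can be sketched in a few lines. This is a toy stand-in, not the LACS system itself: the centroid-distance signature, the quantisation step and all names below are illustrative assumptions.

```python
import math
from collections import defaultdict

def contour_signature(points, n_bins=8):
    """Centroid-distance signature: distance of each contour point to the
    centroid, normalised for scale and resampled to a fixed number of bins."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    m = max(dists) or 1.0
    dists = [d / m for d in dists]                 # scale invariance
    step = len(dists) / n_bins
    # coarse quantisation so near-identical contours hash to the same key
    return tuple(round(dists[int(i * step)], 1) for i in range(n_bins))

class SignatureIndex:
    """Associative-array index: quantised signatures map directly to lists
    of object ids, so lookup is a hash probe rather than a linear scan."""
    def __init__(self):
        self.table = defaultdict(list)

    def add(self, obj_id, points):
        self.table[contour_signature(points)].append(obj_id)

    def query(self, points):
        return self.table.get(contour_signature(points), [])
```

Because the signature is scale-normalised, a rescaled copy of an indexed contour probes the same bucket.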
Jackson, Natalie Deanne. "Simple arithmetic processing: fact retrieval mechanisms and the influence of individual difference, surface form, problem type and split on processing." Murdoch University, 2006. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20070717.114439.
Jackson, Natalie Deanne. "Simple arithmetic processing: fact retrieval mechanisms and the influence of individual difference, surface form, problem type and split on processing." PhD thesis, Murdoch University, 2007. http://researchrepository.murdoch.edu.au/108/.
Pinheiro, Josiane Melchiori. "A influência das folksonomias na eficiência da fase inicial de modelagem conceitual." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/2831.
This study examines the hypothesis that using folksonomies induced from collaborative tagging systems in conceptual modeling should reduce the number of divergences between actors when they elicit terms to be used in a model, using as baseline terms extracted from webpages based on term frequency. It uses as efficiency measure the number of divergences, because the fewer the divergences, the less time and effort required to create a conceptual model. It describes the controlled conceptual modeling experiments that were performed using experimental groups that received a folksonomy and control groups that received terms extracted from webpages. The results show that the experimental and control groups obtained similar numbers of divergences. Other efficiency measures, such as reuse of terms in the phases of conceptual modeling and perceived ease of performing the modeling task, confirmed the results obtained by the number of divergences, with slightly greater efficiency among the experimental groups.
Ho, Chia-Lin, and 何佳霖. "Compression and Fast Retrieval for Digital Waveform." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/73261928713281426620.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 90 (ROC calendar; 2001–02)
In VLSI circuit design, functional verification has become an important task due to the rapid growth of circuit functionality in many consumer and industrial products. During simulation of digital circuits, waveforms are stored on disk for later investigation and eventually fill up huge amounts of disk space. Besides disk consumption, browsing the waveform becomes difficult because the required data is distributed over a large file. Hence, we developed a set of algorithms and techniques to compress digital waveforms, and we define a new waveform data format that provides random access to improve retrieval speed. Experimental results show that retrieval can be sped up by more than 100 times, and that the compressed data occupies roughly 10%–35% of the size of the traditional VCD waveform format.
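The core of a VCD-style format is that only value transitions are stored; random access then reduces to a binary search over the transition list. A minimal sketch under those assumptions (the thesis's actual format and compression pipeline are more involved; the function names are illustrative):

```python
import bisect

def compress(samples):
    """Keep only (time, new_value) transitions, as a VCD file does."""
    changes, prev = [], object()
    for t, v in enumerate(samples):
        if v != prev:
            changes.append((t, v))
            prev = v
    return changes

def value_at(changes, t):
    """Random access: binary-search for the transition holding at time t,
    instead of replaying the waveform from the beginning."""
    i = bisect.bisect_right(changes, (t, "\uffff")) - 1
    return changes[i][1]
```

For the signal "00111010", `compress` keeps five transitions instead of eight samples, and `value_at` answers point queries in O(log n).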
Shieh, Wann-Yun, and 謝萬雲. "Fast Information Retrieval in Incremental Web Databases." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/92235606317310550552.
National Chiao Tung University
Department of Computer Science and Information Engineering
Academic year 92 (ROC calendar; 2003–04)
This dissertation proposes methodologies to (1) speed up information retrieval, (2) perform most-correlated-document-first retrieval, and (3) perform incremental index updates for dynamically growing Web databases. It focuses on refinements to the most widely used indexing structure, the inverted file. First, to speed up information retrieval, it applies index compression and query-result caching to the inverted file, with the objective of minimizing query response time for the current database scale and user behavior. Second, to perform most-correlated-document-first retrieval, it restructures the inverted file as a tree-based index, with the objective of retrieving the documents most correlated with a user query as soon as possible. Finally, to provide incremental index updates, it allocates spare space in the inverted file, with the objective of guaranteeing that the index has sufficient reserved space to amortize update costs while keeping space efficiency high.

The research topics of the dissertation are:
(1) Inverted file compression through document identifier reassignment. Conventionally, the d-gap technique compresses an inverted file by replacing document identifiers with usually much smaller gap values. This topic proposes a document identifier reassignment algorithm, based on document similarity, that smooths and reduces the gap values in an inverted file, saving storage space and speeding up file look-up.
(2) Inverted file caching for fast information retrieval. This topic proposes a caching mechanism that exploits the locality of user queries in a Web database: indexing speed is enhanced by a linked-list-based probing process, and memory efficiency by chunk-based space management, yielding fast responses for popular data.
(3) Tree-based inverted file structuring for most-correlated-document-first retrieval. This topic proposes an n-key-heap posting-tree structure that preserves identifier numerical order and ranking information simultaneously in an index file, so that the most important and most correlated documents can be retrieved without time-consuming ranking or sorting.
(4) Statistics-based spare-space allocation for incremental inverted file updates. This topic proposes a statistics-based approach that estimates the space requirements of an inverted file from a small amount of recent statistical data, so that the index can be updated incrementally as the database expands, without complex file reorganization or expensive free-space management.

The results of this dissertation include: (1) for inverted file compression, the proposed approach improves the compression rate by 18% and the query response time by 15% on average; (2) for inverted file caching, the proposed approach takes only about 7% additional space to outperform conventional caching mechanisms by 20% in indexing speed on average; (3) for tree-based inverted file structuring, the time to retrieve the most correlated documents improves by 8%–45% compared with the conventional linked-list-based index structure; and (4) for incremental inverted file updates, the proposed approach outperforms conventional approaches by 16% in space utilization and 15% in index-update speed on average.
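The d-gap idea, and why identifier reassignment helps, fits in a few lines. Variable-byte coding is used here as one common gap encoder; this is an illustrative sketch, not the dissertation's exact compression scheme.

```python
def vbyte(n):
    """Variable-byte code: 7 data bits per byte, high bit set on the
    final byte. Smaller numbers take fewer bytes."""
    out = bytearray()
    while n >= 128:
        out.append(n & 0x7F)
        n >>= 7
    out.append(n | 0x80)
    return bytes(out)

def encode_postings(doc_ids):
    """d-gap: store the first identifier, then successive differences,
    each compressed with the variable-byte code."""
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    return b"".join(vbyte(g) for g in gaps)
```

Reassigning similar documents to neighbouring identifiers turns a scattered posting list such as [5, 1000, 2000] (5 bytes here) into [5, 6, 7] (3 bytes) — exactly the gap-smoothing effect the first research topic targets.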
"Fast algorithms for sequence data searching." 1997. http://library.cuhk.edu.hk/record=b5889114.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (leaves 71-76).
Abstract
Acknowledgement
1 Introduction
2 Related Work
2.1 Sequence query processing
2.2 Text sequence searching
2.3 Numerical sequence searching
2.4 Indexing schemes
3 Sequence Data Searching using the Projection Algorithm
3.1 Sequence Similarity
3.2 Searching Method
3.2.1 Sequential Algorithm
3.2.2 Projection Algorithm
3.3 Handling Scaling Problem by the Projection Algorithm
4 Sequence Data Searching using Hashing Algorithm
4.1 Sequence Similarity
4.2 Hashing algorithm
4.2.1 Motivation of the Algorithm
4.2.2 Hashing Algorithm using dynamic hash function
4.2.3 Handling Scaling Problem by the Hashing Algorithm
5 Comparisons between algorithms
5.1 Performance comparison with the sequence searching algorithms
5.2 Comparison between indexing structures
5.3 Comparison between sequence searching algorithms in coping with some deficits
6 Performance Evaluation
6.1 Performance Evaluation using Projection Algorithm
6.2 Performance Evaluation using Hashing Algorithm
7 Conclusion
7.1 Motivation of the thesis
7.1.1 Insufficiency of Euclidean distance
7.1.2 Insufficiency of orthonormal transforms
7.1.3 Insufficiency of multi-dimensional indexing structure
7.2 Major contribution
7.2.1 Projection algorithm
7.2.2 Hashing algorithm
7.3 Future work
Bibliography
Tsai, Tienwei, and 蔡殿偉. "Fast Content-Based Image Retrieval in Discrete Cosine Transform Domain." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/28015857696067434299.
Tatung University
Department (Graduate Institute) of Computer Science and Engineering
Academic year 94 (ROC calendar; 2005–06)
With the vastly increasing number of digital images, the rapidly declining cost of storage, and the explosive growth of the Internet, content-based image retrieval (CBIR) has been intensively studied over the last decades. Though a number of image features based on color, texture, and shape attributes in various domains have been reported in the literature, selecting a good feature set for image classification remains a rigorous challenge. In this thesis, some well-known CBIR systems are reviewed and related issues in retrieval strategy are addressed. Effective indexing and efficient retrieval are identified as the most important criteria in choosing the feature set. Our work mainly focuses on the use of the discrete cosine transform (DCT) as a contribution to fast indexing and retrieval in a CBIR system. We first show an effective representation of images in the DCT domain. Then, to further improve retrieval speed, a two-stage approach based on the DCT is proposed. As a character can be regarded as a gray-level image, the concept of the two-stage approach is also successfully applied to the recognition of Chinese characters. In addition, a set of weights is used to characterize the relative importance of the features in a query image, which plays an important role in multiple passes of retrieval refinement. An intensive study of such flexible retrieval, called the fuzzy semantic information retrieval model, is realized in a bird-searching system. Finally, prospects for further work based on the findings of the study are given as a conclusion.
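A coarse-to-fine DCT search can be illustrated as follows: stage one ranks the database by a small block of low-frequency coefficients, stage two re-ranks only the survivors with the full coefficient set. The DCT construction is standard; the stage sizes and the overall flow are illustrative assumptions, not the thesis's exact two-stage design.

```python
import numpy as np

def dct2(img):
    """2-D orthonormal DCT-II built from the DCT matrix."""
    n = img.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    C *= np.sqrt(2 / n)
    return C @ img @ C.T

def two_stage_search(query, db, coarse=4, keep=2):
    """Stage 1: rank by the coarse x coarse low-frequency block only.
    Stage 2: re-rank the shortlist with the full coefficient set."""
    def d(a, b):
        return float(np.sum((a - b) ** 2))
    q = dct2(query)
    feats = {k: dct2(v) for k, v in db.items()}
    short = sorted(feats, key=lambda k: d(q[:coarse, :coarse],
                                          feats[k][:coarse, :coarse]))[:keep]
    return min(short, key=lambda k: d(q, feats[k]))
```

The cheap first stage touches only `coarse * coarse` coefficients per image, which is where the speed-up comes from.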
Xiao, Sheng-wen, and 蕭聖文. "Feature Extraction of Visualized Genomic Sequences for Fast Database Retrieval." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/20071949675011369929.
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
Academic year 93 (ROC calendar; 2004–05)
Genomic signal processing has become an increasingly significant research area. Symbolic genomic sequences can be translated into numerical sequences in different ways; for visualization purposes, a DNA sequence can be mapped to a series of coordinates and represented as a 3-D curve depicted from the accumulated 3-D coordinates. In this thesis, we propose two methods to extract features from the 3-D curve and construct a database of DNA sequence features. For an unknown sequence, we plot its 3-D curve and extract the corresponding features; by searching the feature database, an identical sequence (if one exists) or similar sequences can be retrieved efficiently. The features extracted from the visualized curve describe its shape and twist points, and sequences are aligned using the NNIC and RDCSW algorithms. With this method, we can reduce the size of the database and increase retrieval speed.
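One widely used cumulative 3-D mapping of DNA is the Z-curve, whose coordinates track the purine/pyrimidine, amino/keto and weak/strong hydrogen-bond balance base by base. The thesis's exact mapping and feature set are not reproduced here, so treat this as an illustrative stand-in, including the crude "twist point" detector.

```python
def z_curve(seq):
    """Cumulative Z-curve: x = purines - pyrimidines, y = amino - keto,
    z = weak - strong hydrogen bonding, one 3-D point per base."""
    x = y = z = 0
    pts = []
    for b in seq.upper():
        x += 1 if b in "AG" else -1
        y += 1 if b in "AC" else -1
        z += 1 if b in "AT" else -1
        pts.append((x, y, z))
    return pts

def turning_points(pts):
    """Indices where the curve changes direction on any axis — a simple
    'twist point' feature usable as an index key."""
    turns = []
    for i in range(1, len(pts) - 1):
        d1 = tuple(b - a for a, b in zip(pts[i - 1], pts[i]))
        d2 = tuple(b - a for a, b in zip(pts[i], pts[i + 1]))
        if d1 != d2:
            turns.append(i)
    return turns
```

Comparing short lists of turning points, rather than whole curves, is one way such features shrink the database and speed up lookup.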
Santos, Joaquim Miguel Nunes dos. "Accelerating Digitisation of Biological Collections for Fast Ecological Information Retrieval." Master's thesis, 2018. http://hdl.handle.net/10316/86223.
Herbaria are biological collections of preserved plants, algae, fungi and lichens used for scientific purposes. Fast communication and information exchange are fundamental to accelerating research on biodiversity, and the major world herbaria are concentrating efforts on digitising their collections and making the information available online. Over the last decade, the Herbarium of the University of Coimbra (COI — acronym in Index Herbariorum) has worked to make available online the information of its plant collection of c. 800,000 specimens. However, only c. 10% has been processed to date, in part due to the slowness of the methods generally used in herbaria. This work aims to accelerate the digitisation process, both by improving digitising procedures and by allowing citizen partnership to populate the COI database. It also aims to accelerate the retrieval of ecologically valuable information from the database. To accomplish that, a new workflow was developed to automatically create records in the database from batches of digital images, a new user-friendly online catalogue was developed, and a collaborative platform was built to allow transcription of specimen labels from digital images in a web environment. It is demonstrated that this work provides a substantial increase in the number of digitised specimens, and also reduces the time to retrieve precise information so that it can be used not only by scientists but also by decision makers, stakeholders and the general public. Although collateral, there is a major and unique advantage to this project: the collaborative application can be used as a tool to make corrections to the catalogue, easily and directly online. This quickly improves the database, as such an effortless procedure encourages this kind of contribution.
Chang, Yu-ruey, and 張育瑞. "Fast Cover Song Retrieval in AAC Domain based on Deep Learning." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/16552950178279839832.
National Central University
Department of Communication Engineering
Academic year 104 (ROC calendar; 2015–16)
With the growth of multimedia data, quickly finding items of interest in large databases becomes more and more important. Keyword annotation is the traditional approach, but it requires a large amount of manual effort, and as the data size increases it becomes infeasible. Content-based retrieval is more natural: it extracts features from the music content itself to create a representation that avoids human labeling errors. This thesis focuses on the AAC format, which is widely used by Internet streaming sources. The proposed system directly maps the modified discrete cosine transform (MDCT) coefficients into a 12-dimensional chroma feature. We combine frames into segments as the input to deep learning, which can automatically find more meaningful features of the music data, and we apply a sparse autoencoder to reduce the dimensionality of songs. With these efforts, significant matching time can be saved. The experimental results show that the proposed method reaches a mean reciprocal rank (MRR) of 0.505 and saves over 70% of matching time compared with conventional approaches.
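Folding a spectrum into a 12-bin chroma vector can be sketched as follows. The bin-to-frequency mapping, sample rate and window length are illustrative assumptions, not the thesis's exact MDCT pipeline.

```python
import math

def chroma_from_spectrum(mags, sr=44100, n_bins=1024):
    """Fold linear-frequency magnitudes into 12 pitch classes. Bin i of a
    length-n_bins transform is taken to cover centre frequency
    (i + 0.5) * sr / (2 * n_bins)."""
    chroma = [0.0] * 12
    for i, m in enumerate(mags):
        f = (i + 0.5) * sr / (2 * n_bins)
        if f < 27.5:                              # below A0: skip
            continue
        pitch = 69 + 12 * math.log2(f / 440.0)    # MIDI pitch number
        chroma[int(round(pitch)) % 12] += m
    s = sum(chroma) or 1.0
    return [c / s for c in chroma]                # normalised histogram
```

A spectrum with energy near 440 Hz lands in pitch class 9 (A), regardless of which octave the energy sits in — the octave-folding that makes chroma useful for cover-song matching.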
Tsai, Wei-Chang, and 蔡維昌. "Non-linear Motion Blurred Image Reconstruction based on Fast PSF Retrieval." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/7y5fg3.
National Formosa University
Graduate Institute of Computer Science and Information Engineering
Academic year 101 (ROC calendar; 2012–13)
In everyday life, when people take photographs with any kind of camera, motion blur often arises from camera shake: during the exposure, shake produces a blurred image, and the underlying motion is usually non-linear. Motion blur degrades image quality, as users of hand-held cameras know from experience. Reconstructing a blurred image into a sharp one is therefore the main objective of this thesis. In past studies, non-linear motion blur has been modeled as a point spread function (PSF), called the blur kernel. This thesis first addresses the reconstruction of globally motion-blurred images caused by a single blur kernel, and then extends the proposed method to reconstruct images blurred by multiple kernels. Reconstructing a motion-blurred image is, however, an ill-posed problem. State-of-the-art methods usually estimate the blur kernel recursively, a process that is quite time-consuming. To reduce the execution time, we propose a fast best-kernel retrieval algorithm that combines an iterative phase retrieval method with the normalized sparsity measure, finding the best kernel in a short computing time. Experimental results verify that the proposed method effectively reduces execution time, obtains the best motion blur kernel, and maintains high deblurring quality. Finally, the proposed algorithm is also applied to multiple-blur cases, with acceptable results.
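The forward model behind all kernel-estimation work is a convolution of the sharp image with the PSF. Below is a minimal sketch of that model with a known kernel, inverted by a one-shot Wiener filter — a far simpler baseline than the iterative phase-retrieval scheme the abstract proposes, shown only to make the model concrete.

```python
import numpy as np

def blur(img, psf):
    """Forward model: blurred = img circularly convolved with the PSF,
    computed in the Fourier domain."""
    P = np.fft.fft2(psf, img.shape)               # zero-pad PSF to image size
    return np.real(np.fft.ifft2(np.fft.fft2(img) * P))

def wiener_deblur(blurred, psf, k=1e-3):
    """Wiener filter: regularised inverse of the blur in one step.
    k trades noise amplification against residual blur."""
    P = np.fft.fft2(psf, blurred.shape)
    H = np.conj(P) / (np.abs(P) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * H))
```

With a noise-free blurred image and the true kernel, the round trip recovers the input almost exactly; the hard part the thesis addresses is estimating the kernel when it is unknown.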
Grauman, Kristen, and Trevor Darrell. "Fast Contour Matching Using Approximate Earth Mover's Distance." 2003. http://hdl.handle.net/1721.1/30438.
Lai, Wei-Ta, and 賴威達. "A Study of Typhoon Satellite Image Database Fast Retrieval and Path Reconstruction." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/91190173381521897091.
Tatung University
Department (Graduate Institute) of Computer Science and Engineering
Academic year 96 (ROC calendar; 2007–08)
A Pacific typhoon is a tropical cyclone, a storm system that produces violent winds and flooding rains. Typhoons inflict terrible damage through thunderstorms, violent winds, torrential rain, flooding, and extremely high tides, so improving early typhoon forecasting is a key to disaster prevention. In this thesis, we implemented a system that allows the general public or meteorologists to examine the continuous or cumulative movement of past typhoons. To make this scenario possible, we extract features from typhoon images one by one and store them as descriptors in XML syntax; XML is heavily used as a format for document storage and sharing over the Internet because of its characteristics. Furthermore, to present the typhoon image a user selects without transmitting the entire satellite image, we use block patterns to reconstruct the typhoon image and reduce transmission time. Many scholars have made efforts to locate typhoon centers and have developed reliable mechanisms for fast typhoon search and path reconstruction. Since typhoon location is an important influencing factor, we propose a method that applies the MPEG-7 edge histogram descriptor to extract texture features and enhance typhoon locating.
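The MPEG-7 edge histogram descriptor classifies small pixel blocks with five directional 2x2 filters. The sketch below keeps a single global 5-bin histogram for simplicity — the full descriptor keeps one such histogram per 4x4 sub-image (80 bins) — and the threshold value is an illustrative choice.

```python
import numpy as np

FILTERS = {                       # the five MPEG-7 EHD 2x2 edge filters
    "vert": np.array([[1.0, -1.0], [1.0, -1.0]]),
    "horz": np.array([[1.0, 1.0], [-1.0, -1.0]]),
    "d45":  np.array([[2 ** 0.5, 0.0], [0.0, -(2 ** 0.5)]]),
    "d135": np.array([[0.0, 2 ** 0.5], [-(2 ** 0.5), 0.0]]),
    "nond": np.array([[2.0, -2.0], [-2.0, 2.0]]),
}

def edge_histogram(img, thresh=10.0):
    """Classify each non-overlapping 2x2 block by its strongest filter
    response and count the five edge types."""
    h = dict.fromkeys(FILTERS, 0)
    for r in range(0, img.shape[0] - 1, 2):
        for c in range(0, img.shape[1] - 1, 2):
            blk = img[r:r + 2, c:c + 2]
            name, val = max(((n, abs(float((blk * f).sum())))
                             for n, f in FILTERS.items()),
                            key=lambda t: t[1])
            if val >= thresh:          # weak blocks count as no edge
                h[name] += 1
    return h
```

An image of vertical stripes, for instance, fires only the vertical filter — the kind of directional texture signature used to help localise a typhoon's spiral structure.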
Wu, Shien-Cheng, and 吳信誠. "A Fast Image Retrieval System Using High Order Fuzzy Statistics Model Parameters." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/88961391965509188780.
National Cheng Kung University
Master's and Doctoral Programs, Department of Computer Science and Information Engineering
Academic year 90 (ROC calendar; 2001–02)
With the growing size of today's digital image databases, fast retrieval methods are mandatory. Though shape and color are the most popular features in many retrieval systems, other, more abstract representations can also extract useful information for this application; for example, quite a few high-order statistical methods have been used successfully in texture image classification. In this thesis, we present a fast retrieval system that uses a maximally simplified fuzzy parametric statistical representation and a color-region representation, combining them in a two-level matching strategy driven by an input reference image. The proposed statistical model explores the spatial relationship among neighboring image pixels. Among the retrieved images, some are highly related to the input reference image even from a human point of view.
Tu, Yu-Ming, and 凃昱銘. "Query-by-humming Retrieval of Songs Based on Fast Pitch Sequence Matching." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/4yepbe.
National Taipei University of Technology
Graduate Institute of Computer and Communication
Academic year 100 (ROC calendar; 2011–12)
As concrete descriptions such as title, singer or lyrics cannot fully represent the abstract content of music, such as melody or emotion, it is often the case that people know what the song they want sounds like but cannot recall its title or lyrics. A promising solution to this problem is so-called query-by-singing/humming (QBSH), which allows users to retrieve a song by simply singing or humming a fragment of it. Although QBSH techniques have been studied for more than a decade, they are still far from popular in real applications. This thesis investigates a QBSH method that enables fast melody comparison. The basic idea is to measure the distances between note sequences in the frequency domain instead of the time domain. Thanks to the fast Fourier transform, we can convert note sequences of different lengths into equal-dimension vectors via zero padding. The equal dimensionality allows us to compare the vectors using Euclidean distance directly, avoiding time-consuming alignment between sequences. To take both efficiency and effectiveness into account, the proposed fast melody comparison method is combined with dynamic time warping into a two-stage sequence matching system. Our experiments on the MIREX 2006 database demonstrate the superiority of the proposed system over other existing systems.
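The zero-pad-then-FFT idea can be sketched directly. The mean-subtraction step (to remove key transposition) and the vector dimension are illustrative additions of this sketch, not necessarily the thesis's exact formulation.

```python
import numpy as np

def melody_vector(notes, dim=32):
    """Zero-pad a variable-length pitch sequence to `dim` samples and keep
    the FFT magnitude, giving every melody an equal-dimension vector."""
    v = np.zeros(dim)
    v[:len(notes)] = np.asarray(notes, dtype=float)
    v[:len(notes)] -= v[:len(notes)].mean()   # remove key transposition
    return np.abs(np.fft.rfft(v))

def rank(query, songs, dim=32):
    """Stage 1 of a two-stage matcher: plain Euclidean distance between
    equal-dimension spectra — no time alignment required."""
    q = melody_vector(query, dim)
    return sorted(songs,
                  key=lambda k: np.linalg.norm(q - melody_vector(songs[k], dim)))
```

A hummed query transposed up two semitones still ranks its source melody first, since centering cancels the transposition before the spectra are compared.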
Cakir, Fatih. "Online hashing for fast similarity search." Thesis, 2017. https://hdl.handle.net/2144/27360.
Lee, Ping-Huang, and 李炳煌. "Design of An Efficient Object-based Image Retrieval System Using A Fast K-NNR Search Technique." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/10153426625814643471.
National Kaohsiung First University of Science and Technology
Graduate Institute of Computer and Communication Engineering
Academic year 90 (ROC calendar; 2001–02)
With the advance of the Internet, the demand for storing multimedia information (such as text, image, audio, and video) has increased, and multimedia retrieval and search have become more and more important. Traditionally, textual features such as filenames, captions, and keywords have been used to annotate and retrieve images, but applied to a large database, keywords become not only cumbersome but also inadequate to represent image content. Many content-based image retrieval systems have therefore been proposed to solve this problem. In this thesis, we propose an efficient object-based image retrieval method using a fast K-NNR search algorithm designed according to the triangle inequality principle. Traditional histogram-based image retrieval, which we also accelerate with the fast K-NNR search method, has high computational complexity due to its high-dimensional histograms and the lack of an indexing structure. Furthermore, a new indexing structure for the proposed object-based image retrieval technique is also proposed in this study. Special attention is paid to object segmentation, which combines moment-preserving edge detection and region-growing techniques. Finally, an object-based similarity metric is proposed for query processing. Experimental results show that the proposed image retrieval method is effective and superior to other methods in terms of overall computational complexity, and its performance is sustained when applied to a very large image database.
Pesavento, Marius [Verfasser]. "Fast algorithms for multidimensional harmonic retrieval = Schnelle Algorithmen zur Erkennung von mehrdimensionalen Harmonischen / by Marius Pesavento." 2005. http://d-nb.info/975415328/34.
Smith, Nadia. "Air quality monitoring with polar-orbiting hyperspectral infrared sounders : a fast retrieval scheme for carbon monoxide." Thesis, 2014. http://hdl.handle.net/10210/12282.
Full textThe Infrared Atmospheric Sounding Interferometer (lASI), operational in polar-orbit since 2006 on the European MetOp-A satellite, is the most advanced of its kind in space. It has been designed to provide soundings of the troposphere and lower stratosphere at nadir in a spectral interval of 0.25 em" across the range 645-2 760 em". Fine spectral sampling such as this is imperative in the sounding of trace gases. Since its launch, the routine retrievals of greenhouse, species from IASI measurements have made a valuable contribution to atmospheric chemistry studies at a global scale. The main contribution of this thesis is the development of a new trace gas retrieval scheme for IASI measurements. The goal was to improve on the global operational scheme in terms of the algorithm complexity, speed of calculation and spatial resolution achieved in the final solution. This schemedirectly retrieves column integrated trace gas densities at single field-of-view (FOV) from IASI measurements within a 10% accuracy limit. The scheme is built on the Bayesian framework of probability and based on the assumption that the inversion of total column values, as apposed to gas profiles, is a near-linear problem. Performance of the retrieval scheme is demonstrated on simulated noisy measurements for carbon monoxide (CO). Being a linear solution, the scheme is'highly dependent on the accuracy of the a priori. A statistical estimate of the a priori was computed using a principal component regression analysis with 50 eigenvectors. The corresponding root-mean-square (RMS) error of the a priori was calculated to be 9.3%. In general terms, the physical retrieval improved on the a priori, and sensitivity studies were performed to demonstrate the accuracy and stability of the retrieval scheme under a numberof perturbations. A full system characterization and error analysis is additionally preformed to elicidate the nature of this complex problem. 
The hyperspectral IASI measurements introduce a significant correlation error in the retrieval. The Absorption Line Cluster (ALC) channel selection method was developed in this thesis to address the correlation error explicitly. When a first-neighbour correlation factor of 0.71 is assumed in the measurement error covariance for the clusters of ALC channels, most of the correlation error is removed from the retrieval. In conclusion, the total column trace gas retrieval scheme developed here is fast, simple, intuitive, transparent and robust. These characteristics together make it highly suitable for implementation in an operational environment intended for air quality monitoring on a regional scale.
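The principal component regression used above to build the a priori estimate can be sketched as follows. This is a minimal illustration under assumptions, not the thesis code: the function names, the use of an SVD to obtain the eigenvectors, and the synthetic data in the usage are all mine; the thesis uses 50 eigenvectors on real training spectra.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: project the predictors onto the
    leading k eigenvectors of their sample covariance, then regress the
    target on the resulting scores."""
    x_mean = X.mean(axis=0)
    Xc = X - x_mean
    # Eigenvectors of the sample covariance via SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                       # (n_features, k) leading eigenvectors
    scores = Xc @ V                    # PC scores of the training data
    coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return x_mean, y.mean(), V, coef

def pcr_predict(x, model):
    """Predict the target (e.g. a total-column a priori) for one measurement."""
    x_mean, y_mean, V, coef = model
    return y_mean + (x - x_mean) @ V @ coef
```

With enough components and a linear truth, the regression recovers the target exactly; in the retrieval setting, the RMS misfit of such a fit is what the abstract quotes as the 9.3% a priori error.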
Yen, Shuo-Fu, and 顏碩甫. "A Fast Cloud Large-Scale Image Retrieval System Using Weighted-Inverted Index and Database Filtering Algorithm." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/n5r2t2.
Full textNational Taiwan University of Science and Technology
Department of Electrical Engineering
104
With the advance of multimedia and communication technology, images and videos have become the dominant streaming information on the Internet. Retrieving the desired similar images quickly and precisely from Internet-scale image/video databases (big data) is therefore a central goal of retrieval systems. In this paper, a cloud-based content-based image retrieval (CBIR) scheme is presented. To speed up feature matching for large-scale CBIR, we propose Database Categorizing based on a Weighted-Inverted Index (DCWII) and a Database Filtering Algorithm (DFA). The DCWII assigns weights to DCT-coefficient histograms and categorizes the database by the weighted features. In addition, the DFA filters out irrelevant images in the database to reduce unnecessary computation during feature matching. Experiments showed that the proposed CBIR scheme outperforms previous works in precision-recall performance and maintains a mean average precision (mAP) of about 0.678 on a large-scale database of one million images. Our scheme can also reduce retrieval time by about 55%–70% by pre-filtering the database, which improves the efficiency of the retrieval system.
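The categorize-then-filter idea behind DCWII/DFA can be sketched as a toy index. This is an assumed simplification: the real scheme categorizes by weighted DCT-coefficient histograms, while here a feature's category is just the bin of its dominant weighted component, and all class and function names are mine.

```python
import numpy as np
from collections import defaultdict

def categorize(feature, weights):
    """Coarse category key for a weighted feature vector: the index of its
    dominant weighted component (a stand-in for the DCWII categorization)."""
    return int(np.argmax(feature * weights))

class FilteredIndex:
    def __init__(self, weights):
        self.weights = weights
        self.index = defaultdict(list)   # category -> [(image_id, feature)]

    def add(self, image_id, feature):
        self.index[categorize(feature, self.weights)].append((image_id, feature))

    def query(self, feature, top_k=3):
        # Filtering step: only images in the query's category are compared,
        # so most of the database is skipped before distance computation.
        candidates = self.index[categorize(feature, self.weights)]
        scored = [(np.linalg.norm((feature - f) * self.weights), i)
                  for i, f in candidates]
        return [i for _, i in sorted(scored)[:top_k]]
```

Only the query's own category is ranked, which is where the reported 55%–70% reduction in retrieval time would come from in the full system.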
Huang, Xin. "Retrieval of Non-Spherical Dust Aerosol Properties from Satellite Observations." Thesis, 2013. http://hdl.handle.net/1969.1/151193.
Full textHsieh, Shang-Wei, and 謝尚偉. "An Information Retrieval System for Fast Exploration of Proprietary Experimental Data via Searching and Mining the Biomedical Entities in Related Public Literatures." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/48936134422468587956.
Full textNational Taiwan University
Department of Engineering Science and Ocean Engineering
102
At the beginning of biomedical research work, mapping researchers' proprietary experimental data to public research literature is an important task. In this paper, a search engine is proposed to efficiently retrieve large-scale biomedical literature collected from PubMed. Moreover, we apply a named entity recognition tool, a text-mining technique, to extract protein names from the biomedical literature. Afterwards, the protein names are normalized to IDs that can be linked to the researchers' proprietary experiment databases, and web techniques automatically plot charts for the relevant proprietary data. Through these processes, researchers can efficiently see the relevance between their proprietary data and the public papers, which also helps them find more related research.
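The normalization step, mapping recognized protein names to linkable database IDs, can be sketched with a dictionary lookup. This is a hypothetical stand-in for the NER tool the thesis uses: the synonym table, the example IDs (UniProt-style accessions), and the function name are all illustrative assumptions.

```python
import re

# Hypothetical synonym table: surface name variants -> normalized ID
SYNONYMS = {
    "tp53": "P04637", "tumor protein p53": "P04637", "p53": "P04637",
    "brca1": "P38398",
}

def normalize_mentions(text):
    """Scan free text for known protein names (case-insensitive, whole-word)
    and return the sorted set of normalized IDs, which can then be joined
    against a proprietary experiment database."""
    found = set()
    lowered = text.lower()
    for name, pid in SYNONYMS.items():
        if re.search(r"\b" + re.escape(name) + r"\b", lowered):
            found.add(pid)
    return sorted(found)
```

In the described system the returned IDs are the join keys: each ID links a sentence in a PubMed abstract to the matching rows of the researcher's own experiment tables, from which the charts are plotted.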
Wang, Chenxi. "Investigation of Thin Cirrus Cloud Optical and Microphysical Properties on the Basis of Satellite Observations and Fast Radiative Transfer Models." Thesis, 2013. http://hdl.handle.net/1969.1/151213.
Full textLin, Ci-Jie, and 林祺傑. "Aspect Retrieval and Integration for News Fact." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/46225273449222520120.
Full textNational Taiwan Normal University
Department of Computer Science and Information Engineering
104
The Internet speeds up the flow of information. In recent years, online news media have replaced traditional newspapers and magazines in spreading information. However, users must spend much time and effort to extract exact factual information from news documents, because documents collected from different news media have similar content but may each provide additional facts. To solve this problem, we propose a method to automatically extract and integrate the factual information of news documents. Candidate fact sentences are picked out by extracting topic keywords from the news content. Then, various features of the candidate sentences are used in a classifier to identify the fact sentences. To provide fact information, triples consisting of a facet term, a relation term, and a description term are extracted by applying a natural language tool to the topic sentences. The similarity of the facet terms between two triples is then used to cluster the extracted triples by agglomerative hierarchical clustering. For each cluster of triples, we use an incremental method to combine each pair of triples with similar facet or description terms in order to provide integrated fact information. The performance evaluation shows that fact sentence extraction, triple extraction, and triple combination all achieve good performance. The proposed approach can effectively integrate facet information from different news documents, providing users a comprehensive understanding of the news.
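The facet-term clustering step can be sketched as single-link agglomerative clustering over (facet, relation, description) triples. This is a minimal sketch under assumptions: the thesis does not specify the similarity measure, so Jaccard overlap of facet-term tokens and the 0.4 threshold in the usage are mine.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two terms (assumed measure)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster_triples(triples, threshold=0.5):
    """Single-link agglomerative clustering of (facet, relation, description)
    triples: repeatedly merge any two clusters containing a pair of triples
    whose facet terms are similar enough."""
    clusters = [[t] for t in triples]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(jaccard(a[0], b[0]) >= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i].extend(clusters.pop(j))
                    merged = True
                    break
            if merged:
                break
    return clusters
```

Each resulting cluster groups triples about the same facet of the story; the incremental combination of similar triples described above would then run within each cluster.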
Wu, Ying-Hui, and 吳盈慧. "Unsupervised Fact-checking Retrieval Model : A Real Case Study." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/egw43g.
Full textNational Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
With the rise of the Internet and social media, the way people access information has entirely changed. People now deliver and receive messages through online platforms anytime and anywhere. However, this convenience also causes severe problems with fake news and the rapid spread of misinformation. Such transmissions are harmful to society: inaccurate health-care tips and rumors disturb personal lives, while misinterpreted assertions and fabricated claims obstruct communication on public issues, leading to national security risks. In order to reduce the overspreading of fake news on Internet and telecommunication platforms, this study attempted to discover the characteristics of fact-checking and to design a highly accurate unsupervised information retrieval system that can be applied in practice. The dataset comes from two major fact-checking organizations in Taiwan, "Cofacts" and the "Taiwan Fact Check Center". The study used a word embedding model for query expansion. Chinese text segmentation and keyword weight tuning were improved by applying named entity recognition to Wikipedia titles. The final fact-checking retrieval model was developed based on Okapi BM25, word embeddings, and named-entity keyword weighting. After experiments and parameter optimization, the results show that the mixture model performs better and that the design is practical for real cases.
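The Okapi BM25 core of such a retrieval model, with an optional per-term weight slot for the named-entity boosting described above, can be sketched as follows. The weighting hook and all names are assumptions for illustration; the thesis combines BM25 with word embeddings and its own tuned weights.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75, term_weights=None):
    """Okapi BM25 over pre-tokenized documents. term_weights optionally
    boosts individual query terms (e.g. recognized named entities)."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency of each term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            w = term_weights.get(t, 1.0) if term_weights else 1.0
            s += (w * idf * tf[t] * (k1 + 1)
                  / (tf[t] + k1 * (1 - b + b * len(d) / avgdl)))
        scores.append(s)
    return scores
```

Raising a term's weight in `term_weights` pushes documents matching that named entity up the ranking, which is the role entity weighting plays in the final model.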
Walles, Rena L. "Effects of web-based tutoring software on math test performance : a look at gender, math-fact retrieval ability, spatial ability and type of help." 2005. https://scholarworks.umass.edu/theses/2425.
Full textCuello, Eliana Marysel. "Algoritmos y análisis de imágenes no convencionales de rayos X." Bachelor's thesis, 2011. http://hdl.handle.net/11086/72.
Full textThis work studies different image reconstruction algorithms based on an X-ray analyzer crystal, with the aim of separating the effects of absorption, refraction, and ultra-small-angle scattering produced by the interaction of the beam with the sample. These methods were implemented on images, measured at the Laboratorio Nacional de Luz Sincrotrón in Campinas, Brazil, of a sample specially designed to enhance the interaction effects mentioned above. The results were analyzed qualitatively and quantitatively, showing that, depending on which effect is enhanced, one method is more efficient than the others.
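One classic analyzer-based reconstruction of this kind, diffraction-enhanced imaging in the style of Chapman et al., separates apparent absorption from refraction by solving a two-equation linear model per pixel. The sketch below is an assumed illustration of that general technique, not necessarily one of the algorithms compared in the thesis; the function name and the rocking-curve values in the usage are mine.

```python
import numpy as np

def dei_reconstruct(I_L, I_H, R_L, R_H, dR_L, dR_H):
    """Diffraction-enhanced imaging: given two images I_L, I_H taken on the
    low- and high-angle slopes of the analyzer rocking curve R(theta), solve
        I = I_R * (R(theta) + R'(theta) * dtheta)
    per pixel for the apparent-absorption image I_R and the refraction-angle
    map dtheta. R_L, R_H are the reflectivities and dR_L, dR_H the slopes
    at the two working points."""
    denom = R_L * dR_H - R_H * dR_L
    I_R = (I_L * dR_H - I_H * dR_L) / denom
    dtheta = (I_H * R_L - I_L * R_H) / (I_L * dR_H - I_H * dR_L)
    return I_R, dtheta
```

Forward-simulating a pixel and inverting it recovers the inputs, which is the basic consistency check for such a two-image separation.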