Dissertations / Theses on the topic 'Text indexing'
Consult the top 50 dissertations / theses for your research on the topic 'Text indexing.'
He, Meng. "Indexing Compressed Text." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1143.
Sani, Sadiq. "Role of semantic indexing for text classification." Thesis, Robert Gordon University, 2014. http://hdl.handle.net/10059/1133.
Bowden, Paul Richard. "Automated knowledge extraction from text." Thesis, Nottingham Trent University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298900.
Mick, Alan A. "Knowledge based text indexing and retrieval utilizing case based reasoning." Online version of thesis, 1994. http://hdl.handle.net/1850/11715.
Lester, Nicholas. "Efficient Index Maintenance for Text Databases." RMIT University, Computer Science and Information Technology, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070214.154933.
Chung, EunKyung. "A Framework of Automatic Subject Term Assignment: An Indexing Conception-Based Approach." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5473/.
Haouam, Kamel Eddine. "RVSM: A rhetorical conceptual model for content-based indexing and retrieval of text document." Thesis, London Metropolitan University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517132.
Zhu, Weizhong. "Text clustering and active learning using a LSI subspace signature model and query expansion." Philadelphia, Pa.: Drexel University, 2009. http://hdl.handle.net/1860/3077.
Thachuk, Christopher Joseph. "Space and energy efficient molecular programming and space efficient text indexing methods for sequence alignment." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44172.
Hon, Wing-kai. "On the construction and application of compressed text indexes." Thesis, The University of Hong Kong, 2004. http://sunzi.lib.hku.hk/hkuto/record/B31059739.
Hon, Wing-kai, and 韓永楷. "On the construction and application of compressed text indexes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31059739.
Geiss, Johanna. "Latent semantic sentence clustering for multi-document summarization." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609761.
Ahlgren, Per. "The effects of indexing strategy-query term combination on retrieval effectiveness in a Swedish full text database." Doctoral thesis, University College of Borås, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-171411.
Full textQC 20150813
Tam, Wai I. "Compression, indexing and searching of a large structured-text database in a library monitoring and control system (LiMaCS)." Thesis, University of Macau, 1998. http://umaclib3.umac.mo/record=b1636991.
Tsatsaronis, George. "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-202687.
Tsatsaronis, George. "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition." BioMed Central, 2015. https://tud.qucosa.de/id/qucosa%3A29496.
Skeppstedt, Maria. "Extracting Clinical Findings from Swedish Health Record Text." Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-109254.
Tarczyńska, Anna. "Methods of Text Information Extraction in Digital Videos." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2656.
Full textThe huge amount of existing digital video files needs to provide indexing to make it available for customers (easier searching). The indexing can be provided by text information extraction. In this thesis we have analysed and compared methods of text information extraction in digital videos. Furthermore, we have evaluated them in the new context proposed by us, namely usefulness in sports news indexing and information retrieval.
Hassel, Martin. "Resource Lean and Portable Automatic Text Summarization." Doctoral thesis, Stockholm : Numerisk analys och datalogi Numerical Analysis and Computer Science, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4414.
Toth, Róbert. "Přibližné vyhledávání řetězců v předzpracovaných dokumentech [Approximate string matching in preprocessed documents]." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236122.
Zheng, Ning. "Discovering interpretable topics in free-style text: diagnostics, rare topics, and topic supervision." Columbus, Ohio: Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199237529.
Weldeghebriel, Zemichael Fesahatsion. "Evaluating and comparing search engines in retrieving text information from the web." Thesis, Stellenbosch: Stellenbosch University, 2004. http://hdl.handle.net/10019.1/53740.
Full textENGLISH ABSTRACT: With the introduction of the Internet and the World Wide Web (www), information can be easily accessed and retrieved from the web using information retrieval systems such as web search engines or simply search engines. There are a number of search engines that have been developed to provide access to the resources available on the web and to help users in retrieving relevant information from the web. In particular, they are essential for finding text information on the web for academic purposes. But, how effective and efficient are those search engines in retrieving the most relevant text information from the web? Which of the search engines are more effective and efficient? So, this study was conducted to see how effective and efficient search engines are and to see which search engines are most effective and efficient in retrieving the required text information from the web. It is very important to know the most effective and efficient search engines because such search engines can be used to retrieve a higher number of the most relevant text web pages with minimum time and effort. The study was based on nine major search engines, four search queries and relevancy judgments as relevant/partly-relevanUnon-relevant. Precision and recall were calculated based on the experimental or test results and these were used as basis for the statistical evaluation and comparisons of the retrieval effectiveness of the nine search engines. Duplicated items and broken links were also recorded and examined separately and were used as an additional measure of search engine effectiveness. A response time was also recorded and used as a base for the statistical evaluation and comparisons of the retrieval efficiency of the nine search engines. Additionally, since search engines involve indexing and searching in the information retrieval processes from the web, this study first discusses, from the theoretical point of view, how the indexing and searching processes are performed in an information retrieval environment. It also discusses the influences of indexing and searching processes on the effectiveness and efficiency of information retrieval systems in general and search engines in particular in retrieving the most relevant text information from the web.
AFRIKAANSE OPSOMMING: Met die koms van die Internet en die Wêreldwye Web (www) is inligting maklik bekombaar. Dit kan herwin word deur gebruik te maak van inligtingherwinningsisteme soos soekenjins. Daar is 'n hele aantal sulke soekenjins wat ontwikkel is om toegang te verleen tot die hulpbronne beskikbaar op die web en om gebruikers te help om relevante inligting vanaf die web in te win. Dit is veral noodsaaklik vir die verkryging van teksinligting vir akademiese doeleindes. Maar hoe effektief en doelmatig is die soekenjins in die herwinning van die mees relevante teksinligting vanaf die web? Watter van die soekenjins is die effektiefste? Hierdie studie is onderneem om te kyk watter soekenjins die effektiefste en doelmatigste is in die herwinning van die nodige teksinligting. Dit is belangrik om te weet watter soekenjin die effektiefste is want so 'n enjin kan gebruik word om 'n hoër getal van die mees relevante tekswebblaaie met die minimum van tyd en moeite te herwin. Heirdie studie is baseer op die sewe hoofsoekenjins, vier soektogte, en toepasliksheidsoordele soos relevant /gedeeltelik relevant/ en nie- relevant. Presiesheid en herwinningsvermoë is bereken baseer op die eksperimente en toetsresultate en dit is gebruik as basis vir statistiese evaluasie en vergelyking van die herwinningseffektiwiteit van die nege soekenjins. Gedupliseerde items en gebreekte skakels is ook aangeteken en apart ondersoek en is gebruik as bykomende maatstaf van effektiwiteit. Die reaksietyd is ook aangeteken en is gebruik as basis vir statistiese evaluasie en die vergelyking van die herwinningseffektiwiteit van die nege soekenjins. Aangesien soekenjins betrokke is by indeksering en soekprosesse, bespreek hierdie studie eers uit 'n teoretiese oogpunt, hoe indeksering en soekprosesse uitgevoer word in 'n inligtingherwinningsomgewing. Die invloed van indeksering en soekprosesse op die doeltreffendheid van herwinningsisteme in die algemeen en veral van soekenjins in die herwinning van die mees relevante teksinligting vanaf die web, word ook bespreek.
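A minimal sketch of the precision and recall computation that underlies evaluations like this one; the result lists and relevance judgments below are invented for illustration, not data from the study.

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved: ranked list of result IDs returned by a search engine.
    relevant:  set of IDs judged relevant for the query.
    """
    hits = sum(1 for doc in retrieved if doc in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical judgments for one query against two engines.
relevant = {"d1", "d3", "d7", "d9"}
engine_a = ["d1", "d2", "d3", "d4", "d7"]   # 3 relevant in top 5
engine_b = ["d5", "d1", "d6", "d8", "d2"]   # 1 relevant in top 5

for name, results in [("A", engine_a), ("B", engine_b)]:
    p, r = precision_recall(results, relevant)
    print(f"engine {name}: precision={p:.2f} recall={r:.2f}")
```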
Abeysinghe, Ruvini Pradeepa. "Signature Files for Document Management." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin990539054.
Henriksson, Aron. "Semantic Spaces of Clinical Text: Leveraging Distributional Semantics for Natural Language Processing of Electronic Health Records." Licentiate thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-94344.
Full textDe stora mängder kliniska data som genereras i patientjournalsystem är en underutnyttjad resurs med en enorm potential att förbättra hälso- och sjukvården. Då merparten av kliniska data är i form av ostrukturerad text, vilken är utmanande för datorer att analysera, finns det ett behov av sofistikerade metoder som kan behandla kliniskt språk. Metoder som inte kräver märkta exempel utan istället utnyttjar statistiska egenskaper i datamängden är särskilt värdefulla, med tanke på den begränsade tillgången till annoterade korpusar i den kliniska domänen. System för informationsextraktion och språkbehandling behöver innehålla viss kunskap om semantik. En metod går ut på att utnyttja de distributionella egenskaperna hos språk – mer specifikt, statistisk över hur termer samförekommer – för att modellera den relativa betydelsen av termer i ett högdimensionellt vektorrum. Metoden har använts med framgång i en rad uppgifter för behandling av allmänna språk; dess tillämpning i den kliniska domänen har dock endast utforskats i mindre utsträckning. Genom att tillämpa modeller för distributionell semantik på klinisk text kan semantiska rum konstrueras utan någon tillgång till märkta exempel. Semantiska rum av klinisk text kan sedan användas i en rad medicinskt relevanta tillämpningar. Tillämpningen av distributionell semantik i den kliniska domänen illustreras här i tre användningsområden: (1) synonymextraktion av medicinska termer, (2) tilldelning av diagnoskoder och (3) identifiering av läkemedelsbiverkningar. Det krävs dock att vissa begränsningar eller utmaningar adresseras för att möjliggöra en effektiv tillämpning av distributionell semantik på ett brett spektrum av uppgifter som behandlar språk – både allmänt och, i synnerhet, kliniskt – såsom hur man kan modellera betydelsen av flerordstermer och redogöra för funktionen av negation: ett enkelt sätt att modellera parafrasering och negation i ett distributionellt semantiskt ramverk presenteras och utvärderas. Idén om ensembler av semantisk rum introduceras också; dessa överträffer användningen av ett enda semantiskt rum för synonymextraktion. Den här metoden möjliggör en kombination av olika modeller för distributionell semantik, med olika parameterkonfigurationer samt inducerade från olika korpusar. Detta är inte minst viktigt i den kliniska domänen, då det gör det möjligt att komplettera potentiellt begränsade mängder kliniska data med data från andra, mer lättillgängliga källor. Arbetet påvisar också vikten av att konfigurera dimensionaliteten av semantiska rum, i synnerhet när vokabulären är omfattande, vilket är vanligt i den kliniska domänen.
Valio, Felipe Braunger 1984. "Detecção rápida de legendas em vídeos utilizando o ritmo visual." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275733.
Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Computação.
Resumo: Detecção de textos em imagens é um problema que vem sendo estudado a várias décadas. Existem muitos trabalhos que estendem os métodos existentes para uso em análise de vídeos, entretanto, poucos deles criam ou adaptam abordagens que consideram características inerentes dos vídeos, como as informações temporais. Um problema particular dos vídeos, que será o foco deste trabalho, é o de detecção de legendas. Uma abordagem rápida para localizar quadros de vídeos que contenham legendas é proposta baseada em uma estrutura de dados especial denominada ritmo visual. O método é robusto à detecção de legendas com respeito ao alfabeto utilizado, ao estilo de fontes, à intensidade de cores e à orientação das legendas. Vários conjuntos de testes foram utilizados em nosso experimentos para demonstrar a efetividade do método
Abstract: Detection of text in images is a problem that has been studied for several decades. There are many works that extend the existing methods for use in video analysis, however, few of them create or adapt approaches that consider the inherent characteristics of video, such as temporal information. A particular problem of the videos, which will be the focus of this work, is the detection of subtitles. A fast method for locating video frames containing captions is proposed based on a special data structure called visual rhythm. The method is robust to the detection of legends with respect to the used alphabet, font style, color intensity and subtitle orientation. Several datasets were used in our experiments to demonstrate the effectiveness of the method
Master's degree in Computer Science (Mestre em Ciência da Computação).
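A sketch of the visual rhythm idea under simplified assumptions: one pixel row is sampled from every frame, and a static caption shows up as a temporally stable band that a variance test can find. The frames here are synthetic noise, not real video.

```python
import numpy as np

# A visual rhythm condenses a video into one 2D image by sampling a fixed
# pixel row (or column) from each frame; a static caption then appears as
# a temporally persistent pattern that is cheap to detect.
rng = np.random.default_rng(0)
n_frames, height, width = 120, 90, 160
sample_row = 80  # a row near the bottom, where subtitles usually sit

rhythm = np.empty((n_frames, width), dtype=float)
for t in range(n_frames):
    frame = rng.random((height, width))          # noise = moving content
    if 30 <= t < 90:                             # caption visible
        frame[78:83, 40:120] = 1.0               # bright, static text box
    rhythm[t] = frame[sample_row]

# Low variance along time within a sliding window indicates static content.
win = 15
variance = np.array([rhythm[t:t + win].var(axis=0).mean()
                     for t in range(n_frames - win)])
caption_frames = np.where(variance < 0.05)[0]
print("caption suspected around frames:",
      caption_frames.min(), "-", caption_frames.max() + win)
```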
Civera Saiz, Jorge. "Novel statistical approaches to text classification, machine translation and computer-assisted translation." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/2502.
Civera Saiz, J. (2008). Novel statistical approaches to text classification, machine translation and computer-assisted translation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/2502
Vasireddy, Jhansi Lakshmi. "Applications of Linear Algebra to Information Retrieval." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/71.
Silva, Israel Batista Freitas da. "Representações cache eficientes para índices baseados em Wavelet trees [Cache-efficient representations for Wavelet tree based indexes]." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/21050.
Full textMade available in DSpace on 2017-08-30T19:22:34Z (GMT). No. of bitstreams: 2 license_rdf: 811 bytes, checksum: e39d27027a6cc9cb039ad269a5db8e34 (MD5) Israel Batista Freitas da Silva.pdf: 1433243 bytes, checksum: 5b1ac5501cae385e4811343e1426e6c9 (MD5) Previous issue date: 2016-12-12
Funding: CNPq, FACEPE.
Hoje em dia, há um exponencial crescimento do volume de informação no mundo. Esta explosão cria uma demanda por técnicas mais eficientes de indexação e consulta de dados, uma vez que, para serem úteis, eles precisarão ser manipuláveis. Casamento de padrões se refere à busca de um texto menor (padrão) em um texto muito maior (texto), reportando a quantidade de ocorrências e/ou as localizações das ocorrências. Para tal, pode-se construir uma estrutura chamada índice que pré-processará o texto e permitirá que consultas sejam feitas eficientemente. A eficiência prática de um índice, além da sua eficiência teórica, pode definir o quão utilizado ele será, e isto está diretamente ligado a como ele se comporta nas arquiteturas dos computadores atuais. O principal objetivo deste estudo é analisar o uso da estrutura Wavelet Tree como índice avaliando o impacto da reorganização interna dos seus dados quanto à localidade espacial e, assim propor formas de organização que reduzam efetivamente a quantidade de cache misses ocorridos na execução de operações neste índice. Através de análises empíricas com dados simulados e dados textuais obtidos de dois repositórios públicos, avaliou-se alguns aspectos de cinco tipos de organizações para os dados da estrutura com o objetivo de compará-las quanto ao tempo de execução e quantidade de cache misses ocorridos. Adicionalmente, uma análise teórica da complexidade da quantidade de cache misses ocorridos para operação de consulta de um padrão é descrita para uma das organizações propostas. Dois experimentos realizados sugerem comportamentos assintóticos para duas das organizações analisadas. Um terceiro experimento executado mostra que, para quatro das cinco organizações apresentadas, houve uma sistemática redução na quantidade de cache misses ocorridos para a cache de menor nível. Entretanto a redução de cache misses para cache de menor nível não se refletiu integralmente numa diferença no tempo de execução das operações, tendo sido esta menos significativa, nem na quantidade de cache misses ocorridos na cache de maior nível, onde houveram variações positivas e negativas.Os resultados obtidos permitem concluir que a escolha de uma representação adequada pode acarretar numa melhora significativa de utilização da cache. Diferentemente do modelo teórico, o custo de acesso à memória responde apenas por uma fração do tempo de computação das operações sobre as Wavelet Trees, pelo que a diminuição no número de cache misses não se traduziu integralmente no tempo de execução. No entanto, este fator pode ser crítico em situações mais extremas de utilização de memória.
Today, there is an exponential growth in the volume of information in the world. This increase creates the demand for more efficient indexing and querying techniques, since, to be useful, that data needs to be manageable. Pattern matching means searching for a string (pattern) in a much bigger string (text), reporting the number of occurrences and/or its locations. To do that, we need to build a data structure known as index. This structure will preprocess the text to allow for efficient queries. The adoption of an index depends heavily on its efficiency, and this is directly related to how well it performs on current machine architectures. The main objective of this work is to analyze the Wavelet Tree data structure as an index, assessing the impact of its internal organization with respect to spatial locality, and propose ways to organize its data as to reduce the amount of cache misses incurred by its operations. We performed an empirical analysis using both real and simulated textual data to compare the running time and cache behavior of Wavelet Trees using five different proposals of internal data layout. A theoretical analysis about the cache complexity of a query operation is also presented for the most efficient layout. Two experiments suggest good asymptotic behavior for two of the analyzed layouts. A third experiment shows that for four of the five layouts, there was a systematic reduction in the number of cache misses for the lowest level cache. Despite this, this reduction was not reflected in the runtime, neither in the performance for the highest level cache. The results obtained allow us to conclude that the choice of a suitable layout can lead to a significant improvement in cache usage. Unlike the theoretical model, however, the cost of memory access only accounts for a fraction of the operations’ computation time on the Wavelet Trees, so the decrease in the number of cache misses did not translate fully into gains in the execution time. However, this factor can still be critical in more extreme memory utilization situations.
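For readers unfamiliar with the structure, a minimal pointer-based wavelet tree supporting rank follows; the thesis's layouts flatten these nodes in memory for cache locality, which this purely logical sketch deliberately does not attempt.

```python
class WaveletTree:
    """Minimal pointer-based wavelet tree supporting rank(c, i): the number
    of occurrences of symbol c in text[:i]. Naive Python lists stand in for
    the rank-ready bitmaps a real implementation would use."""

    def __init__(self, text, alphabet=None):
        self.alphabet = sorted(set(text)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:
            self.bits = None          # leaf: a single symbol remains
            return
        mid = len(self.alphabet) // 2
        left_set = set(self.alphabet[:mid])
        self.bits = [0 if c in left_set else 1 for c in text]
        self.left = WaveletTree([c for c in text if c in left_set],
                                self.alphabet[:mid])
        self.right = WaveletTree([c for c in text if c not in left_set],
                                 self.alphabet[mid:])

    def rank(self, c, i):
        if self.bits is None:         # leaf: every position holds c
            return i
        mid = len(self.alphabet) // 2
        if c in self.alphabet[:mid]:
            return self.left.rank(c, self.bits[:i].count(0))
        return self.right.rank(c, self.bits[:i].count(1))

wt = WaveletTree("abracadabra")
print(wt.rank("a", 11))  # 5 occurrences of 'a' in the whole string
print(wt.rank("r", 7))   # 1 occurrence of 'r' in "abracad"
```

Each rank query walks one root-to-leaf path, and it is the memory-access pattern of exactly these walks that the layouts compared in the thesis try to make cache friendly.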
Puigcerver i Pérez, Joan. "A Probabilistic Formulation of Keyword Spotting." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/116834.
[CAT] La detecció de paraules clau (Keyword Spotting, en anglès), aplicada a documents de text manuscrit, té com a objectiu recuperar els documents, o parts d'ells, que siguen rellevants per a una certa consulta (query, en anglès), indicada per l'usuari, dintre d'una gran col·lecció de documents. La temàtica ha recollit un gran interés en els últims 20 anys entre investigadors en Reconeixement de Formes (Pattern Recognition), així com biblioteques i arxius digitals. Aquesta tesi defineix l'objectiu de la detecció de paraules claus a partir d'una perspectiva basada en la Teoria de la Decisió i una formulació probabilística adequada. Més concretament, la detecció de paraules clau es presenta com un cas concret de Recuperació de la Informació (Information Retrieval), on el contingut dels documents és desconegut, però pot ser modelat mitjançant una distribució de probabilitat. A més, la tesi també demostra que, sota les distribucions de probabilitat correctes, el marc de treball desenvolupat condueix a la solució òptima del problema, segons diverses mesures d'avaluació utilitzades tradicionalment en el camp. Després, diferents models estadístics s'utilitzen per representar les distribucions necessàries: Xarxes Neuronal Recurrents i Models Ocults de Markov. Els paràmetres d'aquests són estimats a partir de dades d'entrenament, i les corresponents distribucions són representades mitjançant Transductors d'Estats Finits amb Pesos (Weighted Finite State Transducers). Amb l'objectiu de fer el marc de treball útil per a grans col·leccions de documents, es presenten distints algorismes per construir índexs de paraules a partir dels models probabilístics, tan basats en un lèxic tancat com en un obert. Aquests índexs són molt semblants als utilitzats per motors de cerca tradicionals. A més a més, s'estudia la relació que hi ha entre la formulació probabilística presentada i altres mètodes de gran influència en el camp de la detecció de paraules clau, destacant algunes limitacions dels segons. Finalment, totes les aportacions s'avaluen de forma experimental, no sols utilitzant proves acadèmics estàndard, sinó també en col·leccions amb desenes de milers de pàgines provinents de manuscrits històrics. Els resultats mostren que el marc de treball presentat permet construir sistemes de detecció de paraules clau molt acurats i ràpids, amb una sòlida base teòrica.
[EN] Keyword Spotting, applied to handwritten text documents, aims to retrieve the documents, or parts of them, that are relevant for a query, given by the user, within a large collection of documents. The topic has gained a large interest in the last 20 years among Pattern Recognition researchers, as well as digital libraries and archives. This thesis, first defines the goal of Keyword Spotting from a Decision Theory perspective. Then, the problem is tackled following a probabilistic formulation. More precisely, Keyword Spotting is presented as a particular instance of Information Retrieval, where the content of the documents is unknown, but can be modeled by a probability distribution. In addition, the thesis also proves that, under the correct probability distributions, the framework provides the optimal solution, under many of the evaluation measures traditionally used in the field. Later, different statistical models are used to represent the probability distribution over the content of the documents. These models, Hidden Markov Models or Recurrent Neural Networks, are estimated from training data, and the corresponding distributions over the transcripts of the images can be efficiently represented using Weighted Finite State Transducers. In order to make the framework practical for large collections of documents, this thesis presents several algorithms to build probabilistic word indexes, using both lexicon-based and lexicon-free models. These indexes are very similar to the ones used by traditional search engines. Furthermore, we study the relationship between the presented formulation and other seminal approaches in the field of Keyword Spotting, highlighting some limitations of the latter. Finally, all the contributions are evaluated experimentally, not only on standard academic benchmarks, but also on collections including tens of thousands of pages of historical manuscripts. The results show that the proposed framework and algorithms allow to build very accurate and very fast Keyword Spotting systems, with a solid underlying theory.
Puigcerver I Pérez, J. (2018). A Probabilistic Formulation of Keyword Spotting [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/116834
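A toy illustration of the probabilistic scoring idea: the relevance score of a keyword is the total posterior probability of the transcripts that contain it. The n-best list and its probabilities below are made up; real systems marginalize over word lattices rather than short n-best lists.

```python
# Score of keyword v in line image x: P(R=1 | v, x) = sum of P(t | x) over
# transcripts t containing v. The hypotheses and posteriors are invented.
nbest = [
    ("the indexing of manuscripts", 0.55),
    ("the indexina of manuscripts", 0.25),
    ("he indexing on manuscript",   0.20),
]

def keyword_score(keyword, nbest):
    return sum(p for t, p in nbest if keyword in t.split())

for kw in ["indexing", "manuscripts", "codex"]:
    print(kw, keyword_score(kw, nbest))
# indexing 0.75, manuscripts 0.8, codex 0.0
```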
Zougris, Konstantinos. "Sociological Applications of Topic Extraction Techniques: Two Case Studies." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804982/.
Moens, Marie-Francine. "Automatic indexing and abstracting of document texts." Boston, Mass. [u.a.]: Kluwer Academic Publ., 2000. http://www.loc.gov/catdir/enhancements/fy0820/00020394-d.html.
Dang, Quoc Bao. "Information spotting in huge repositories of scanned document images." Thesis, La Rochelle, 2018. http://www.theses.fr/2018LAROS024/document.
This work aims at developing a generic framework able to produce camera-based applications for information spotting in huge repositories of heterogeneous-content document images via local descriptors. The targeted systems take as input a portion of an image acquired as a query, and return the focused portions of database images that best match the query. We first propose a set of generic feature descriptors for camera-based document image retrieval and spotting systems. Our proposed descriptors comprise SRIF, PSRIF, DELTRIF and SSKSRIF, which are built from the spatial information of the nearest keypoints around a keypoint, where keypoints are extracted from centroids of connected components. From these keypoints, invariant geometrical features are taken into account for the descriptor. SRIF and PSRIF are computed from a local set of m nearest keypoints around a keypoint, while DELTRIF and SSKSRIF fix the way local shape descriptions are combined, without parameters, via a Delaunay triangulation formed from the keypoints extracted from a document image. Furthermore, we propose a framework to compute the descriptors based on the spatial space of dedicated keypoints (e.g. SURF, SIFT or ORB) so that they can deal with heterogeneous-content camera-based document image retrieval and spotting. In practice, a large-scale indexing system with an enormous number of descriptors puts a heavy burden on memory when the descriptors are stored, and their high dimensionality can reduce indexing accuracy. We propose three robust indexing frameworks that can be employed without storing local descriptors in memory, saving memory and speeding up retrieval by discarding distance validation. The randomized clustering tree index inherits from the kd-tree, kmeans-tree and random forest the way K dimensions are selected randomly, combined with the highest-variance dimension at each node of the tree. We also propose a weighted Euclidean distance between two data points that is computed and oriented along the highest-variance dimension. The second proposed method, hashing, relies on an indexing system that employs one simple hash table for indexing and retrieving without storing database descriptors. Besides, we propose an extended hashing-based method for indexing multiple kinds of features coming from multiple layers of the image. Along with the proposed descriptors and indexing frameworks, we propose a simple, robust way to compute the shape orientation of MSER regions so that they can be combined with dedicated descriptors (e.g. SIFT, SURF, ORB, etc.) in a rotation-invariant way. Where descriptors are able to capture neighborhood information around MSER regions, we propose extending MSER regions by increasing the radius of each region; this strategy can also be applied to other detected regions to make descriptors more distinctive. Moreover, we employ the extended hashing-based method for indexing multiple kinds of features from multiple layers of images; this system applies not only to a uniform feature type but also to multiple feature types from separate layers. Finally, to assess the performance of our contributions, and given that no public dataset exists for camera-based document image retrieval and spotting systems, we built a new dataset that has been made freely and publicly available to the scientific community. This dataset contains portions of document images acquired via a camera as queries.
It is composed of three kinds of information: textual content, graphical content and heterogeneous content.
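A hypothetical sketch of the hashing idea described above: descriptors are quantized into coarse keys and only (document, keypoint) references are stored, so the index never keeps the descriptors themselves. The 4-dimensional descriptors are invented stand-ins for SRIF/SIFT-style features.

```python
from collections import defaultdict

def key(desc, step=0.25):
    """Quantize a descriptor into a coarse hash key."""
    return tuple(int(x / step) for x in desc)

index = defaultdict(list)
database = {  # made-up descriptors per document
    "doc1": [(0.10, 0.80, 0.30, 0.55), (0.90, 0.10, 0.20, 0.70)],
    "doc2": [(0.12, 0.78, 0.33, 0.52), (0.40, 0.40, 0.60, 0.10)],
}
for doc, descs in database.items():
    for kp, d in enumerate(descs):
        index[key(d)].append((doc, kp))   # store references only

# Query descriptors vote for database documents; the best-voted document
# (via its matching keypoints) localizes the queried region.
query = [(0.11, 0.79, 0.31, 0.54), (0.41, 0.39, 0.61, 0.12)]
votes = defaultdict(int)
for d in query:
    for doc, kp in index.get(key(d), []):
        votes[doc] += 1
print(max(votes.items(), key=lambda kv: kv[1]))  # ('doc2', 2)
```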
Gzawi, Mahmoud. "Désambiguïsation de l’arabe écrit et interprétation sémantique." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE2006.
This thesis lies at the frontier of the fields of linguistic research and the automatic processing of language. These two fields intersect in the construction of natural language processing tools and industrial applications integrating solutions for the disambiguation and interpretation of texts. A challenging task, briefly approached and applied, arose from the work of the Techlimed company: the automatic analysis of texts written in Arabic. Novel resources have emerged, such as language lexicons and semantic networks, allowing the creation of formal grammars to accomplish this task. An important piece of metadata for text analysis is "what is being said, and what does it mean". The field of computational linguistics offers very diverse and mostly partial methods to allow the computer to answer such questions. The main purpose of this thesis is to introduce and apply the rules of descriptive language grammar in formal languages specific to computer language processing. Beyond the realization of a system for processing and interpreting texts in the Arabic language based on computer modeling, our interest has been devoted to the evaluation of the linguistic phenomena described in the literature and the methods of their formalization in computer science. In all cases, our research was tested and validated in a rigorous experimental framework around several formalisms and computer tools. The experiments concerning the contribution of syntactico-semantic grammar have demonstrated a significant reduction of linguistic ambiguity in the case of a finite-state grammar written in Java and a transformational generative grammar written in Prolog, integrating morphological, syntactic and semantic components. The implementation of our study required the construction of word processing and information retrieval tools. These tools were built by us and are available as open source. Applying our work at large scale was found to require rich and comprehensive semantic resources, so our work has been redirected towards producing such resources through information retrieval and knowledge extraction. The tests for this new perspective were favorable to further research and experimentation.
Pohlídal, Antonín. "Inteligentní emailová schránka [Intelligent email inbox]." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236458.
Wu, Zimin. "A partial syntactic analysis-based pre-processor for automatic indexing and retrieval of Chinese texts." Thesis, Loughborough University, 1992. https://dspace.lboro.ac.uk/2134/13685.
Balgar, Marek. "Vyhledávání informací v české Wikipedii [Information retrieval in the Czech Wikipedia]." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-412831.
Alves, George Marcelo Rodrigues. "RISO-GCT: Determinação do contexto temporal de conceitos em textos [Determining the temporal context of concepts in texts]." Universidade Federal de Campina Grande, 2016. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/469.
Devido ao crescimento constante da quantidade de textos disponíveis na Web, existe uma necessidade de catalogar estas informações que surgem a cada instante. No entanto, trata-se de uma tarefa árdua e na qual seres humanos são incapazes de realizar esta tarefa de maneira manual, tendo em vista a quantidade incontável de dados que são disponibilizados a cada segundo. Inúmeras pesquisas têm sido realizadas no intuito de automatizar este processo de catalogação. Uma vertente de grande utilidade para as várias áreas do conhecimento humano é a indexação de documentos com base nos contextos temporais presentes nestes documentos. Esta não é uma tarefa trivial, pois envolve a análise de informações não estruturadas presentes em linguagem natural, disponíveis nos mais diversos idiomas, dentre outras dificuldades. O objetivo principal deste trabalho é criar uma abordagem capaz de permitir a indexação de documentos, determinando mapas de tópicos enriquecidos com conceitos e as respectivas informações temporais relacionadas. Tal abordagem deu origem ao RISO-GCT (Geração de Contextos Temporais), componente do Projeto RISO (Recuperação da Informação Semântica de Objetos Textuais), que tem como objetivo criar um ambiente de indexação e recuperação semântica de documentos possibilitando uma recuperação mais acurada. O RISO-GCT utilizou os resultados de um módulo preliminar, o RISO-TT (Temporal Tagger), responsável por etiquetar informações temporais presentes em documentos e realizar o processo de normalização das expressões temporais encontradas. Deste processo foi aperfeiçoada a abordagem responsável pela normalização de expressões temporais, para que estas possam ser manipuladas mais facilmente na determinação dos contextos temporais. . Foram realizados experimentos para avaliar a eficácia da abordagem proposta nesta pesquisa. O primeiro, com o intuito de verificar se o Topic Map previamente criado pelo RISO-IC (Indexação Conceitual), foi enriquecido com as informações temporais relacionadas aos conceitos de maneira correta e o segundo, para analisar a eficácia da abordagem de normalização das expressões temporais extraídas de documentos. Os experimentos concluíram que tanto o RISO-GCT, quanto o RISO-TT incrementado obtiveram resultados superiores aos concorrentes.
Due to the constant growth of the number of texts available on the Web, there is a need to catalog that information which appear at every moment. However, it is an arduous task in which humans are unable to perform this task manually, given the increased amount of data available at every second. Numerous studies have been conducted in order to automate the cataloging process. A research line with utility for various areas of human knowledge is the indexing of documents based on temporal contexts present in these documents. This is not a trivial task, as it involves the analysis of unstructured information present in natural language, available in several languages, among other difficulties. The main objective of this work is to create a model to allow indexing of documents, creating topic maps enriched with the concepts in text and their related temporal information. This approach led to the RISO-GCT (Temporal Contexts Generation), a part of RISO Project (Semantic Information Retrieval on Text Objects), which aims to create a semantic indexing environment and retrieval of documents, enabling a more accurate recovery. RISO-GCT uses the results of a preliminary module, the RISO-TT (Temporal Tagger) responsible the labeling temporal information contained in documents and carrying out the process of normalization of temporal expressions. Found. In this module the normalization of temporal expressions has been improved, in order allow a richer temporal context determination. Experiments were conducted to evaluate the effectiveness of the approach proposed a in this research. The first, in order to verify that the topic map previously created by RISO-IC has been correctly enriched with temporal information related to the concepts correctly, and the second, to analyze the effectiveness of the normalization of expressions extracted from documents. The experiments concluded that both the RISO-GCT, as the RISO-TT, which was evolved during this work, obtained better results than similar tools.
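A toy normalizer in the spirit of the temporal tagging and normalization step described above; the single pattern and the example are illustrative assumptions, not the actual RISO-TT rules.

```python
import re

# Find date expressions and map them to a canonical ISO-8601 value.
MONTHS = {"january": 1, "february": 2, "march": 3, "april": 4, "may": 5,
          "june": 6, "july": 7, "august": 8, "september": 9,
          "october": 10, "november": 11, "december": 12}

def normalize_dates(text):
    spans = []
    pattern = (r"\b(\d{1,2}) (january|february|march|april|may|june|july|"
               r"august|september|october|november|december) (\d{4})\b")
    for m in re.finditer(pattern, text, flags=re.IGNORECASE):
        day, month, year = m.group(1), MONTHS[m.group(2).lower()], m.group(3)
        spans.append((m.group(0), f"{year}-{month:02d}-{int(day):02d}"))
    return spans

print(normalize_dates("The treaty was signed on 5 october 1988 in Brasilia."))
# [('5 october 1988', '1988-10-05')]
```

Once expressions are in a canonical form like this, attaching them to the concepts of a topic map, as the RISO-GCT approach does, becomes a matter of associating normalized values with indexed concepts.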
Wang, Juo-Wen, and 汪若文. "Automatic Classification of Text Documents by Using Latent Semantic Indexing." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/09421240911724157604.
國立交通大學 (National Chiao Tung University), 管理學院碩士在職專班資訊管理組 (Information Management Group, Executive Master's Program, College of Management), ROC academic year 92 (2003).
Search and browse are both important tasks in information retrieval. Search provides a way to find information rapidly, but relying on words makes it hard to deal with the problems of synonym and polysemy. Besides, users sometimes cannot provide suitable query and cannot find the information they really need. To provide good information services, the service of browse through good classification mechanism as well as information search are very important. There are two steps in classifying documents. The first is to present documents in suitable mathematical forms. The second is to classify documents automatically by using suitable classification algorithms. Classification is a task of conceptualization. Presenting documents in conventional vector space model cannot avoid relying on words explicitly. Latent semantic indexing (LSI) is developed to find the semantic concept of document, which may be suitable for the classification of documents. This thesis is intended to study the feasibility and effect of the classification of text documents by using LSI as the presentation of documents, and using both centroid vector and k-NN as the classification algorithms. The results are compared to those of the vector space model. This study deals with the problem of one-category classification. The results show that automatic classification of text documents by using LSI along with suitable classification algorithms is feasible. But the accuracy of classification by using LSI is not as good as by using vector space model. The effect of applying LSI on multi-category classification and the effect of combining LSI with other classification algorithms need further studies.
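A compact sketch of the pipeline the abstract describes: build a term-document matrix, project it to a low-rank latent space with the SVD (the core of LSI), then assign a new document to the class with the most similar centroid. The counts below are toy data, not the thesis's corpus.

```python
import numpy as np

terms = ["stock", "market", "trade", "game", "team", "score"]
docs = np.array([  # columns = documents, rows = term frequencies
    [3, 2, 0, 0],
    [2, 3, 0, 1],
    [1, 2, 0, 0],
    [0, 0, 3, 2],
    [0, 1, 2, 3],
    [0, 0, 1, 2],
], dtype=float)
labels = ["finance", "finance", "sports", "sports"]

k = 2                                    # latent dimensionality
U, S, Vt = np.linalg.svd(docs, full_matrices=False)
Uk, Sk = U[:, :k], S[:k]
doc_vecs = Vt[:k].T                      # documents in the latent space

def fold_in(term_counts):                # project a new document: q U_k S_k^-1
    return term_counts @ Uk / Sk

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

centroids = {c: doc_vecs[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
             for c in set(labels)}

query = fold_in(np.array([1, 1, 2, 0, 0, 0], dtype=float))  # trade-heavy text
print(max(centroids, key=lambda c: cosine(query, centroids[c])))  # finance
```

Because the comparison happens in the latent space rather than on raw words, two documents can be judged similar even when they share few exact terms, which is the synonymy advantage the abstract discusses.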
Golynski, Alexander. "Upper and Lower Bounds for Text Indexing Data Structures." Thesis, 2007. http://hdl.handle.net/10012/3509.
Maaß, Moritz G. "Analysis of algorithms and data structures for text indexing." 2006. http://d-nb.info/985174366/34.
Chang, Yu-Jen, and 張佑任. "A Research of Performance Evaluation of Mandarin Chinese Full-Text Information Retrieval--Full-Text Scan Model vs. Cluster Indexing Model." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/31589747547714555995.
南華大學 (Nanhua University), 資訊管理學系碩士班 (Master's Program, Department of Information Management), ROC academic year 90 (2001).
Full-Text Information Retrieval is becoming an interdisciplinary interest. Mandarin Chinese Full-Text Information Retrieval is facing more basic difficulties than English context because of research lag and language nature. Lack of an objective test collection and a standard effectiveness evaluation for information retrieval experiments is the fundamental issue for Mandarin Chinese Full-Text information retrieval. In this thesis, we will introduce two different systems, including the Chinese Text Processor (CTP) developed by Academia Sinica in 1996, and the Cluster Indexing Model (CIM) developed by Huang Yun-Long in 1997. Also we will use same corpus (documents set), to evaluate system performance. Concerning the research status in Chinese, this research will have three contributions. First, analysis the fitness method of Full-Text Information Retrieval in same corpus or documents set. Second, developing a mature Cluster Indexing Model as the fundamental of advance application researches. Finally, this project will construct test collections and a standard effectiveness evaluation for Full-Text Information Retrieval researches in Chinese. Involving with medicine of Children’s Daily News (502 documents) and 21 queries. Under a series of experiments, the following conclusions are discovered: 1.The average recall of CTP is 99.02%, and its average precision is 17.72%. 2.In automatic term segmentation methods, under index dimension 100 and similarity threshold 0.3: (1)The recall of CIM-IDF is 80.73%, and the precision is 45.09%. (2)The recall of CIM-TF is 65.97%, and the precision is 43.52%. 3.In manual term segmentation methods, under index dimension 100 and similarity threshold 0.3: (1)The recall of CIM-IDF is 82.81%, and the precision is 47.11%. (2)The recall of CIM—TF is 64.81%, and the precision is 42.72%. 4.According to the results of above experiments, the following conclusions are discovered: (1)The performance of CIM-IDF is better than CTP in automatic and manual term segmentation. (2)The performance of CIM-IDF is better than CIM—TF in automatic and manual term segmentation. (3)In CIM-IDF, when index dimension greater than 80, the results show that the performance of automatic and manual term segmentation are similar. It showed clearly that automatic term segmentation methods could substitute for manual. Many researchers have devoted to developing information retrieval systems for a long time. They are find new ways of doing things from different theories and improve system of performance, but not any one system can by satisfy. However, The IR system should support different retrieval models, and relevance feedback can use to differ model in the future. Besides, research has involved many topics for discussion in Mandarin Chinese Full-Text information retrieval. However, it was lack of effectiveness evaluation in diverse information retrieval. If research could construct a standard of evaluation environment (ex. large corpus, query, relevance judgment, and a standard of evaluation), it will improve system of performance to contributive.
Murfi, Hendri. "Machine learning for text indexing: concept extraction, keyword extraction and tag recommendation." 2010. http://d-nb.info/1009119486/34.
Qiu, Jun-Feng, and 邱俊逢. "Implementation of Web-based Files Management System and Network Spy Agent With Full-Text Indexing Capability." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/94513863253063555474.
國立高雄第一科技大學 (National Kaohsiung First University of Science and Technology), 電腦與通訊工程所 (Institute of Computer and Communication Engineering), ROC academic year 91 (2002).
In the Internet environment, more and more FTP servers have been widely adopted. In file management especially, as the number of stored files grows, an FTP server that cannot support keyword search limits management. In this study, we propose a web-based file server that uses ActiveX Control technology to implement file management in the browser. We also used Microsoft's Index Server to build a full-text retrieval function, so users can search for files by keyword. The system uses ASP, JavaScript, CSS, ADSI, ActiveX, SQL, VB, the Windows API, Index Server, IIS, and so on. In addition, we designed a network spy agent server with the above techniques; the agent searches FTP servers for files matching a user's keywords, requiring less manpower and time. After the agent retrieves files, the results are saved on our file management server, and the user can decide whether to keep each file.
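A keyword search service like the one described rests on an inverted index mapping words to the files that contain them. A minimal sketch follows; it is a generic illustration, not Microsoft Index Server's actual mechanism.

```python
from collections import defaultdict

files = {  # toy file contents
    "report.txt": "quarterly sales report for the server team",
    "notes.txt": "meeting notes about the ftp server upgrade",
    "todo.txt": "buy coffee and review sales numbers",
}

index = defaultdict(set)
for name, content in files.items():
    for word in content.lower().split():
        index[word].add(name)          # word -> files containing it

def search(*keywords):
    """Return files containing every keyword (AND semantics)."""
    sets = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

print(search("server"))           # {'report.txt', 'notes.txt'}
print(search("sales", "report"))  # {'report.txt'}
```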
Huang, Yun-Long, and 黃雲龍. "A Theoretic Research of Cluster Indexing for Mandarin Chinese Full Text Document--The Construction of Vector Space Model." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/31705905316420373533.
Gupta, Ankur. "Succinct Data Structures." Diss., 2007. http://hdl.handle.net/10161/434.
"Automatic index generation for the free-text based database." Chinese University of Hong Kong, 1992. http://library.cuhk.edu.hk/record=b5887040.
Full textThesis (M.Phil.)--Chinese University of Hong Kong, 1992.
Includes bibliographical references (leaves 183-184).
Chapter One: Introduction
Chapter Two: Background knowledge and linguistic approaches of automatic indexing
2.1 Definition of index and indexing
2.2 Indexing methods and problems
2.3 Automatic indexing and human indexing
2.4 Different approaches of automatic indexing
2.5 Example of semantic approach
2.6 Example of syntactic approach
2.7 Comments on semantic and syntactic approaches
Chapter Three: Rationale and methodology of automatic index generation
3.1 Problems caused by natural language
3.2 Usage of word frequencies
3.3 Brief description of rationale
3.4 Automatic index generation
3.4.1 Training phase
3.4.1.1 Selection of training documents
3.4.1.2 Control and standardization of variants of words
3.4.1.3 Calculation of associations between words and indexes
3.4.1.4 Discarding false associations
3.4.2 Indexing phase
3.4.3 Example of automatic indexing
3.5 Related researches
3.6 Word diversity and its effect on automatic indexing
3.7 Factors affecting performance of automatic indexing
3.8 Application of semantic representation
3.8.1 Problem of natural language
3.8.2 Use of concept headings
3.8.3 Example of using concept headings in automatic indexing
3.8.4 Advantages of concept headings
3.8.5 Disadvantages of concept headings
3.9 Correctness prediction for proposed indexes
3.9.1 Example of using index proposing rate
3.10 Effect of subject matter on automatic indexing
3.11 Comparison with other indexing methods
3.12 Proposal for applying Chinese medical knowledge
Chapter Four: Simulations of automatic index generation
4.1 Training phase simulations
4.1.1 Simulation of association calculation (word diversity uncontrolled)
4.1.2 Simulation of association calculation (word diversity controlled)
4.1.3 Simulation of discarding false associations
4.2 Indexing phase simulation
4.3 Simulation of using concept headings
4.4 Simulation for testing performance of predicting index correctness
4.5 Summary
Chapter Five: Real case study in database of Chinese Medicinal Material Research Center
5.1 Selection of real documents
5.2 Case study one: Overall performance using real data
5.2.1 Sample results of automatic indexing for real documents
5.3 Case study two: Using multi-word terms
5.4 Case study three: Using concept headings
5.5 Case study four: Prediction of proposed index correctness
5.6 Case study five: Use of (Σ ΔRij) Fi to determine false association
5.7 Case study six: Effect of word diversity
5.8 Summary
Chapter Six: Conclusion
Appendix A: List of stopwords
Appendix B: Index terms used in case studies
References
Tomeš, Jiří. "Indexace elektronických dokumentů a jejich částí [Indexing of electronic documents and their parts]." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-352314.
Fishbein, Jonathan Michael. "Integrating Structure and Meaning: Using Holographic Reduced Representations to Improve Automatic Text Classification." Thesis, 2008. http://hdl.handle.net/10012/3819.
Cerdeirinha, João Manuel Macedo. "Recuperação de imagens digitais com base no conteúdo: estudo na Biblioteca de Arte e Arquivos da Fundação Calouste Gulbenkian [Content-based digital image retrieval: a study at the Calouste Gulbenkian Foundation Art Library and Archives]." Master's thesis, 2019. http://hdl.handle.net/10362/91474.
Full textThe massive growth of multimedia data on the Internet and the emergence of new sharing platforms created major challenges for information retrieval. The limitations of text-based searches for this type of content have led to the development of a content-based information retrieval approach that has received increasing attention in recent decades. Taking into account the research carried out in this area, and digital images being the focus of this research, concepts and techniques associated with this approach are explored through a theoretical survey that reports the evolution of information retrieval and the importance that this subject has for Information Management and Curation. In the context of the systems that have been developed using automatic indexing, the various applications of this type of process are indicated. Available CBIR tools are also identified for a case study of the application of this type of image retrieval in the context of the Art Library and Archives of the Calouste Gulbenkian Foundation and the photographic collections that it holds in its resources, considering the particularities of the institution to which they belong. For the intended demonstration and according to the established criteria, online CBIR tools were initially used and, in the following phase, locally installed software was selected to search and retrieve in a specific collection. Through this case study, the strengths and weaknesses of content-based image retrieval are attested against the more traditional approach based on textual metadata currently in use in these collections. Taking into consideration the needs of users of the systems in which these digital objects are indexed, combining these techniques may lead to more satisfactory results.
Seo, Eun-Gyoung. "An experiment in automatic indexing with Korean texts: a comparison of syntactico-statistical and manual methods." 1993. http://books.google.com/books?id=jTlkAAAAMAAJ.