Dissertations / Theses on the topic 'Cross lingual information retrieval'

Consult the top 50 dissertations / theses for your research on the topic 'Cross lingual information retrieval.'


1

Liu, Qing. "A Neural Approach to Cross-Lingual Information Retrieval." Research Showcase @ CMU, 2018. http://repository.cmu.edu/theses/135.

Full text
Abstract:
With the rapid growth of worldwide information accessibility, cross-language information retrieval (CLIR) has become a prominent concern for search engines. Traditional CLIR technologies require special-purpose components, high-quality translation knowledge (e.g. machine-readable dictionaries or machine translation systems), and careful tuning to achieve high ranking performance. With the help of a neural network architecture, however, it is possible to solve the CLIR problem without extra tuning or special components. This work proposes bilingual training, a neural CLIR approach that automatically learns translation relationships from noisy translation knowledge. External sources of translation knowledge are used to generate bilingual training data, which is then fed into a kernel-based neural ranking model. During end-to-end training, word embeddings are tuned to preserve translation relationships between bilingual word pairs and are also tailored for the ranking task. Our experiments show that the bilingual training approach outperforms traditional CLIR techniques given the same external translation knowledge source, and that it can yield ranking results as good as those of a monolingual information retrieval system. We investigate the source of this effectiveness by analyzing the patterns of the trained word embeddings. We also explore methods to further improve performance: cleaning the training data by removing ambiguous training queries, learning the relationship between training-set size and model performance to see whether more training data helps, and investigating the effect of text-transforms applied to English queries in the training data. Lastly, we design an experiment that analyzes the quality of test-query translation, to quantify model performance in a realistic scenario where the model takes manually written English queries as input.
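The kernel-based neural ranking referenced in this abstract is in the spirit of K-NRM-style kernel pooling, which turns a matrix of query-document embedding similarities into soft-match features. A minimal sketch, assuming cosine similarities from a shared bilingual embedding space are already computed; the function names and kernel settings are illustrative, not the thesis's exact model:

```python
import math

def kernel_pooling(sim_matrix, mus, sigma=0.1):
    """Turn a query-by-document term similarity matrix into soft-TF
    ranking features, one per RBF kernel centred at mu.

    sim_matrix[i][j] is the cosine similarity between query term i and
    document term j; in the cross-lingual setting both embeddings live
    in a shared bilingual space, so translation pairs score high.
    """
    features = []
    for mu in mus:
        soft_tf = 0.0
        for row in sim_matrix:
            # Count document terms that softly match this query term
            # at similarity level mu.
            count = sum(math.exp(-(s - mu) ** 2 / (2 * sigma ** 2)) for s in row)
            soft_tf += math.log1p(count)  # log-scaled per-query-term match count
        features.append(soft_tf)
    return features  # fed to a learned ranking layer in K-NRM-style models
```

An exact-match kernel (mu = 1.0) recovers something close to classic term frequency, while kernels centred at lower similarities credit softer, translation-like matches.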
2

陸穎剛 and Wing-kong Luk. "Concept space approach for cross-lingual information retrieval." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B30147724.

Full text
3

Luk, Wing-kong. "Concept space approach for cross-lingual information retrieval /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2275345X.

Full text
4

Boynuegri, Akif. "Cross-lingual Information Retrieval On Turkish And English Texts." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611903/index.pdf.

Full text
Abstract:
In this thesis, cross-lingual information retrieval (CLIR) approaches are comparatively evaluated for Turkish and English texts. As a complementary study, knowledge-based methods for word sense disambiguation (WSD), one of the most important components of CLIR, are compared for Turkish words. Query translation and sense-indexing-based CLIR approaches are used in this study. In the query translation approach, we use automatic and manual word sense disambiguation methods together with the Google translation service during translation of queries. In the sense-indexing-based approach, documents are indexed according to the meanings of words rather than the words themselves, and documents are likewise retrieved according to the meanings of the query words. During identification of the intended meaning of query terms, manual and automatic word sense disambiguation methods are used and compared. Knowledge-based WSD methods that use different gloss-enrichment techniques are compared for Turkish words. The Turkish WordNet is used as the primary knowledge base, and the English WordNet and Turkish Wikipedia are employed as enrichment resources; meanings of words are identified more precisely by using the semantic relations defined in the WordNets and Turkish Wikipedia. Also, during calculation of the semantic relatedness of senses, the cosine similarity metric is used as an alternative to the word-overlap count, and its effects are observed for each WSD method and knowledge base.
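The comparison this abstract draws between word-overlap counts and cosine similarity for scoring the relatedness of sense glosses can be sketched as follows; this is illustrative only, and the whitespace tokenisation is an assumption rather than the thesis's exact setup:

```python
import math
from collections import Counter

def overlap_score(gloss_a, gloss_b):
    # Classic Lesk-style relatedness: number of distinct shared words.
    return len(set(gloss_a.split()) & set(gloss_b.split()))

def cosine_score(gloss_a, gloss_b):
    # Cosine over bag-of-words term-frequency vectors; unlike raw
    # overlap, this normalises for gloss length.
    a, b = Counter(gloss_a.split()), Counter(gloss_b.split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

The normalisation is the point of the comparison: a long, enriched gloss cannot win on raw overlap alone once lengths are divided out.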
5

Wang, Xinkai. "Chinese-English cross-lingual information retrieval in biomedicine using ontology-based query expansion." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/chineseenglish-crosslingual-information-retrieval-in-biomedicine-using-ontologybased-query-expansion(1b7443d3-3baf-402b-83bb-f45e78876404).html.

Full text
Abstract:
In this thesis, we propose a new approach to Chinese-English biomedical cross-lingual information retrieval (CLIR) using query expansion based on the eCMeSH Tree, a Chinese-English ontology extended from the Chinese Medical Subject Headings (CMeSH) Tree. The CMeSH Tree is not designed for information retrieval (IR): it includes only heading terms and has no term-weighting scheme for them. We therefore design an algorithm that employs rule-based parsing combined with the C-value term-extraction algorithm and a mutual-information-based filtering technique to extract Chinese synonyms for the corresponding heading terms, and we develop a term-weighting mechanism. Following the hierarchical structure of CMeSH, we extend the CMeSH Tree to the eCMeSH Tree with the synonymous terms and their weights, and we propose an algorithm that implements CLIR by expanding queries with eCMeSH Tree terms. To evaluate the retrieval improvements obtained from our approach, the results of query expansion based on the eCMeSH Tree are compared individually with the results of query expansion using CMeSH Tree terms, query expansion using pseudo-relevance feedback, and document translation; we also evaluate combinations of these three approaches. This study also investigates the factors that affect CLIR performance, including the stemming algorithm, the retrieval model, and word segmentation.
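The C-value term-extraction algorithm mentioned in this abstract scores a candidate term by its length and frequency while discounting occurrences inside longer candidate terms. A toy sketch of the scoring formula; the whitespace tokenisation and example terms are assumptions for illustration (the thesis applies the method to Chinese text with its own parsing):

```python
import math

def c_value(term_freqs, containers):
    """C-value termhood score (after Frantzi & Ananiadou).

    term_freqs: candidate term -> corpus frequency.
    containers: candidate term -> list of longer candidates containing it.
    Frequency inside longer candidates is discounted, so a nested term
    is not over-credited for appearances of the terms that contain it.
    Candidates are assumed multiword (log2 of a 1-word term is 0).
    """
    scores = {}
    for term, freq in term_freqs.items():
        longer = containers.get(term, [])
        if longer:
            freq = freq - sum(term_freqs[t] for t in longer) / len(longer)
        scores[term] = math.log2(len(term.split())) * freq
    return scores
```

For example, if "basal cell" occurs 10 times but 4 of those are inside "basal cell carcinoma", only the remaining 6 occurrences count toward its termhood.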
6

Ahmed, Farag [Verfasser], and Andreas [Akademischer Betreuer] Nürnberger. "Meaning refinement to improve cross-lingual information retrieval / Farag Ahmed. Betreuer: Andreas Nürnberger." Magdeburg : Universitätsbibliothek, 2012. http://d-nb.info/1047596040/34.

Full text
7

Ahmed, Farag [Verfasser], and Andreas [Akademischer Betreuer] Nürnberger. "Meaning refinement to improve cross-lingual information retrieval / Farag Ahmed. Betreuer: Andreas Nürnberger." Magdeburg : Universitätsbibliothek, 2012. http://nbn-resolving.de/urn:nbn:de:gbv:ma9:1-730.

Full text
8

Tang, Ling-Xiang. "Link discovery for Chinese/English cross-language web information retrieval." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/58416/1/Ling-Xiang_Tang_Thesis.pdf.

Full text
Abstract:
Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, pages in different languages are rarely cross-linked, except for direct equivalent pages on the same subject. This poses serious difficulties for users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-language link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study focuses specifically on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed, comprising topics, document collections, a gold-standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation.
With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated to achieve high-precision English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia, as examined in experiments on automatic generation of cross-lingual links carried out as part of the study. The overall major contribution of this thesis is a standard evaluation framework for cross-lingual link discovery research, which helps benchmark the performance of various CLLD systems and identify good CLLD approaches. The evaluation methods and framework described in this thesis were used to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
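The n-gram mutual information segmentation idea in contribution 1) can be illustrated with bigram pointwise mutual information: adjacent characters that co-occur far more often than chance likely belong to the same word, while a low score suggests a boundary. A toy sketch (Latin characters stand in for Chinese; the thesis's actual statistics and thresholds may differ):

```python
import math
from collections import Counter

def boundary_pmi(corpus):
    """Pointwise mutual information for each adjacent character pair.

    High PMI: the pair co-occurs far more than chance, so the two
    characters likely belong to the same word; low PMI suggests a
    word boundary between them.
    """
    chars, bigrams = Counter(), Counter()
    for sent in corpus:
        chars.update(sent)
        bigrams.update(sent[i:i + 2] for i in range(len(sent) - 1))
    n_c, n_b = sum(chars.values()), sum(bigrams.values())
    return {
        bg: math.log((f / n_b) / ((chars[bg[0]] / n_c) * (chars[bg[1]] / n_c)))
        for bg, f in bigrams.items()
    }

def segment(sentence, pmi, threshold=0.0):
    # Cut wherever the adjacent-pair PMI falls below the threshold
    # (unseen pairs are treated as certain boundaries).
    pieces, start = [], 0
    for i in range(len(sentence) - 1):
        if pmi.get(sentence[i:i + 2], float("-inf")) < threshold:
            pieces.append(sentence[start:i + 1])
            start = i + 1
    pieces.append(sentence[start:])
    return pieces
```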
9

Asian, Jelita. "Effective Techniques for Indonesian Text Retrieval." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080110.084651.

Full text
Abstract:
The Web is a vast repository of data, and information on almost any subject can be found with the aid of search engines. Although the Web is international, the majority of research on finding information has focused on languages such as English and Chinese. In this thesis, we investigate information retrieval techniques for Indonesian. Although Indonesia is the fourth most populous country in the world, little attention has been given to the search of Indonesian documents. Stemming is the process of reducing morphological variants of a word to a common stem form. Previous research has shown that stemming is language-dependent. Although several stemming algorithms have been proposed for Indonesian, there is no consensus on which gives better performance. We empirically explore these algorithms, showing that even the best algorithm still has scope for improvement. We propose novel extensions to this algorithm and develop a new Indonesian stemmer, and show that these can improve stemming correctness by up to three percentage points; our approach makes less than one error in thirty-eight words. We propose a range of techniques to enhance the performance of Indonesian information retrieval: stopping; sub-word tokenisation; identification of proper nouns; and modifications to existing similarity functions. Our experiments show that many of these techniques can increase retrieval performance, with the highest increase achieved when we tokenise words into grams of size five. We also present an effective method for identifying the language of a document; this allows various information retrieval techniques to be applied selectively depending on the language of the target documents. We also address the problem of automatic creation of parallel corpora --- collections of documents that are direct translations of each other --- which are essential for cross-lingual information retrieval tasks.
Well-curated parallel corpora are rare, and for many languages, such as Indonesian, do not exist at all. We describe algorithms that we have developed to automatically identify parallel documents for Indonesian and English. Unlike most current approaches, which consider only the context and structure of the documents, our approach is based on the document content itself. Our algorithms do not make any prior assumptions about the documents, and are based on the Needleman-Wunsch algorithm for global alignment of protein sequences. Our approach works well in identifying Indonesian-English parallel documents, especially when no translation is performed. It can increase the separation value, a measure to discriminate good matches of parallel documents from bad matches, by approximately ten percentage points. We also investigate the applicability of our identification algorithms for other languages that use the Latin alphabet. Our experiments show that, with minor modifications, our alignment methods are effective for English-French, English-German, and French-German corpora, especially when the documents are not translated. Our technique can increase the separation value for the European corpus by up to twenty-eight percentage points. Together, these results provide a substantial advance in understanding techniques that can be applied for effective Indonesian text retrieval.
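The Needleman-Wunsch global alignment that these document-identification algorithms build on can be sketched as the classic dynamic programme; the scoring parameters here are illustrative defaults, not the thesis's tuned values:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of two sequences via the classic dynamic
    programme; a and b can be strings or token lists (the thesis aligns
    document content rather than protein residues)."""
    score = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        score[i][0] = i * gap  # aligning a prefix of a against nothing
    for j in range(len(b) + 1):
        score[0][j] = j * gap
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]  # higher means more alignable sequences
```

A pair of documents whose token sequences align with a high global score is a candidate parallel pair; the separation value mentioned above measures how far good candidates' scores sit from bad ones.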
10

Saad, Motaz. "Fouille de documents et d'opinions multilingue." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0003/document.

Full text
Abstract:
The aim of this thesis is to study sentiments in comparable documents. First, we collect English, French and Arabic comparable corpora from Wikipedia and Euronews, and we align each corpus at the document level. We further gather English-Arabic news documents from local and foreign news agencies; the English documents are collected from the BBC website and the Arabic documents from the Al-Jazeera website. Second, we present a cross-lingual document similarity measure to automatically retrieve and align comparable documents. Then, we propose a cross-lingual sentiment annotation method to label source and target documents with sentiments. Finally, we use statistical measures to compare the agreement of sentiments between the source and target documents of each comparable pair. The methods presented in this thesis are language-independent and can be applied to any language pair.
11

Pollettini, Juliana Tarossi. "Auxílio na prevenção de doenças crônicas por meio de mapeamento e relacionamento conceitual de informações em biomedicina." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/95/95131/tde-24042012-223141/.

Full text
Abstract:
Genomic medicine has suggested that exposure to risk factors from conception onwards may influence gene expression and consequently induce the development of chronic diseases in adulthood. Scientific papers reporting these discoveries indicate that epigenetics must be exploited to prevent diseases of high prevalence, such as cardiovascular diseases, diabetes and obesity. The large volume of scientific literature burdens health care professionals who want to stay current, since searches for accurate information become complex and time-consuming. Some computational techniques can support the management of large biomedical information repositories and the discovery of knowledge. This study presents a framework to support surveillance systems that alert health professionals to human development problems by retrieving scientific papers that relate chronic diseases to risk factors detected in a patient's clinical record. As a contribution, healthcare professionals will be able to establish, with the family, a routine that provides the best conditions for growth. According to Butte (2008), the effective transformation of results from biomedical research into knowledge that actually improves public health has been considered an important domain of informatics, called translational bioinformatics. Since chronic diseases are a serious health problem worldwide and lead the causes of mortality, accounting for 60% of all deaths, this investigation may enable results from bioinformatics research to directly benefit public health.
12

Saad, Motaz. "Fouille de documents et d'opinions multilingue." Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0003.

Full text
13

Abusalah, Mustafa A. "Cross language information retrieval using ontologies." Thesis, University of Sunderland, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.505050.

Full text
Abstract:
The basic idea behind a Cross-Language Information Retrieval (CLIR) system is to retrieve documents in a language different from that of the query. Translation is therefore needed before query and document terms can be matched, and this translation process tends to reduce retrieval effectiveness compared with monolingual information retrieval systems. This research introduces a new CLIR approach: a system based on multilingual Arabic/English ontologies, in which the ontology is used for query expansion and translation. The Arabic and English ontologies are mapped using a unique automatic ontology mapping tool, also introduced in this study. The research addresses the lexical ambiguity problems caused by erroneous translations; to prevent them, the study develops a CLIR system based on a multilingual ontology, creating a mapping that resolves the lexical ambiguity. The study also uses the ontology's semantic relations to expand the query into a better-formulated one and obtain better results. Finally, a weighting algorithm is applied to the result set of the proposed system, and results are compared to a state-of-the-art baseline CLIR system that uses a dictionary as its translation base. The CLIR system was implemented in the travel domain, and two ontologies were developed, together with a unique tool to map them. The experimental work described consists of the design, development, and evaluation of the proposed CLIR system. The evaluation demonstrates that its retrieval effectiveness outperformed the baseline system in two human-centred experiments: relevancy judgments were measured, and the results indicated that the proposed system is more effective than the baseline.
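Ontology-based query expansion of the kind this abstract describes can be sketched with a plain dictionary standing in for the travel-domain ontology; the structure, field names, and example terms are assumptions for illustration, not the thesis's implementation:

```python
def expand_query(query_terms, ontology):
    """Expand each query term with synonyms and semantically related
    terms drawn from an ontology (a plain dict stands in for it here)."""
    expanded = []
    for term in query_terms:
        expanded.append(term)
        entry = ontology.get(term, {})
        expanded.extend(entry.get("synonyms", []))
        expanded.extend(entry.get("related", []))
    seen = set()  # deduplicate while preserving order
    return [t for t in expanded if not (t in seen or seen.add(t))]
```

In a bilingual ontology the synonym and related-term lists can hold target-language terms, so the same traversal performs both expansion and translation.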
14

Gupta, Parth Alokkumar. "Cross-view Embeddings for Information Retrieval." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/78457.

Full text
Abstract:
In this dissertation, we deal with cross-view tasks related to information retrieval using embedding methods. We study existing methodologies and propose new methods to overcome their limitations. We formally introduce the concept of mixed-script IR, which deals with the challenges faced by an IR system when a language is written in different scripts because of various technological and sociological factors. Mixed-script terms are represented by a small and finite feature space comprised of character n-grams. We propose the cross-view autoencoder (CAE) to model such terms in an abstract space, and CAE provides state-of-the-art performance. We study a wide variety of models for cross-language information retrieval (CLIR) and propose a model based on compositional neural networks (XCNN) which overcomes the limitations of existing methods and achieves the best results for many CLIR tasks such as ad-hoc retrieval, parallel sentence retrieval and cross-language plagiarism detection. We empirically test the proposed models for these tasks on publicly available datasets and present the results with analyses. We also explore an effective method to incorporate contextual similarity for lexical selection in machine translation: concretely, we investigate a feature based on the context available in the source sentence, calculated using deep autoencoders. The proposed feature exhibits statistically significant improvements over strong baselines for English-to-Spanish and English-to-Hindi translation tasks. Finally, we explore methods to evaluate the quality of autoencoder-generated representations of text data and analyse their architectural properties. For this, we propose two metrics based on the reconstruction capabilities of autoencoders: the structure preservation index (SPI) and the similarity accumulation index (SAI). We also introduce the concept of critical bottleneck dimensionality (CBD), below which structural information is lost, and present analyses linking CBD to language perplexity.
Gupta, PA. (2017). Cross-view Embeddings for Information Retrieval [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/78457
THESIS
APA, Harvard, Vancouver, ISO, and other styles
15

Wang, Jianqiang. "Matching meaning for cross-language information retrieval." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/3212.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Library & Information Services. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
16

Nic, Gearailt Donnla Brighid. "Dictionary characteristics in cross-language information retrieval." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.619885.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Nyman, Marie, and Maria Patja. "Cross-language information retrieval : sökfrågestruktur & sökfrågeexpansion." Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-18892.

Full text
Abstract:
This Master’s thesis examines different retrieval strategies used in cross-language information retrieval (CLIR). The aim was to investigate whether there were any differences in retrieval effectiveness between baseline queries and translated queries, how retrieval effectiveness was affected by query structuring, and whether the results differed between languages. The languages used in this study were Swedish, English and Finnish. 30 topics from the TrecUta collection were translated into Swedish and Finnish. Baseline queries in Swedish and Finnish were constructed and translated into English using a dictionary, thereby simulating automatic translation. The queries were expanded by adding all translations from the main entries. Two kinds of queries, structured and unstructured, were designed. The queries were fed into the InQuery IR system, which presented a list of retrieved documents with the relevant ones marked. Query performance was analysed with the Query Performance Analyser (QPA). Average precision at seen relevant documents at DCV 10, average precision at DCV 10, and precision and recall at DCV 200 were used to measure retrieval effectiveness. Despite the morphological differences between Swedish and Finnish, no or only very small differences in retrieval performance were found, except when average precision at DCV 10 was used. The baseline queries performed best, and in both Swedish and Finnish the structured queries performed better than the unstructured queries. The results are consistent with previous research.
Thesis level: D
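The two query types compared in the abstract above, unstructured and structured dictionary translations, can be sketched roughly as follows. The #syn/#sum operators mimic InQuery-style syntax, and the Swedish terms and dictionary entries are invented for illustration, not taken from the thesis:

```python
# Hypothetical sketch: an unstructured query concatenates every dictionary
# translation, while a structured (Pirkola-style) query groups the
# alternative translations of each source word under a synonym operator,
# so alternatives of one word share one weight instead of piling up.

def unstructured_query(source_terms, dictionary):
    """Flat bag of words: all translations of all source terms."""
    return " ".join(t for term in source_terms
                      for t in dictionary.get(term, [term]))

def structured_query(source_terms, dictionary):
    """One #syn group per source term, wrapped in a #sum query."""
    groups = []
    for term in source_terms:
        translations = dictionary.get(term, [term])
        if len(translations) == 1:
            groups.append(translations[0])
        else:
            groups.append("#syn(" + " ".join(translations) + ")")
    return "#sum(" + " ".join(groups) + ")"

# Invented toy dictionary (Swedish -> English):
demo_dict = {"bil": ["car", "automobile"], "olycka": ["accident"]}
print(unstructured_query(["bil", "olycka"], demo_dict))  # car automobile accident
print(structured_query(["bil", "olycka"], demo_dict))    # #sum(#syn(car automobile) accident)
```

The grouping is what the study calls query structuring; the flat form lets ambiguous words with many translations dominate the ranking.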
APA, Harvard, Vancouver, ISO, and other styles
18

Ankaräng, Fredrik. "Generative Adversarial Networks for Cross-Lingual Voice Conversion." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299560.

Full text
Abstract:
Speech synthesis is a technology that increasingly influences our daily lives, in the form of smart assistants, advanced translation systems and similar applications. This thesis explores the phenomenon of making one’s voice sound like the voice of someone else. This topic is called voice conversion, and it must be done without altering the linguistic content of the speech. More specifically, a Cycle-Consistent Adversarial Network that has proven to work well in a monolingual setting is evaluated in a multilingual environment. The model is trained to convert voices between native speakers from the Nordic countries. In the experiments, no parallel, transcribed or aligned speech data is used, forcing the model to focus on the raw audio signal. The goal of the thesis is to evaluate whether performance degrades in a multilingual environment compared to monolingual voice conversion, and to measure the size of any performance drop. Performance is measured in terms of naturalness and speaker similarity between the generated speech and the target voice. For evaluation, listening tests are conducted, as well as objective comparisons of the synthesized speech. The results show that voice conversion between a Swedish and a Norwegian speaker is possible, and that it can be performed without performance degradation compared to Swedish-to-Swedish conversion. Conversion between Finnish and Swedish speakers, as well as between Danish and Swedish speakers, shows a performance drop for the generated speech. However, despite this decrease, the model produces fluent and clearly articulated converted speech in all experiments. These results are noteworthy, especially since the network is trained on less than 15 minutes of non-parallel speaker data per speaker.
This thesis opens up for further areas of research, for instance investigating more languages, more recent Generative Adversarial Network architectures and devoting more resources to tweaking the hyperparameters to further optimize the model for multilingual voice conversion.
APA, Harvard, Vancouver, ISO, and other styles
19

Adriani, Mirna. "A query ambiguity model for cross-language information retrieval." Thesis, University of Glasgow, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Loza, Christian. "Cross Language Information Retrieval for Languages with Scarce Resources." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12157/.

Full text
Abstract:
Our generation has experienced one of the most dramatic changes in how society communicates. Today, we have online information on almost any imaginable topic. However, most of this information is available in only a few dozen languages. In this thesis, I explore the use of parallel texts to enable cross-language information retrieval (CLIR) for languages with scarce resources. To build the parallel text I use the Bible. I evaluate different variables and their impact on the resulting CLIR system, specifically: (1) the CLIR results when using different amounts of parallel text; (2) the role of paraphrasing on the quality of the CLIR output; (3) the impact on accuracy when translating the query versus translating the collection of documents; and finally (4) how the results are affected by the use of different dialects. The results show that all these variables have a direct impact on the quality of the CLIR system.
APA, Harvard, Vancouver, ISO, and other styles
21

Loza, Christian E. Mihalcea Rada F. "Cross language information retrieval for languages with scarce resources." [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/ark:/67531/metadc12157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Lu, Chengye. "Peer to peer English/Chinese cross-language information retrieval." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/26444/1/Chengye_Lu_Thesis.pdf.

Full text
Abstract:
Peer-to-peer systems have been widely used on the internet. However, most peer-to-peer information systems still lack some important features, for example cross-language IR (information retrieval) and collection selection/fusion. Cross-language IR is a state-of-the-art research area in the IR community, yet it has not been used in any real-world IR system. Cross-language IR makes it possible to issue a query in one language and receive documents in other languages. In a typical peer-to-peer environment, users come from multiple countries, so their collections are in multiple languages. Cross-language IR can help users find documents more easily; for example, many Chinese researchers search for research papers in both Chinese and English, and with cross-language IR they can issue one query in Chinese and get documents in both languages. The out-of-vocabulary (OOV) problem is one of the key research areas in cross-language information retrieval. In recent years, web mining has been shown to be an effective approach to solving this problem. However, how to extract multiword lexical units (MLUs) from web content and how to select the correct translations from the extracted candidate MLUs are still two difficult problems in web-mining-based automated translation approaches. Discovering resource descriptions and merging results obtained from remote search engines are two key issues in distributed information retrieval. In uncooperative environments, query-based sampling and normalised-score-based merging strategies are well-known approaches to these problems. However, such approaches consider only the content of the remote database, not the retrieval performance of the remote search engine. This thesis presents research on building a peer-to-peer IR system with cross-language IR and an advanced collection-profiling technique for result fusion.
In particular, this thesis first presents a new Chinese term measurement and a new Chinese MLU extraction process that work well on small corpora, together with an approach for selecting MLUs more accurately. It then proposes a collection-profiling strategy that can discover not only collection content but also the retrieval performance of the remote search engine. Based on collection profiling, a web-based query classification method and two collection fusion approaches are developed and presented. Our experiments show that the proposed strategies are effective for merging results in uncooperative peer-to-peer environments. Here, an uncooperative environment is one in which each peer is autonomous: peers are willing to share documents, but they do not share collection statistics. This is a typical peer-to-peer IR environment. Finally, all of these approaches are combined to build a secure peer-to-peer multilingual IR system that cooperates through X.509 and an email system.
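As a rough illustration of the normalised-score merging mentioned in the abstract above (the thesis develops more advanced fusion approaches; the engine names, documents and scores here are invented):

```python
# Toy sketch of normalised-score merging for distributed IR: each remote
# engine returns (doc_id, score) pairs on its own incomparable scale, so
# scores are min-max normalised per engine before merging into one ranking.

def merge_results(result_lists):
    """Min-max normalise each engine's scores, then merge all
    (doc, score) pairs into a single ranking, best first."""
    merged = []
    for results in result_lists:
        scores = [s for _, s in results]
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0  # avoid division by zero
        merged.extend((doc, (s - lo) / span) for doc, s in results)
    return sorted(merged, key=lambda pair: pair[1], reverse=True)

# Invented result lists from two remote engines with different score scales:
engine_a = [("d1", 12.0), ("d2", 8.0), ("d3", 4.0)]
engine_b = [("d4", 0.9), ("d5", 0.5)]
print(merge_results([engine_a, engine_b]))
```

Note that this baseline treats all engines as equally good; the collection-profiling idea in the abstract goes further by also estimating each remote engine's retrieval performance.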
APA, Harvard, Vancouver, ISO, and other styles
23

Lu, Chengye. "Peer to peer English/Chinese cross-language information retrieval." Queensland University of Technology, 2008. http://eprints.qut.edu.au/26444/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Raithel, Lisa. "Cross-lingual Information Extraction for the Assessment and Prevention of Adverse Drug Reactions." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG011.

Full text
Abstract:
The work described in this thesis deals with the cross- and multi-lingual detection and extraction of adverse drug reactions (ADRs) in biomedical texts written by laypeople. This includes the design and creation of a multi-lingual corpus, exploring ways to collect data without harming users' privacy, and investigating whether cross-lingual data can mitigate class imbalance in document classification. It further addresses the question of whether zero-shot and cross-lingual learning can succeed in medical entity detection across languages. I describe the creation of a new tri-lingual corpus (German, French, Japanese) focusing on German and French, including the development of annotation guidelines applicable to any language and oriented towards user-generated texts. I further describe the annotation process and give an overview of the resulting dataset. The data is annotated on four levels: the document level, describing whether a text contains ADRs; the entity level, capturing relevant expressions; the attribute level, further specifying these expressions; and the relation level, capturing how the aforementioned entities interact. I then discuss the topic of user privacy in data about health-related issues and the question of how to collect such data for research purposes without harming a person's privacy. I provide a prototype study of how users react when they are directly asked about their experiences with ADRs. The study reveals that most people do not mind describing their experiences if asked, but that data collection might suffer from too many questions in the questionnaire. Next, I analyze the results of a potential second way of collecting social media data: the synthetic generation of pseudo-tweets based on real Twitter messages.
In the analysis, I focus on the challenges this approach entails and find, despite some preliminary cleaning, that there are still problems to be found in the translations, both with respect to the meaning of the text and the annotated labels. I, therefore, give anecdotal examples of what can go wrong during automatic translation, summarize the lessons learned, and present potential steps for improvements. Subsequently, I present experimental results for cross-lingual document classification with respect to ADRs in English and German. For this, I fine-tuned classification models on different dataset configurations first on English and then on German documents, complicated by the strong label imbalance of either language's dataset. I find that incorporating English training data helps in the classification of relevant documents in German, but that it is not enough to mitigate the natural imbalance of document labels efficiently. Nevertheless, the developed models seem promising and might be particularly useful for collecting more texts describing experiences about side effects to extend the current corpus and improve the detection of relevant documents for other languages. Next, I describe my participation in the n2c2 2022 shared task of medication detection which is then extended from English to German, French and Spanish using datasets from different sub-domains based on different annotation guidelines. I show that the multi- and cross-lingual transfer works well but also strongly depends on the annotation types and definitions. After that, I re-use the discussed models to show some preliminary results on the presented corpus, first only on medication detection and then across all the annotated entity types. I find that medication detection shows promising results, especially considering that the models were fine-tuned on data from another sub-domain and applied in a zero-shot fashion to the new data. 
Regarding the detection of other medical expressions, I find that the performance of the models strongly depends on the entity type and propose ways to handle this. Lastly, the presented work is summarized and future steps are discussed
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Ying. "Improved Cross-language Information Retrieval via Disambiguation and Vocabulary Discovery." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090224.114940.

Full text
Abstract:
Cross-lingual information retrieval (CLIR) allows people to find documents irrespective of the language used in the query or the documents. This thesis is concerned with developing techniques to improve the effectiveness of Chinese-English CLIR. In Chinese-English CLIR, the accuracy of dictionary-based query translation is limited by two major factors: translation ambiguity and the presence of out-of-vocabulary (OOV) terms. We explore alternative methods for translation disambiguation, and demonstrate new techniques based on a Markov model and on using web documents as a corpus to provide context for disambiguation. This simple disambiguation technique has proved extremely robust and successful. Queries that seek topical information typically contain OOV terms that may not be found in a translation dictionary, leading to inappropriate translations and consequently poor retrieval performance. Our novel OOV term translation method is based on the Chinese authorial practice of including unfamiliar English terms in both languages. It automatically extracts correct translations from the web and can be applied to both Chinese-English and English-Chinese CLIR. Our OOV translation technique does not rely on prior segmentation and is thus free from segmentation error. It leads to a significant improvement in CLIR effectiveness and can also be used to improve Chinese segmentation accuracy. Good-quality translation resources, especially bilingual dictionaries, are valuable for effective CLIR. We developed a system to facilitate the construction of a large-scale translation lexicon of Chinese-English OOV terms using the web. Experimental results show that this method is reliable and of practical use in query translation. In addition, parallel corpora provide a rich source of translation information.
We have also developed a system that uses multiple features to identify parallel texts via a k-nearest-neighbour classifier, automatically collecting high-quality parallel Chinese-English corpora from the web. These two automatic web mining systems are highly reliable and easy to deploy. In this research, we provide new ways to acquire linguistic resources from multilingual content on the web. These resources not only improve the efficiency and effectiveness of Chinese-English cross-language web retrieval, but also have wider applications beyond CLIR.
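The corpus-based translation disambiguation idea described above can be caricatured as follows: choose, for each query term, the combination of candidate translations whose words co-occur most often in a reference corpus. The thesis uses a Markov model and web documents; this exhaustive toy version with invented co-occurrence counts is only a sketch:

```python
# Toy sketch of co-occurrence-based translation disambiguation.
# Each query term has several candidate translations; we pick the
# combination maximising the sum of pairwise co-occurrence counts.
from itertools import product

def disambiguate(candidates, cooccur):
    """candidates: list of translation lists, one per query term.
    cooccur: frozenset({a, b}) -> co-occurrence count in a corpus."""
    best, best_score = None, -1
    for combo in product(*candidates):
        score = sum(cooccur.get(frozenset((a, b)), 0)
                    for i, a in enumerate(combo)
                    for b in combo[i + 1:])
        if score > best_score:
            best, best_score = combo, score
    return list(best)

# Invented counts: "bank" co-occurs strongly with "interest".
counts = {frozenset(("bank", "river")): 2,
          frozenset(("bank", "interest")): 9,
          frozenset(("shore", "interest")): 1}
print(disambiguate([["bank", "shore"], ["interest"]], counts))  # ['bank', 'interest']
```

Enumerating all combinations is exponential in query length; the appeal of Markov-model approaches such as the one in the thesis is precisely that they avoid this exhaustive search.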
APA, Harvard, Vancouver, ISO, and other styles
26

Sagen, Markus. "Large-Context Question Answering with Cross-Lingual Transfer." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440704.

Full text
Abstract:
Models based on the transformer architecture have become among the most prominent for solving a multitude of natural language processing (NLP) tasks since their introduction in 2017. However, much research related to the transformer model has focused primarily on achieving high performance, and many problems remain unsolved. Two of the most prominent at present are the lack of high-performing non-English pre-trained models, and the limited number of words most trained models can incorporate in their context. Solving these problems would make NLP models more suitable for real-world applications, improving information retrieval, reading comprehension, and more. Previous research has focused on incorporating long context for English language models. This thesis investigates cross-lingual transferability between languages when training for long context only in English. Training long-context models only in English could make long context in low-resource languages, such as Swedish, more accessible, since such data is hard to find in most languages and training for each language is costly. This could become an efficient method for creating long-context models in other languages without needing such data in all languages or pre-training from scratch. We extend the models' context using the training scheme of the Longformer architecture and fine-tune on a question-answering task in several languages. Our evaluation could neither satisfactorily confirm nor deny whether transferring long-term context is possible for low-resource languages. We believe that using datasets that require long-context reasoning, such as a multilingual TriviaQA dataset, could demonstrate the validity of our hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
27

Orengo, Viviane Moreira. "Assessing relevance using automatically translated documents for cross-language information retrieval." Thesis, Middlesex University, 2004. http://eprints.mdx.ac.uk/13606/.

Full text
Abstract:
This thesis focuses on the Relevance Feedback (RF) process, and the scenario considered is that of a Portuguese-English Cross-Language Information Retrieval (CLIR) system. CLIR deals with the retrieval of documents in one natural language in response to a query expressed in another language. RF is an automatic process for query reformulation. The idea behind it is that users are unlikely to produce perfect queries, especially if given just one attempt. The process aims at improving the query specification, which will lead to more relevant documents being retrieved. The method consists of asking the user to analyse an initial sample of documents retrieved in response to a query and judge them for relevance. In that context, two main questions were posed. The first one relates to the user's ability in assessing the relevance of texts in a foreign language, texts hand-translated into their language, and texts automatically translated into their language. The second question concerns the relationship between the accuracy of the participants' judgements and the improvement achieved through the RF process. In order to answer those questions, this work performed an experiment in which Portuguese speakers were asked to judge the relevance of English documents, documents hand-translated into Portuguese, and documents automatically translated into Portuguese. The results show that machine translation is as effective as hand translation in aiding users to assess relevance. In addition, the impact of misjudged documents on the performance of RF is overall just moderate, and varies greatly for different query topics. This work advances the existing research on RF by considering a CLIR scenario and carrying out user experiments, which analyse aspects of RF and CLIR that remained unexplored until now.
The contributions of this work also include: the investigation of CLIR using a new language pair; the design and implementation of a stemming algorithm for Portuguese; and the carrying out of several experiments using Latent Semantic Indexing which contribute data points to CLIR theory.
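The relevance feedback reformulation studied above is classically implemented as a Rocchio update over term-weight vectors. The sketch below assumes that vector-space formulation; the weights alpha, beta, gamma and the toy query/documents are illustrative, not taken from the thesis:

```python
from collections import defaultdict

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector toward the centroid of
    judged-relevant documents and away from judged-nonrelevant ones."""
    new_q = defaultdict(float)
    for term, w in query.items():
        new_q[term] += alpha * w
    for doc in relevant:
        for term, w in doc.items():
            new_q[term] += beta * w / len(relevant)
    for doc in nonrelevant:
        for term, w in doc.items():
            new_q[term] -= gamma * w / len(nonrelevant)
    # Negative weights are conventionally clipped to zero.
    return {t: w for t, w in new_q.items() if w > 0}

q = {"dengue": 1.0}
rel = [{"dengue": 0.8, "fever": 0.6}]
nonrel = [{"dengue": 0.1, "football": 0.9}]
print(rocchio(q, rel, nonrel))  # "fever" is added, "football" suppressed
```

A misjudged document simply pulls the query centroid the wrong way, which is why the thesis's finding that such errors have only moderate impact is reassuring for CLIR use.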
APA, Harvard, Vancouver, ISO, and other styles
28

Wigder, Chaya. "Word embeddings for monolingual and cross-language domain-specific information retrieval." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233028.

Full text
Abstract:
Various studies have shown the usefulness of word embedding models for a wide variety of natural language processing tasks. This thesis examines how word embeddings can be incorporated into domain-specific search engines for both monolingual and cross-language search. This is done by testing various embedding model hyperparameters, as well as methods for weighting the relative importance of words to a document or query. In addition, methods for generating domain-specific bilingual embeddings are examined and tested. The system was compared to a baseline that used cosine similarity without word embeddings, and for both the monolingual and bilingual search engines the use of monolingual embedding models improved performance above the baseline. However, bilingual embeddings, especially for domain-specific terms, tended to be of too poor quality to be used directly in the search engines.
Flera studier har visat att ordinbäddningsmodeller är användningsbara för många olika språkteknologiuppgifter. Denna avhandling undersöker hur ordinbäddningsmodeller kan användas i sökmotorer för både enspråkig och tvärspråklig domänspecifik sökning. Experiment gjordes för att optimera hyperparametrarna till ordinbäddningsmodellerna och för att hitta det bästa sättet att vikta ord efter hur viktiga de är i dokumentet eller sökfrågan. Dessutom undersöktes metoder för att skapa domänspecifika tvåspråkiga inbäddningar. Systemet jämfördes med en baslinje utan inbäddningar baserad på cosinuslikhet, och för både enspråkiga och tvärspråkliga sökningar var systemet som använde enspråkiga inbäddningar bättre än baslinjen. Däremot var de tvåspråkiga inbäddningarna, särskilt för domänspecifika ord, av låg kvalitet och gav för dåliga resultat för direkt användning inom sökmotorer.
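A minimal sketch of the embedding-based retrieval the thesis evaluates: represent a query and a document as the mean of their word vectors and rank by cosine similarity. The two-dimensional toy embeddings below are assumptions for illustration; real models would be trained on domain text:

```python
import math

def avg_vector(tokens, embeddings):
    """Mean of the word vectors for tokens that have an embedding."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Toy 2-d embeddings standing in for trained domain-specific vectors.
emb = {"heart": [1.0, 0.0], "cardiac": [0.9, 0.1], "loan": [0.0, 1.0]}
query = avg_vector(["cardiac"], emb)
doc_a = avg_vector(["heart"], emb)
doc_b = avg_vector(["loan"], emb)
print(cosine(query, doc_a) > cosine(query, doc_b))  # True
```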
APA, Harvard, Vancouver, ISO, and other styles
29

Alazemi, Awatef M. "A new methodology for designing a multi-lingual bio-ontology : an application to Arabic-English bio-information retrieval." Thesis, University of Salford, 2010. http://usir.salford.ac.uk/26507/.

Full text
Abstract:
Ontologies are becoming increasingly important in the biomedical domain since they enable knowledge sharing in a formal, homogeneous and unambiguous way. Furthermore, biological discoveries are being reported at an extremely rapid rate. This new information is found in diverse resources that encompass a broad array of journal articles and public databases associated with different sub-disciplines within biology and medicine, in different languages. However, the lack of a multilingual biological ontology dedicated to the digestive system is recognised as a critical knowledge gap. Consequently, this research argues for the need to support bilingual search over biological ontologies by representing concepts and inter-concept relationships. An English-Arabic human digestive system ontology (DISUS) and its accompanying methodology were created to demonstrate this notion. The approach adopted involved creating a new, integrated, re-engineered methodology for a first-attempt multilingual (English-Arabic) bio-ontology for the purpose of information retrieval and knowledge discovery. The DISUS ontology represents digestive system knowledge and eases knowledge sharing among end users in the biological and medical context. The integrated generic methodology consists of four phases: a planning phase, which establishes the scope and purpose of the domain and organises knowledge acquisition; a conceptualisation phase, which turns unstructured knowledge into structured knowledge; an ontology construction phase, which integrates and merges the core and sub-ontologies; and an evaluation phase, executed by domain experts, which finalises the whole work.
Evaluation of the multilingual DISUS was carried out through qualitative and quantitative approaches with biological and medical experts. Validation, performed using an information retrieval technique, revealed the effectiveness and robustness of the DISUS ontology as a means of concept mapping between Arabic and English ontology terms for bilingual searches.
APA, Harvard, Vancouver, ISO, and other styles
30

Suyoto, Iman S. H., and ishs@ishs net. "Cross-Domain Content-Based Retrieval of Audio Music through Transcription." RMIT University. Computer Science and Information Technology, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090527.092841.

Full text
Abstract:
Research in the field of music information retrieval (MIR) is concerned with methods to effectively retrieve a piece of music based on a user's query. An important goal in MIR research is the ability to successfully retrieve music stored as recorded audio using note-based queries. In this work, we consider the searching of musical audio using symbolic queries. We first examined the effectiveness of using a relative pitch approach to represent queries and pieces. Our experimental results revealed that this technique, while effective, is optimal when the whole tune is used as a query. We then suggested an algorithm involving the use of pitch classes in conjunction with the longest common subsequence algorithm between a query and target, also using the whole tune as a query. We also proposed an algorithm that works effectively when only a small part of a tune is used as a query. The algorithm makes use of a sliding window in addition to pitch classes and the longest common subsequence algorithm between a query and target. We examined the algorithm using queries based on the beginning, middle, and ending parts of pieces. We performed experiments on an audio collection and manually-constructed symbolic queries. Our experimental evaluation revealed that our techniques are highly effective, with most queries used in our experiments being able to retrieve a correct answer in the first rank position. In addition, we examined the effectiveness of duration-based features for improving retrieval effectiveness over the use of pitch only. We investigated note durations and inter-onset intervals. For this purpose, we used solely symbolic music so that we could focus on the core of the problem. A relative pitch approach alongside a relative duration representation were used in our experiments. Our experimental results showed that durations fail to significantly improve retrieval effectiveness, whereas inter-onset intervals significantly improve retrieval effectiveness.
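The pitch-class matching described above combines a longest-common-subsequence score with a sliding window, so a short query can match anywhere in a tune. The sketch below is a plain dynamic-programming rendering of that idea; the pitch-class sequences are invented examples, not data from the thesis:

```python
def lcs_length(a, b):
    """Dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def best_window_score(query, target):
    """Slide a query-sized window over the target and keep the best LCS
    match, so a short query can hit the beginning, middle, or end."""
    w = len(query)
    return max(lcs_length(query, target[i:i + w]) for i in range(len(target) - w + 1))

# Pitch classes 0-11 (C=0); the sequences are illustrative only.
target = [0, 4, 7, 0, 4, 7, 9, 5, 2]
query = [4, 7, 9]
print(best_window_score(query, target))  # 3 (a perfect match exists)
```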
APA, Harvard, Vancouver, ISO, and other styles
31

Hieber, Felix [Verfasser], and Stefan [Akademischer Betreuer] Riezler. "Translation-based Ranking in Cross-Language Information Retrieval / Felix Hieber ; Betreuer: Stefan Riezler." Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180396189/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Cederlund, Petter. "Cross-Language Information Retrieval : En granskning av tre översättningsmetoder använda i experimentell CLIR-forskning." Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20775.

Full text
Abstract:
The purpose of this paper is to examine the three main translation methods used in experimental Cross-Language Information Retrieval (CLIR) research today, namely translation using machine-readable dictionaries, machine translation systems, or corpus-based methods. Working notes from research groups participating in the Text Retrieval Conference (TREC) and the Cross-Language Evaluation Forum (CLEF) between 1997 and 2000 have provided the main source material used to discuss the possible advantages and drawbacks that each method presents. It appears that all three approaches have their pros and cons, and because the different researchers tend to favour their own chosen method, it is not possible to establish a "winner approach" to CLIR translation by studying the working notes alone. One should remember, however, that the present interest in cross-language applications of information retrieval arose as late as the 1990s, and thus the research is yet in its early stages. The methods discussed in this paper may well be improved, or perhaps replaced by others in the future.
Essay level: D
APA, Harvard, Vancouver, ISO, and other styles
33

Boström, Anna. "Cross-Language Information Retrieval : En studie av lingvistiska problem och utvecklade översättningsmetoder för lösningar angående informationsåtervinning över språkliga gränser." Thesis, Umeå University, Sociology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1017.

Full text
Abstract:

Syftet med denna uppsats är att undersöka problem samt lösningar i relation till informationsåtervinning över språkliga gränser. Metoden som har använts i uppsatsen är studier av forskningsmaterial inom lingvistik samt främst den relativt nya forskningsdisciplinen Cross-Language Information Retrieval (CLIR). I uppsatsen hävdas att världens alla olikartade språk i dagsläget måste betraktas som ett angeläget problem för informationsvetenskapen, ty språkliga skillnader utgör ännu ett stort hinder för den internationella informationsåtervinning som tekniska framsteg, uppkomsten av Internet, digitala bibliotek, globalisering, samt stora politiska förändringar i ett flertal länder runtom i världen under de senaste åren tekniskt och teoretiskt sett har möjliggjort. I uppsatsens första del redogörs för några universellt erkända lingvistiska skillnader mellan olika språk – i detta fall främst med exempel från europeiska språk – och vanliga problem som dessa kan bidra till angående översättningar från ett språk till ett annat. I uppsatsen hävdas att dessa skillnader och problem även måste anses som relevanta när det gäller informationsåtervinning över språkliga gränser. Uppsatsen fortskrider med att ta upp ämnet Cross-Language Information Retrieval (CLIR), inom vilken lösningar på flerspråkighet och språkskillnader inom informationsåtervinning försöker utvecklas och förbättras. Målet med CLIR är att en informationssökare så småningom skall kunna söka information på sitt modersmål men ändå hitta relevant information på flera andra språk. Ett ytterligare mål är att den återfunna informationen i sin helhet även skall kunna översättas till ett för sökaren önskat språk. Fyra olika översättningsmetoder som i dagsläget finns utvecklade inom CLIR för att automatiskt kunna översätta sökfrågor, ämnesord, eller, i vissa fall, hela dokument åt en informationssökare med lite eller ingen alls kunskap om det språk som han eller hon söker information på behandlas därefter. 
De fyra metoderna – identifierade som maskinöversättning, tesaurus- och ordboksöversättning, korpusbaserad översättning, samt ingen översättning – diskuteras även i relation till de lingvistiska problem och skillnader som har tagits upp i uppsatsens första del. Resultatet visar att språk är någonting mycket komplext och att de olika metoderna som hittills finns utvecklade ofta kan lösa något eller några av de uppmärksammade lingvistiska översättningssvårigheterna. Dock finns det inte någon utvecklad metod som i dagsläget kan lösa samtliga problem. Uppsatsen uppmärksammar emellertid även att CLIR-forskarna i hög grad är medvetna om de nuvarande metodernas uppenbara begränsningar och att man prövar att lösa detta genom att försöka kombinera flera olika översättningsmetoder i ett CLIR-system. Avslutningsvis redogörs även för CLIR-forskarnas förväntningar och förhoppningar inför framtiden.


This essay deals with information retrieval across languages by examining different types of literature in the research areas of linguistics and multilingual information retrieval. The essay argues that the many different languages that co-exist around the globe must be recognised as an essential obstacle for information science. The language barrier today remains a major impediment for the expansion of international information retrieval otherwise made technically and theoretically possible over the last few years by new technical developments, the Internet, digital libraries, globalisation, and moreover many political changes in several countries around the world. The first part of the essay explores linguistic differences and difficulties related to general translations from one language to another, using examples from mainly European languages. It is suggested that these problems and differences also must be acknowledged and regarded as highly important when it comes to information retrieval across languages. The essay continues by reporting on Cross-Language Information Retrieval (CLIR), a relatively new research area where methods for multilingual information retrieval are studied and developed. The object of CLIR is that people in the future shall be able to search for information in their native tongue, but still find relevant information in more than one language. Another goal for the future is the possibility to translate complete documents into a person’s language of preference. The essay reports on four different CLIR-methods currently established for automatically translating queries, subject headings, or, in some cases, complete documents, and thus aid people with little or no knowledge of the language in which he or she is looking for information. 
The four methods – identified as machine translation, translations using a multilingual thesaurus or a manually produced machine readable dictionary, corpus-based translation, and no translation – are discussed in relation to the linguistic translation difficulties mentioned in the paper’s initial part. The conclusion drawn is that language is exceedingly complex and that while the different CLIR-methods currently developed often can solve one or two of the acknowledged linguistic difficulties, none is able to overcome all. The essay also show, however, that CLIR-scientists are highly aware of the limitations of the different translation methods and that many are trying to get to terms with this by incorporating several sources of translation in one single CLIR-system. The essay finally concludes by looking at CLIR-scientists’ expectations and hopes for the future.

APA, Harvard, Vancouver, ISO, and other styles
34

Wong, Kim-Yung Eddie. "Automatic spoken language identification utilizing acoustic and phonetic speech information." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/37259/1/Kim-Yung_Wong_Thesis.pdf.

Full text
Abstract:
Automatic spoken Language Identification (LID) is the process of identifying the language spoken within an utterance. The challenge that this task presents is that no prior information is available indicating the content of the utterance or the identity of the speaker. The trend of globalization and the pervasive popularity of the Internet will amplify the need for the capabilities spoken language identification systems provide. A prominent application arises in call centers dealing with speakers speaking different languages. Another important application is to index or search huge speech data archives and corpora that contain multiple languages. The aim of this research is to develop techniques targeted at producing a fast and more accurate automatic spoken LID system compared to the previous National Institute of Standards and Technology (NIST) Language Recognition Evaluation. Acoustic and phonetic speech information are targeted as the most suitable features for representing the characteristics of a language. To model the acoustic speech features a Gaussian Mixture Model based approach is employed. Phonetic speech information is extracted using existing speech recognition technology. Various techniques to improve LID accuracy are also studied. One approach examined is the employment of Vocal Tract Length Normalization to reduce the speech variation caused by different speakers. A linear data fusion technique is adopted to combine the various aspects of information extracted from speech. As a result of this research, a LID system was implemented and presented for evaluation in the 2003 Language Recognition Evaluation conducted by the NIST.
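The Gaussian Mixture Model scoring mentioned in the abstract can be illustrated with a minimal sketch: one GMM per language scores the utterance's feature frames, and the language with the highest total log-likelihood wins. The one-dimensional "frames" and the (weight, mean, variance) triples below are hypothetical toy values, not trained parameters:

```python
import math

def log_gauss(x, mu, var):
    """Log density of a one-dimensional Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def gmm_loglik(frames, components):
    """Total log-likelihood of 1-d feature frames under a mixture of
    Gaussians given as (weight, mean, variance) triples."""
    total = 0.0
    for x in frames:
        per_comp = [math.log(w) + log_gauss(x, mu, var) for w, mu, var in components]
        m = max(per_comp)
        total += m + math.log(sum(math.exp(p - m) for p in per_comp))  # log-sum-exp
    return total

# Hypothetical per-language models; real ones are trained on cepstral features.
models = {
    "english": [(0.5, -1.0, 1.0), (0.5, 1.0, 1.0)],
    "mandarin": [(0.5, -3.0, 1.0), (0.5, 3.0, 1.0)],
}
frames = [-0.9, 1.1, 0.8]
print(max(models, key=lambda lang: gmm_loglik(frames, models[lang])))  # english
```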
APA, Harvard, Vancouver, ISO, and other styles
35

Geraldo, André Pinto. "Aplicando algoritmos de mineração de regras de associação para recuperação de informações multilíngues." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/26506.

Full text
Abstract:
Este trabalho propõe a utilização de algoritmos de mineração de regras de associação para a Recuperação de Informações Multilíngues. Esses algoritmos têm sido amplamente utilizados para analisar transações de registro de vendas. A ideia é mapear o problema de encontrar associações entre itens vendidos para o problema de encontrar termos equivalentes entre idiomas diferentes em um corpus paralelo. A proposta foi validada por meio de experimentos com diferentes idiomas, conjuntos de consultas e corpora. Os resultados mostram que a eficácia da abordagem proposta é comparável ao estado da arte, ao resultado monolíngue e à tradução automática de consultas, embora este utilize técnicas mais complexas de processamento de linguagem natural. Foi criado um protótipo que faz consultas à Web utilizando o método proposto. O sistema recebe palavras-chave em português, as traduz para o inglês e submete a consulta a diversos sites de busca.
This work proposes the use of algorithms for mining association rules as an approach for Cross-Language Information Retrieval. These algorithms have been widely used to analyze market basket data. The idea is to map the problem of finding associations between sales items to the problem of finding term translations over a parallel corpus. The proposal was validated by means of experiments using different languages, queries and corpora. The results show that the performance of our proposed approach is comparable to the performance of the monolingual baseline and to query translation via machine translation, even though these systems employ more complex Natural Language Processing techniques. A prototype for cross-language web querying was implemented to test the proposed method. The system accepts keywords in Portuguese, translates them into English and submits the query to several web-sites that provide search functionalities.
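The mapping from market-basket mining to term translation can be sketched directly: each aligned sentence pair is a "transaction", and candidate translations of a source term are ranked by the confidence of the rule {source term} → {target term}. The toy Portuguese-English corpus and the min_support threshold below are illustrative assumptions:

```python
def mine_translations(aligned_pairs, source_term, min_support=2):
    """Rank target-language terms by the confidence of the association rule
    {source_term} -> {target_term} over aligned sentence pairs."""
    source_count = 0
    joint = {}
    for src, tgt in aligned_pairs:
        if source_term in src:
            source_count += 1
            for t in set(tgt):
                joint[t] = joint.get(t, 0) + 1
    # confidence = support(source & target) / support(source)
    rules = {t: c / source_count for t, c in joint.items() if c >= min_support}
    return sorted(rules.items(), key=lambda kv: kv[1], reverse=True)

corpus = [
    (["casa", "branca"], ["white", "house"]),
    (["minha", "casa"], ["my", "house"]),
    (["casa", "nova"], ["new", "house"]),
    (["carro", "novo"], ["new", "car"]),
]
print(mine_translations(corpus, "casa")[0])  # ('house', 1.0)
```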
APA, Harvard, Vancouver, ISO, and other styles
36

Bergstedt, Kenneth. "Lost in translation? En empirisk undersökning av användningen av tesaurer vid queryexpansion inom Cross Language Information Retrieval." Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16903.

Full text
Abstract:
The purpose of this thesis is to examine the performance of queries that are expanded before translation, in comparison with queries that are only translated using a bilingual dictionary, and also to see whether the number of terms used to expand the queries was of any importance, i.e. whether many terms from a thesaurus helped or harmed a query. To answer these questions I used two online thesauri, Roget's Thesaurus and Merriam-Webster Online, and one printed bilingual dictionary, Norstedts English-Swedish dictionary. Even though the number of examined queries is too small to draw any definite conclusions, the results suggest that expanding using a general thesaurus may have a negative effect on the queries. The reason is that the number of words from the expansion and the translation makes the queries more ambiguous and thereby increases the noise in the search, which leads to the loss of relevant documents.
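The expand-then-translate pipeline under study can be sketched as follows; the thesaurus, bilingual dictionary, and the max_expansions cap are hypothetical toy data, not the actual resources used in the thesis:

```python
def expand_then_translate(query_terms, thesaurus, dictionary, max_expansions=2):
    """Expand each query term with up to max_expansions thesaurus synonyms,
    then translate every resulting term with a bilingual dictionary.
    Unbounded expansion tends to add noise, as the study observed."""
    expanded = []
    for term in query_terms:
        expanded.append(term)
        expanded.extend(thesaurus.get(term, [])[:max_expansions])
    translated = []
    for term in expanded:
        translated.extend(dictionary.get(term, []))
    return translated

thesaurus = {"car": ["automobile", "vehicle", "motorcar"]}
dictionary = {"car": ["bil"], "automobile": ["bil", "automobil"], "vehicle": ["fordon"]}
print(expand_then_translate(["car"], thesaurus, dictionary))
```

Each added synonym multiplies the dictionary senses carried into the target-language query, which is exactly the ambiguity effect the thesis reports.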
Essay level: D
APA, Harvard, Vancouver, ISO, and other styles
37

Richardson, W. Ryan. "Using Concept Maps as a Tool for Cross-Language Relevance Determination." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/28191.

Full text
Abstract:
Concept maps, introduced by Novak, aid learners' understanding. I hypothesize that concept maps also can function as a summary of large documents, e.g., electronic theses and dissertations (ETDs). I have built a system that automatically generates concept maps from English-language ETDs in the computing field. The system also will provide Spanish translations of these concept maps for native Spanish speakers. Using machine translation techniques, my approach leads to concept maps that could allow researchers to discover pertinent dissertations in languages they cannot read, helping them to decide if they want a potentially relevant dissertation translated. I am using a state-of-the-art natural language processing system, called Relex, to extract noun phrases and noun-verb-noun relations from ETDs, and then produce concept maps automatically. I also have incorporated information from the table of contents of ETDs to create novel styles of concept maps. I have conducted five user studies, to evaluate user perceptions about these different map styles. I am using several methods to translate node and link text in concept maps from English to Spanish. Nodes labeled with single words from a given technical area can be translated using wordlists, but phrases in specific technical fields can be difficult to translate. Thus I have amassed a collection of about 580 Spanish-language ETDs from Scirus and two Mexican universities and I am using this corpus to mine phrase translations that I could not find otherwise. The usefulness of the automatically-generated and translated concept maps has been assessed in an experiment at Universidad de las Americas (UDLA) in Puebla, Mexico. This experiment demonstrated that concept maps can augment abstracts (translated using a standard machine translation package) in helping Spanish speaking users find ETDs of interest.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

Qureshi, Karl. "Att maskinöversätta sökfrågor : En studie av Google Translate och Bing Translators förmåga att översätta svenska sammansättningar i ett CLIR-perspektiv." Thesis, Umeå universitet, Sociologiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-131813.

Full text
Abstract:
The aim of this thesis is to examine how well Google Translate and Bing Translator perform when translating search queries, with respect to Swedish compound words, and to try to determine whether there is any relationship between the outcome and the complexity of the compounds. The test environment is the European Parliament's public register of documents. The study is, however, limited to the European Council's documents, which number 1,334 in Swedish and 1,368 in English. The data were analysed partly in terms of precision and recall, and partly through a contrastive analysis, in order to give a more unified picture of the phenomenon under study. The results show that the mean varies between 0.287 and 0.506 for precision and between 0.400 and 0.614 for recall, depending on word type and translation service. Furthermore, the results show that there appears to be no clear relationship between effectiveness and the complexity of the compounds. Instead, the lower values seem to be due to synonymy, often within the compound itself, and to hyponymy. In the latter case this is caused partly by the translation services' inability to produce suitable translations, and partly by the English language's tendency to form compounds with loose noun modifiers.
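The two effectiveness measures reported above (precision and recall/"återvinningsgrad") reduce to simple set arithmetic per query; the document identifiers below are invented for illustration:

```python
def precision_recall(retrieved, relevant):
    """Standard set-based precision and recall for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"], relevant=["d2", "d4", "d7"])
print(p, r)  # 0.5 0.666...
```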
APA, Harvard, Vancouver, ISO, and other styles
39

Franco, Salvador Marc. "A Cross-domain and Cross-language Knowledge-based Representation of Text and its Meaning." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/84285.

Full text
Abstract:
Natural Language Processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human languages. One of its most challenging aspects involves enabling computers to derive meaning from human natural language. To do so, several meaning or context representations have been proposed with competitive performance. However, these representations still have room for improvement when working in a cross-domain or cross-language scenario. In this thesis we study the use of knowledge graphs as a cross-domain and cross-language representation of text and its meaning. A knowledge graph is a graph that expands and relates the original concepts belonging to a set of words. We obtain its characteristics using a wide-coverage multilingual semantic network as knowledge base. This allows a language coverage of hundreds of languages and millions of human-general and -specific concepts. As starting point of our research we employ knowledge graph-based features - along with other traditional ones and meta-learning - for the NLP task of single- and cross-domain polarity classification. The analysis and conclusions of that work provide evidence that knowledge graphs capture meaning in a domain-independent way. The next part of our research takes advantage of the multilingual semantic network and focuses on cross-language Information Retrieval (IR) tasks. First, we propose a fully knowledge graph-based model of similarity analysis for cross-language plagiarism detection. Next, we improve that model to cover out-of-vocabulary words and verbal tenses and apply it to cross-language document retrieval, categorisation, and plagiarism detection. Finally, we study the use of knowledge graphs for the NLP tasks of community question answering, native language identification, and language variety identification.
The contributions of this thesis manifest the potential of knowledge graphs as a cross-domain and cross-language representation of text and its meaning for NLP and IR tasks. These contributions have been published in several international conferences and journals.
El Procesamiento del Lenguaje Natural (PLN) es un campo de la informática, la inteligencia artificial y la lingüística computacional centrado en las interacciones entre las máquinas y el lenguaje de los humanos. Uno de sus mayores desafíos implica capacitar a las máquinas para inferir el significado del lenguaje natural humano. Con este propósito, diversas representaciones del significado y el contexto han sido propuestas obteniendo un rendimiento competitivo. Sin embargo, estas representaciones todavía tienen un margen de mejora en escenarios transdominios y translingües. En esta tesis estudiamos el uso de grafos de conocimiento como una representación transdominio y translingüe del texto y su significado. Un grafo de conocimiento es un grafo que expande y relaciona los conceptos originales pertenecientes a un conjunto de palabras. Sus propiedades se consiguen gracias al uso como base de conocimiento de una red semántica multilingüe de amplia cobertura. Esto permite tener una cobertura de cientos de lenguajes y millones de conceptos generales y específicos del ser humano. Como punto de partida de nuestra investigación empleamos características basadas en grafos de conocimiento - junto con otras tradicionales y meta-aprendizaje - para la tarea de PLN de clasificación de la polaridad mono- y transdominio. El análisis y conclusiones de ese trabajo muestra evidencias de que los grafos de conocimiento capturan el significado de una forma independiente del dominio. La siguiente parte de nuestra investigación aprovecha la capacidad de la red semántica multilingüe y se centra en tareas de Recuperación de Información (RI). Primero proponemos un modelo de análisis de similitud completamente basado en grafos de conocimiento para detección de plagio translingüe. A continuación, mejoramos ese modelo para cubrir palabras fuera de vocabulario y tiempos verbales, y lo aplicamos a las tareas translingües de recuperación de documentos, clasificación, y detección de plagio. 
Por último, estudiamos el uso de grafos de conocimiento para las tareas de PLN de respuesta de preguntas en comunidades, identificación del lenguaje nativo, y identificación de la variedad del lenguaje. Las contribuciones de esta tesis ponen de manifiesto el potencial de los grafos de conocimiento como representación transdominio y translingüe del texto y su significado en tareas de PLN y RI. Estas contribuciones han sido publicadas en diversas revistas y conferencias internacionales.
El Processament del Llenguatge Natural (PLN) és un camp de la informàtica, la intel·ligència artificial i la lingüística computacional centrat en les interaccions entre les màquines i el llenguatge dels humans. Un dels seus majors reptes implica capacitar les màquines per inferir el significat del llenguatge natural humà. Amb aquest propòsit, diverses representacions del significat i el context han estat proposades obtenint un rendiment competitiu. No obstant això, aquestes representacions encara tenen un marge de millora en escenaris trans-dominis i trans-llenguatges. En aquesta tesi estudiem l'ús de grafs de coneixement com una representació trans-domini i trans-llenguatge del text i el seu significat. Un graf de coneixement és un graf que expandeix i relaciona els conceptes originals pertanyents a un conjunt de paraules. Les seves propietats s'aconsegueixen gràcies a l'ús com a base de coneixement d'una xarxa semàntica multilingüe d'àmplia cobertura. Això permet tenir una cobertura de centenars de llenguatges i milions de conceptes generals i específics de l'ésser humà. Com a punt de partida de la nostra investigació emprem característiques basades en grafs de coneixement - juntament amb altres tradicionals i meta-aprenentatge - per a la tasca de PLN de classificació de la polaritat mono- i trans-domini. L'anàlisi i conclusions d'aquest treball mostra evidències que els grafs de coneixement capturen el significat d'una forma independent del domini. La següent part de la nostra investigació aprofita la capacitat de la xarxa semàntica multilingüe i se centra en tasques de recuperació d'informació (RI). Primer proposem un model d'anàlisi de similitud completament basat en grafs de coneixement per a detecció de plagi trans-llenguatge. A continuació, vam millorar aquest model per cobrir paraules fora de vocabulari i temps verbals, i ho apliquem a les tasques trans-llenguatges de recuperació de documents, classificació, i detecció de plagi.
Finalment, estudiem l'ús de grafs de coneixement per a les tasques de PLN de resposta de preguntes en comunitats, identificació del llenguatge natiu, i identificació de la varietat del llenguatge. Les contribucions d'aquesta tesi posen de manifest el potencial dels grafs de coneixement com a representació trans-domini i trans-llenguatge del text i el seu significat en tasques de PLN i RI. Aquestes contribucions han estat publicades en diverses revistes i conferències internacionals.
Franco Salvador, M. (2017). A Cross-domain and Cross-language Knowledge-based Representation of Text and its Meaning [Unpublished doctoral dissertation]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/84285
APA, Harvard, Vancouver, ISO, and other styles
40

Wilhelm, Thomas. "Entwurf und Implementierung eines Frameworks zur Analyse und Evaluation von Verfahren im Information Retrieval." Master's thesis, [S.l. : s.n.], 2008. https://monarch.qucosa.de/id/qucosa%3A18962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Zitzelberger, Andrew J. "HyKSS: Hybrid Keyword and Semantic Search." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2832.

Full text
Abstract:
The rapid production of digital information makes the task of locating relevant information increasingly difficult. Keyword search alleviates this difficulty by retrieving documents containing keywords of interest. However, keyword search suffers from a number of issues such as ambiguity, synonymy, and the inability to handle semantic constraints. Semantic search helps resolve these issues but is limited by the quality of annotations, which are likely to be incomplete or imprecise. Hybrid search, a search technique that combines the merits of both keyword and semantic search, appears to be a promising solution. In this work we introduce HyKSS, a hybrid search system driven by extraction ontologies for both annotation creation and query interpretation. HyKSS is not limited to a single domain, but rather allows queries to cross ontological boundaries. We show that our hybrid search system, which uses a query-driven dynamic ranking mechanism, outperforms keyword and semantic search in isolation, as well as a number of other non-HyKSS hybrid ranking approaches, over data sets of short topical documents. We also find that there is not a statistically significant difference between using multiple ontologies for query generation and simply selecting and using the best matching ontology.
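The abstract does not give the ranking formula itself; the sketch below shows one way keyword and semantic evidence could in principle be combined linearly. The score definitions and the weight `w` are illustrative assumptions, not HyKSS's actual dynamic ranking mechanism.

```python
# Hypothetical sketch of hybrid keyword + semantic scoring.
# The two component scores and the linear weighting are assumptions
# for illustration, not the HyKSS ranking described in the thesis.

def keyword_score(query_terms, doc_terms):
    """Fraction of distinct query keywords found in the document."""
    if not query_terms:
        return 0.0
    return len(set(query_terms) & set(doc_terms)) / len(set(query_terms))

def semantic_score(query_constraints, doc_annotations):
    """Fraction of semantic constraints satisfied by the document's annotations."""
    if not query_constraints:
        return 0.0
    hits = sum(1 for k, v in query_constraints.items() if doc_annotations.get(k) == v)
    return hits / len(query_constraints)

def hybrid_score(query_terms, query_constraints, doc_terms, doc_annotations, w=0.5):
    """Linear blend of keyword and semantic evidence."""
    return (w * keyword_score(query_terms, doc_terms)
            + (1 - w) * semantic_score(query_constraints, doc_annotations))
```

A document matching one of two keywords and all semantic constraints would, under this toy blend with `w=0.5`, score 0.75.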
APA, Harvard, Vancouver, ISO, and other styles
42

Schön, Ragnar. "A cross-cultural listener-based study on perceptual features in K-pop." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-178018.

Full text
Abstract:
Recent research within the Music Information Retrieval (MIR) field has shown the relevance of perceptual features for musical signals. The idea is to identify a small set of features that are natural descriptions from a perceptual perspective. The notion of perceptual features is based on the ecological approach to music, that is, focussing on sound events rather than spectral information. Furthermore, MIR research has had an overemphasis on Western music and listeners. This leads to the question of whether the concept of perceptual features is culturally independent or not. This was investigated by having listeners of two distinct cultural backgrounds (Swedish and Chinese) rate a set of eight perceptual features: dissonance, speed, rhythmic complexity, rhythmic clarity, articulation, harmonic complexity, modality and pitch. A culturally specific dataset consisting of Korean pop songs was used to provide the stimuli. This was a subset of a larger set of songs from a previous study, selected based on genre and mood annotations to create a diverse dataset. The listener ratings were evaluated by a variety of statistical measures, including cross-correlation and ANOVA. It was found that there was a small but significant difference in the ratings of the perceptual features speed and rhythmic complexity between the two cultural groups.
APA, Harvard, Vancouver, ISO, and other styles
43

Beltrame, Walber Antonio Ramos. "Um sistema de disseminação seletiva da informação baseado em Cross-Document Structure Theory." Universidade Federal do Espírito Santo, 2011. http://repositorio.ufes.br/handle/10/6414.

Full text
Abstract:
A Selective Dissemination of Information system is a type of information system that aims to channel new intellectual products, from any source, to environments where the probability of interest is high. The inherent computational challenge is to establish a model that maps specific information needs, for a large audience, in a personalised way. This requires mediating the structure of the information unit so that it covers the plurality of attributes to be considered by the content selection process. Recent publications propose systems based on data markup over texts (metadata models), so that information processing combines computation over semi-structured data with inference mechanisms over meta-models. Such approaches only associate the data structure with the profile of interest. To improve on this, this work proposes the construction of a selective dissemination of information system based on the analysis of multiple discourses, through the automatic generation of conceptual graphs from texts, thereby also bringing unstructured data (texts) into the solution. The proposal is motivated by the Cross-Document Structure Theory model, recently disseminated in the Natural Language Processing field and aimed at automatic summarisation. The model establishes semantic correlations between discourses, for example whether there is identical, additional or contradictory information across multiple texts. One of the aspects discussed in this dissertation is that these correlations can be used in the content selection process, as already shown in other related work. Additionally, the algorithm of the original model is revised in order to make it easier to apply.
APA, Harvard, Vancouver, ISO, and other styles
44

Holmes, Monica C. (Monica Cynthia). "The Relationships of Cross-Cultural Differences to the Values of Information Systems Professionals within the Context of Systems Development." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc279348/.

Full text
Abstract:
Several studies have suggested that the effect of cultural differences among Information Systems (IS) professionals from different nations on the development and implementation of IS could be important. However, IS research has generally not considered culture when investigating the process of systems development. This study examined the relationship between the cultural backgrounds of IS designers and their process-related values with a field survey in Singapore, Taiwan, the United Kingdom and the United States. Hofstede's (1980) value survey module (i.e., Power Distance (PDI), Uncertainty Avoidance (UAI), InDiVidualism (IDV) and MASculinity/femininity) and Kumar's (1984) process-related values (i.e., technical, economic, and socio-political) were utilized in the data collection. The hypotheses tested were: whether the IS professionals differed on (H1) their cultural dimensions based on country of origin, (H2) their process-related values based on country of origin, and (H3) whether a relationship between their cultural dimensions and their process-related values existed. The countries were significantly different on their PDI, UAI and MAS, but not on their IDV. They significantly differed on their technical and socio-political values but not on their economic values. IDV and MAS significantly correlated with the process-related values in Singapore, Taiwan and the United States. In the United Kingdom, UAI significantly correlated with socio-political values; and MAS significantly correlated with technical and socio-political values. In Taiwan, UAI significantly correlated with technical and economic values. PDI did not illustrate any significant correlation with the IS process-related values in all four countries. In Singapore and the United States, UAI did not significantly correlate with any of these values. The results provide evidence that IS professionals differ on most of their cultural dimensions and IS process-related values. 
While IDV and MAS could be useful for examining the relationship between culture and systems development, research involving PDI and UAI might be of questionable benefit.
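The dimension/value relationships reported above are correlation analyses; a Pearson correlation coefficient over paired scores is the standard statistic for this. The paired scores below are invented example data, not the study's survey results.

```python
# Illustrative Pearson correlation coefficient, the kind of statistic
# behind the reported relationships between cultural dimensions and
# process-related values. Input data here is invented.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A value near +1 or -1 indicates a strong linear relationship; near 0, none.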
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Bo. "Mesurer et améliorer la qualité des corpus comparables." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM069.

Full text
Abstract:
Bilingual corpora are an essential resource for crossing the language barrier in multilingual Natural Language Processing (NLP) tasks. Most current work makes use of parallel corpora, which are mainly available for major languages and constrained domains. Comparable corpora, text collections comprising documents that cover overlapping information, are however less expensive to obtain in high volume. Previous work has shown that using comparable corpora is beneficial for several NLP tasks. Building on those studies, this thesis seeks to improve the quality of comparable corpora so as to improve the performance of applications exploiting them. The idea is advantageous since it can work with any existing method making use of comparable corpora. We first discuss the notion of comparability, inspired by experience in using bilingual corpora. This notion motivates several implementations of a comparability measure under a probabilistic framework, as well as a methodology to evaluate the ability of comparability measures to capture gold-standard comparability levels. The comparability measures are also examined in terms of robustness to dictionary changes. The experiments show that a symmetric measure relying on vocabulary overlap correlates very well with gold-standard comparability levels and is robust to dictionary changes. Based on this comparability measure, two methods, namely the greedy approach and the clustering approach, are then developed to improve the quality of any given comparable corpus. The general idea of both methods is to keep the high-quality subpart of the original corpus and to enrich the low-quality subpart with external resources. The experiments show that both methods improve the quality, in terms of comparability scores, of a given comparable corpus, with the clustering approach being more efficient than the greedy approach. 
The enhanced comparable corpus further yields better bilingual lexicons extracted with the standard extraction algorithm. Lastly, we investigate the task of Cross-Language Information Retrieval (CLIR) and the application of comparable corpora to CLIR. We develop novel CLIR models extending the recently proposed information-based models in monolingual IR. The information-based CLIR model is shown to give the best performance overall. Bilingual lexicons extracted from comparable corpora are then combined with an existing bilingual dictionary and used in CLIR experiments, resulting in significant improvement of the CLIR system.
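A symmetric, vocabulary-overlap-based comparability measure of the kind described above might be sketched as follows. The toy dictionary, the coverage definition, and the averaging scheme are illustrative assumptions, not the thesis's exact probabilistic implementation.

```python
# Illustrative sketch of a symmetric comparability measure between two
# document collections, based on vocabulary overlap through a bilingual
# dictionary. Dictionary and vocabularies here are toy assumptions.

def comparability(source_vocab, target_vocab, dictionary):
    """Average of the fractions of each side's vocabulary that is
    translatable into the other side's vocabulary.

    dictionary maps source words to sets of target-language translations.
    """
    # Build the reverse mapping so the measure is symmetric.
    reverse = {}
    for s, targets in dictionary.items():
        for t in targets:
            reverse.setdefault(t, set()).add(s)

    def coverage(vocab, other_vocab, mapping):
        translatable = sum(1 for w in vocab if mapping.get(w, set()) & other_vocab)
        return translatable / len(vocab) if vocab else 0.0

    return 0.5 * (coverage(source_vocab, target_vocab, dictionary)
                  + coverage(target_vocab, source_vocab, reverse))
```

Higher scores mean more of each corpus's vocabulary has attested translations on the other side, a rough proxy for overlapping content.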
APA, Harvard, Vancouver, ISO, and other styles
46

Feldman, Anna. "Portable language technology a resource-light approach to morpho-syntactic tagging /." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1153344391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Magableh, Murad. "A generic architecture for semantic enhanced tagging systems." Thesis, De Montfort University, 2011. http://hdl.handle.net/2086/5172.

Full text
Abstract:
The Social Web, or Web 2.0, has recently gained popularity because of its low cost and ease of use. Social tagging sites (e.g. Flickr and YouTube) offer new principles for end-users to publish and classify their content (data). Tagging systems contain free-keywords (tags) generated by end-users to annotate and categorise data. Lack of semantics is the main drawback in social tagging due to the use of unstructured vocabulary. Therefore, tagging systems suffer from shortcomings such as low precision, lack of collocation, synonymy, multilinguality, and use of shorthands. Consequently, relevant contents are not visible, and thus not retrievable while searching in tag-based systems. On the other hand, the Semantic Web, so-called Web 3.0, provides a rich semantic infrastructure. Ontologies are the key enabling technology for the Semantic Web. Ontologies can be integrated with the Social Web to overcome the lack of semantics in tagging systems. In the work presented in this thesis, we build an architecture to address a number of tagging systems drawbacks. In particular, we make use of the controlled vocabularies presented by ontologies to improve the information retrieval in tag-based systems. Based on the tags provided by the end-users, we introduce the idea of adding “system tags” from semantic, as well as social, resources. The “system tags” are comprehensive and wide-ranging in comparison with the limited “user tags”. The system tags are used to fill the gap between the user tags and the search terms used for searching in the tag-based systems. We restricted the scope of our work to tackle the following tagging systems shortcomings: - The lack of semantic relations between user tags and search terms (e.g. synonymy, hypernymy), - The lack of translation mediums between user tags and search terms (multilinguality), - The lack of context to define the emergent shorthand writing user tags. 
To address the first shortcoming, we use the WordNet ontology as a semantic lingual resource from which system tags are extracted. For the second shortcoming, we use the MultiWordNet ontology to recognise cross-language linkages between different languages. Finally, to address the third shortcoming, we use tag clusters obtained from the Social Web to create a context for defining the meaning of shorthand writing tags. A prototype of our architecture was implemented. In the prototype system, we built our own database to host videos that we imported from a real tag-based system (YouTube). The user tags associated with these videos were also imported and stored in the database. For each user tag, our algorithm adds a number of system tags that come either from semantic ontologies (WordNet or MultiWordNet), or from tag clusters imported from the Flickr website. Therefore, each system tag added to annotate the imported videos has a relationship with one of the user tags on that video. The relationship might be one of the following: synonymy, hypernymy, similar term, related term, translation, or clustering relation. To evaluate the suitability of our proposed system tags, we developed an online environment where participants submit search terms and retrieve two groups of videos to be evaluated. Each group is produced from one distinct type of tags: user tags or system tags. The videos in the two groups are produced from the same database and are evaluated by the same participants in order to have a consistent and reliable evaluation. Since user tags are used nowadays for searching real tag-based systems, we consider their efficiency as a criterion (reference) against which we compare the efficiency of the new system tags. In order to compare the relevancy between the search terms and each group of retrieved videos, we applied a statistical test. 
According to Wilcoxon Signed-Rank test, there was no significant difference between using either system tags or user tags. The findings revealed that the use of the system tags in the search is as efficient as the use of the user tags; both types of tags produce different results, but at the same level of relevance to the submitted search terms.
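The Wilcoxon signed-rank statistic used in the evaluation above can be sketched minimally as follows; the paired relevance ratings in the test are invented examples, not the thesis's evaluation data.

```python
# Minimal sketch of the Wilcoxon signed-rank W statistic for paired
# relevance ratings (e.g. user-tag results vs. system-tag results).
# A real analysis would also compute the p-value from W.

def wilcoxon_w(xs, ys):
    """Return W, the smaller of the positive and negative signed rank sums."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]  # drop zero differences
    ranked = sorted(diffs, key=abs)
    # Assign ranks 1..n by |difference|, averaging ranks over ties.
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2  # average of the tied ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w_pos = sum(r for k, r in ranks.items() if ranked[k] > 0)
    w_neg = sum(r for k, r in ranks.items() if ranked[k] < 0)
    return min(w_pos, w_neg)
```

A W close to the maximum possible rank sum (balanced positive and negative ranks) is consistent with the "no significant difference" finding reported above.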
APA, Harvard, Vancouver, ISO, and other styles
48

Mayr, Philipp. "Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in digitalen Bibliotheken." Doctoral thesis, Humboldt-Universität zu Berlin, Philosophische Fakultät I, 2009. http://dx.doi.org/10.18452/15906.

Full text
Abstract:
In spite of huge document sets for cross-database literature searches, academic users expect a high ratio of relevant, high-quality documents in result sets. Alongside direct full-text access to documents, it is particularly the order and structure of the listed results (ranking) that now play a decisive role in the design of search systems. Users also expect flexible information systems that allow them to influence the ranking of documents and to apply alternative ranking techniques. This thesis proposes two value-added approaches for search systems that address typical problems in searching scientific literature and can thereby measurably improve the retrieval situation. The two value-added services, semantic treatment of heterogeneity (taking cross-concordances as the example) and re-ranking based on Bradfordizing, which are applied in different phases of the search, are described in detail and their effectiveness for typical subject-specific searches is evaluated in the empirical part of the thesis. The preeminent goal of the thesis is to study whether the proposed alternative re-ranking approach, Bradfordizing, is operable in the domain of bibliographic databases, and whether the approach is profitable, i.e. can serve as a value-added service offered to users of information systems. We used topics and data from two evaluation projects (CLEF and KoMoHe) for the tests. The intellectually assessed documents come from seven academic abstracting and indexing databases covering the social sciences, political science, economics, psychology and medicine. 
The evaluation of the cross-concordances (82 topics altogether) shows that the retrieval results improve significantly for all cross-concordances, and that interdisciplinary cross-concordances have the strongest (positive) effect on the search results. The evaluation of Bradfordizing re-ranking (164 topics altogether) shows that core zone (core journal) documents display significantly higher precision than documents in zone 2 and zone 3 (periphery journals) for most test series. This post-Bradfordizing relevance advantage can be demonstrated empirically across a very broad range of topics, on two independent document corpora, for both journals and monographs.
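The Bradfordizing re-ranking described above can be sketched as follows: journals are sorted by how many hits they contribute, split into three zones of roughly equal article counts (core, zone 2, zone 3), and documents are re-ranked core zone first. The input format and the equal-thirds split are simplifying assumptions for illustration.

```python
# Illustrative sketch of Bradfordizing re-ranking: core-journal
# documents are promoted ahead of periphery-journal documents.
from collections import Counter

def bradfordize(docs):
    """docs: list of (doc_id, journal) in original rank order.
    Returns doc_ids re-ranked with core-zone journals first."""
    freq = Counter(journal for _, journal in docs)
    journals = [j for j, _ in freq.most_common()]  # most productive first
    third = len(docs) / 3
    zone_of, acc, zone = {}, 0, 0
    for j in journals:
        zone_of[j] = zone
        acc += freq[j]
        # Advance to the next zone once a third of all articles is covered.
        if acc >= (zone + 1) * third and zone < 2:
            zone += 1
    # Stable sort: zone first, original rank preserved within each zone.
    return [d for d, _ in sorted(docs, key=lambda dj: zone_of[dj[1]])]
```

With six hits spread over journals A (3 hits), B (2) and C (1), the A documents form the core zone and move to the top while their relative order is preserved.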
APA, Harvard, Vancouver, ISO, and other styles
49

Kralisch, Anett. "The impact of culture and language on the use of the internet." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2006. http://dx.doi.org/10.18452/15501.

Full text
Abstract:
This thesis analyses the impact of culture and language on Internet use. Three main areas were investigated: (1) the impact of culture and language on preferences for information presentation and search options, (2) the impact of culture on the need for specific website content, and (3) language as a barrier to information access and as a determinant of website satisfaction. In order to test the 33 hypotheses, data was gathered by means of logfile analyses, online surveys, and laboratory studies. It was concluded that culture clearly correlated with patterns of navigation behaviour and the use of search options. In contrast, results concerning the impact of culture on the need for website content were less conclusive. Results concerning language showed that significantly fewer L1 users than L2 users accessed a website. This can be explained by language-related cognitive effort as well as by the fact that websites in different languages are less linked to each other than websites in the same language. With regard to search option use, a strong mediating effect of domain knowledge was found: L2 users with little topic-specific knowledge differed significantly from L1 users. Furthermore, results revealed correlations between user satisfaction and language proficiency, as well as between satisfaction and the perceived amount of native-language information available online.
APA, Harvard, Vancouver, ISO, and other styles
50

Kubalík, Jakub. "Mining of Textual Data from the Web for Speech Recognition." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237170.

Full text
Abstract:
The primary goal of this project was to study language modelling for speech recognition and techniques for obtaining text data from the Web. The text introduces the basic techniques of speech recognition and describes in detail language models based on statistical methods. In particular, the work deals with criteria for evaluating the quality of language models and of speech recognition systems. The text further describes models and techniques of data mining, especially information retrieval. Problems associated with obtaining data from the Web are then presented, and the Google search engine is introduced in contrast to them. Part of the project was the design and implementation of a system for obtaining text from the Web, which is described in detail. The main goal of the work, however, was to verify whether data obtained from the Web can be of any benefit to speech recognition. The techniques described therefore attempt to find the optimal way to use Web-derived data to improve both example language models and models deployed in real recognition systems.
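A standard criterion for evaluating language model quality, as discussed above, is perplexity on held-out text. A minimal sketch for a unigram model with add-one smoothing, on invented toy data (real systems use higher-order n-gram models over far larger web-mined corpora):

```python
# Toy sketch of perplexity for a unigram language model with add-one
# (Laplace) smoothing. Corpus and test tokens are invented examples.
import math
from collections import Counter

def perplexity(train_tokens, test_tokens):
    """exp of the average negative log-probability per test token."""
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = len(train_tokens)
    log_prob = sum(math.log((counts[w] + 1) / (total + len(vocab)))
                   for w in test_tokens)
    return math.exp(-log_prob / len(test_tokens))
```

Lower perplexity on held-out text indicates that the model predicts it better; the thesis's question is whether adding web-mined training text lowers perplexity (and recognition error) in practice.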
APA, Harvard, Vancouver, ISO, and other styles