Dissertations / Theses on the topic 'Information retrieval Methodology'

Consult the top 30 dissertations / theses for your research on the topic 'Information retrieval Methodology.'

1

Yeung, Chung Kei. "Ontological model for information systems development methodology." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/702.

Full text
2

Fraser, Mark E. "Architecture and methodology for storage, retrieval and presentation of geo-spatial information." [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/UFE0000316.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from title page of source document. Document formatted into pages; contains xi, 77 p.; also contains graphics. Includes vita. Includes bibliographical references.
3

Ziesmer, Daniel J. "Developing a methodology for creating flexible instructional information technology laboratories." [Denver, Colo.] : Regis University, 2006. http://165.236.235.140/lib/DZiesmerPartI2006.pdf.

Full text
4

Muthaiyah, Saravanan. "A framework and methodology for ontology mediation through semantic and syntactic mapping." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3070.

Full text
Abstract:
Thesis (Ph. D.)--George Mason University, 2008.
Vita: p. 177. Thesis director: Larry Kerschberg. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Title from PDF t.p. (viewed July 3, 2008). Includes bibliographical references (p. 169-176). Also issued in print.
5

Järvelin, Kalervo. "User charge estimation in numeric online databases: a methodology." Tampere: University of Tampere, 1986. http://catalog.hathitrust.org/api/volumes/oclc/18665006.html.

Full text
6

Liu, Fei. "Adaptive search in consumer-generated content environment: an information foraging perspective." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/326.

Full text
Abstract:
Inefficiencies associated with online information search are becoming increasingly prevalent in digital environments due to a surge in Consumer Generated Content (CGC). Despite growing scholarly interest in investigating users' information search behavior in CGC environments, there is a paucity of studies that explore the phenomenon from a theory-guided angle. Drawing on Information Foraging Theory (IFT), we re-conceptualize online information search as a form of adaptive user behavior in response to system design constraints. Through this theoretical lens, we advance separate taxonomies for online information search tactics and strategies, both of which constitute essential building blocks of the search process. Furthermore, we construct a research framework that bridges the gap between online information search tactics and strategies by articulating how technology-enabled search tactics contribute to the fulfillment of strategic search goals. We validate our research framework via an online experiment by recruiting participants from Amazon Mechanical Turk (AMT). Participants were tasked to perform searches on custom-developed online review websites, which were modeled after a popular online review website and populated with real restaurant review data. Empirical findings reveal that the provision of different search features indeed engenders distinct search tactics, thereby allowing users varying levels of search determination control and search manipulation control. In turn, both types of search control affect users' result anticipation and search costs, which, when combined, determine the efficiency of the goal-oriented search strategy and the utility of the exploratory search strategy. This study provides valuable insights that can guide future research and practice.
7

Hamilton, John, Ronald Fernandes, Timothy Darr, Michael Graul, Charles Jones, and Annette Weisenseel. "A Model-Based Methodology for Managing T&E Metadata." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606019.

Full text
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
In this paper, we present a methodology for managing diverse sources of T&E metadata. Central to this methodology is the development of a T&E Metadata Reference Model, which serves as the standard model for T&E metadata types, their proper names, and their relationships to each other. We describe how this reference model can be mapped to a range's own T&E data and process models to provide a standardized view into each organization's custom metadata sources and procedures. Finally, we present an architecture that uses these models and mappings to support cross-system metadata management tasks and makes these capabilities accessible across the network through a single portal interface.
8

Dong, Hai. "A customized semantic service retrieval methodology for the digital ecosystems environment." Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/2345.

Full text
Abstract:
With the emergence of the Web and its pervasive intrusion on individuals, organizations, businesses etc., people now realize that they are living in a digital environment analogous to the ecological ecosystem. Consequently, no individual or organization can ignore the huge impact of the Web on social well-being, growth and prosperity, or the changes that it has brought about to the world economy, transforming it from a self-contained, isolated, and static environment to an open, connected, dynamic environment. Recently, the European Union initiated a research vision in relation to this ubiquitous digital environment, known as Digital (Business) Ecosystems. In the Digital Ecosystems environment, there exist ubiquitous and heterogeneous species, and ubiquitous, heterogeneous, context-dependent and dynamic services provided or requested by species. Nevertheless, existing commercial search engines lack sufficient semantic support: they cannot disambiguate user queries and cannot provide trustworthy and reliable service retrieval. Furthermore, current semantic service retrieval research focuses on service retrieval in the Web service field, and does not provide the service retrieval functions required to take into account the features of Digital Ecosystem services. Hence, in this thesis, we propose a customized semantic service retrieval methodology, enabling trustworthy and reliable service retrieval in the Digital Ecosystems environment, by considering the heterogeneous, context-dependent and dynamic nature of services and the heterogeneous and dynamic nature of service providers and service requesters in Digital Ecosystems.

The customized semantic service retrieval methodology comprises: 1) a service information discovery, annotation and classification methodology; 2) a service retrieval methodology; 3) a service concept recommendation methodology; 4) a quality of service (QoS) evaluation and service ranking methodology; and 5) a service domain knowledge updating, and service-provider-based Service Description Entity (SDE) metadata publishing, maintenance and classification methodology.

The service information discovery, annotation and classification methodology is designed for discovering ubiquitous service information from the Web, annotating the discovered service information with ontology mark-up languages, and classifying the annotated service information by means of specific service domain knowledge, taking into account the heterogeneous and context-dependent nature of Digital Ecosystem services and the heterogeneous nature of service providers. The methodology is realized by the prototype of a Semantic Crawler, the aim of which is to discover service advertisements and service provider profiles from webpages and to annotate the information with service domain ontologies.

The service retrieval methodology enables service requesters to precisely retrieve the annotated service information, taking into account the heterogeneous nature of Digital Ecosystem service requesters. The methodology is presented by the prototype of a Service Search Engine. Since service requesters can be divided into the group which has relevant knowledge with regard to their service requests and the group which does not, we provide two different service retrieval modules. The module for the first group enables service requesters to directly retrieve service information by querying its attributes. The module for the second group enables service requesters to interact with the search engine to denote their queries by means of service domain knowledge, and then retrieve service information based on the denoted queries.

The service concept recommendation methodology concerns the issue of incomplete or incorrect queries. The methodology enables the search engine to recommend relevant concepts to service requesters, once they find that the service concepts eventually selected cannot be used to denote their service requests. We assume that there is some degree of overlap between the selected concepts and the concepts denoting service requests, since service requesters' understanding of their requests shapes the concepts they select through a series of human-computer interactions. Therefore, a semantic similarity model is designed that seeks semantically similar concepts based on the selected concepts.

The QoS evaluation and service ranking methodology is proposed to allow service requesters to evaluate the trustworthiness of a service advertisement and rank retrieved service advertisements based on their QoS values, taking into account the context-dependent nature of services in Digital Ecosystems. The core of this methodology is an extended CCCI (Correlation of Interaction, Correlation of Criterion, Clarity of Criterion, and Importance of Criterion) metrics, which allows a service requester to evaluate the performance of a service provider in a service transaction based on QoS evaluation criteria in a specific service domain. The evaluation result is then combined with the previous results to produce the eventual QoS value of the service advertisement in a service domain. Service requesters can rank service advertisements by considering their QoS values under each criterion in a service domain.

The methodology for service domain knowledge updating, service-provider-based SDE metadata publishing, maintenance, and classification is initiated to allow: 1) knowledge users to update service domain ontologies employed in the service retrieval methodology, taking into account the dynamic nature of services in Digital Ecosystems; and 2) service providers to update their service profiles and manually annotate their published service advertisements by means of service domain knowledge, taking into account the dynamic nature of service providers in Digital Ecosystems. The methodology for service domain knowledge updating is realized by a voting system for any proposals for changes in service domain knowledge, and by assigning different weights to the votes of domain experts and normal users.

In order to validate the customized semantic service retrieval methodology, we build a prototype, a Customized Semantic Service Search Engine. Based on the prototype, we test the mathematical algorithms involved in the methodology by a simulation approach and validate the proposed functions of the methodology by a functional testing approach.
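The service concept recommendation step above relies on a semantic similarity model that finds concepts close to those a requester has already selected. The abstract does not give the model itself, so the sketch below is only a minimal illustration of the general idea, using a hypothetical toy service taxonomy and a Wu-Palmer-style path similarity; the concept names, depths and ranking are assumptions, not the thesis's actual algorithm.

```python
# Illustrative sketch only: a path-based (Wu-Palmer style) concept similarity
# over a toy service-domain taxonomy. The taxonomy and scores are hypothetical;
# they do not reproduce the thesis's semantic similarity model.

# child -> parent links of a tiny hypothetical service ontology
PARENT = {
    "Service": None,
    "TransportService": "Service",
    "HealthService": "Service",
    "TaxiService": "TransportService",
    "CourierService": "TransportService",
    "DentalService": "HealthService",
}

def depth(concept):
    """Number of edges from the concept up to the root."""
    d = 0
    while PARENT[concept] is not None:
        concept = PARENT[concept]
        d += 1
    return d

def ancestors(concept):
    """Set containing the concept and all of its ancestors."""
    result = set()
    while concept is not None:
        result.add(concept)
        concept = PARENT[concept]
    return result

def wu_palmer(a, b):
    """2 * depth(lcs) / (depth(a) + depth(b)), in [0, 1]."""
    common = ancestors(a) & ancestors(b)
    lcs_depth = max(depth(c) for c in common)
    total = depth(a) + depth(b)
    return 1.0 if total == 0 else 2.0 * lcs_depth / total

def recommend(selected, k=3):
    """Rank the other concepts by similarity to the concept the user selected."""
    candidates = [c for c in PARENT if c != selected]
    ranked = sorted(candidates, key=lambda c: wu_palmer(selected, c), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    print(recommend("TaxiService"))  # ['TransportService', 'CourierService', ...]
```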
9

Udoyen, Nsikan. "Information Modeling for Intent-based Retrieval of Parametric Finite Element Analysis Models." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14084.

Full text
Abstract:
Adaptive reuse of parametric finite element analysis (FEA) models is a common form of reuse that involves integrating new information into an archived FEA model to apply it towards a new similar physical problem. Adaptive reuse of archived FEA models is often motivated by the need to assess the impact of minor improvements to component-based designs such as addition of new structural components, or the need to assess new failure modes that arise when a device is redesigned for new operating environments or loading conditions. Successful adaptive reuse of FEA models involves reference to supporting documents that capture the formulation of the model to determine what new information can be integrated and how. However, FEA models and supporting documents are not stored in formats that are semantically rich enough to support automated inference of their relevance to a modeler's needs. The modeler's inability to precisely describe information needs and execute queries based on such requirements results in inefficient queries and time spent manually assessing irrelevant models. The central research question in this research is thus: how do we incorporate a modeler's intent into automated retrieval of FEA models for adaptive reuse? An automated retrieval method to support adaptive reuse of parametric FEA models has been developed in the research documented in this thesis. The method consists of a classification-based retrieval method based on ALE subsumption hierarchies that classify models using semantically rich description logic representations of physical problem structure, and a reusability-based ranking method. Conceptual data models have been developed for the representations that support both retrieval and ranking of archived FEA models. The method is validated using representations of FEA models of several classes of electronic chip packages. Experimental results indicate that the properties of the representation methods support effective automation of retrieval functions for FEA models of component-based designs.
10

Alazemi, Awatef M. "A new methodology for designing a multi-lingual bio-ontology : an application to Arabic-English bio-information retrieval." Thesis, University of Salford, 2010. http://usir.salford.ac.uk/26507/.

Full text
Abstract:
Ontologies are becoming increasingly important in the biomedical domain since they enable knowledge sharing in a formal, homogeneous and unambiguous way. Furthermore, biological discoveries are being reported at an extremely rapid rate. This new information is found in diverse resources that encompass a broad array of journal articles and public databases associated with different sub-disciplines within biology and medicine, in different languages. However, finding a relevant multilingual biological ontology dedicated to the digestive system among such a large collection of information is recognized as a critical knowledge gap in science. Consequently, this research argues that there is a real need for bilingual, ontology-based searching in biology, representing concepts and inter-concept relationships. An English-Arabic human digestive system ontology (DISUS) and its construction methodology were created to demonstrate this notion. The approach adopted for this research involved creating a new, integrated, re-engineered methodology for a novel, first-attempt multilingual (English-Arabic) bio-ontology for the purpose of information retrieval and knowledge discovery. The DISUS ontology is intended to represent digestive system knowledge and to ease knowledge sharing among end users in the biology and medicine context. The integrated generic methodology consists of four phases: the planning phase, which establishes the scope and purpose of the domain and organises knowledge acquisition; the conceptualisation phase, which turns unstructured knowledge into structured knowledge; the ontology construction phase, which integrates and merges the core and sub-ontologies; and the evaluation phase, carried out by domain experts, which finalises the whole work. Evaluation of the multilingual DISUS was carried out through qualitative and quantitative approaches with biological and medical experts; validation through an information retrieval technique revealed the effectiveness and robustness of the DISUS ontology as a means of concept mapping between Arabic and English ontology terms for bilingual searches.
11

Macpherson, Karen. "The development of enhanced information retrieval strategies in undergraduates through the application of learning theory: an experimental study." University of Canberra. Information Management & Tourism, 2002. http://erl.canberra.edu.au./public/adt-AUC20060405.130648.

Full text
Abstract:
In this thesis, teaching and learning issues involved in end-user information retrieval from electronic databases are examined. A two-stage model of the information retrieval process, based on information processing theory, is proposed; and a framework for the teaching of information literacy is developed. The efficacy of cognitive psychology as a theoretical framework that enhances the understanding of a number of information retrieval issues is discussed. These issues include: teaching strategies that can assist the development of conceptual knowledge of the information retrieval process; individual differences affecting information retrieval performance, particularly problem-solving ability; and expert and novice differences in search performance. The researcher investigated the impact of concept-based instruction on the development of information retrieval skills through the use of a two-stage experimental study conducted with undergraduate students at the University of Canberra, Australia. Phase 1 was conducted with 254 first-year undergraduates in 1997, with a 40-minute concept-based teaching module as the independent variable. A number of research questions were proposed: 1. Will type of instruction influence acquisition of knowledge of electronic database searching? 2. Will type of instruction influence information retrieval effectiveness? 3. Are problem-solving ability and information retrieval effectiveness related? 4. Are problem-solving ability and cognitive maturity related? 5. Are there any differences in the search behaviour of more effective and less effective searchers? Subjects completed a pre-test which measured knowledge of electronic databases, and problem-solving ability; and a post-test that measured changes in these abilities. Subjects in the experimental treatment were taught the 40-minute concept-based module, which incorporated teaching strategies grounded in learning theory. The strategies included: the use of analogy; modelling; and the introduction of complexity. The aims of the module were to foster the development of a realistic concept of the information retrieval process; and to provide a problem-solving heuristic to guide subjects in their search strategy formulation. All subjects completed two post-tests: a survey that measured knowledge of search terminology and strategies; and an information retrieval assignment that measured effectiveness of search design and execution. Results suggested that using a concept-based approach is significantly more effective than using a traditional, skills-demonstration approach in the teaching of information retrieval. This effectiveness was both in terms of increasing knowledge of the search process; and in terms of improving search outcomes. Further, results suggested that search strategy formulation is significantly correlated with electronic database knowledge, and problem-solving ability; and that problem-solving ability and level of cognitive maturity may be related. Results supported the two-stage model of the information retrieval process suggested by the researcher as one possible construct of the thinking processes underlying information retrieval. These findings led to the implementation of Phase 2 of the research in 1999. Subjects were 68 second-year undergraduate students at the University of Canberra.
In this phase, concept-based teaching techniques were used to develop four modules covering a range of information literacy skills, including: critical thinking; information retrieval strategies; evaluation of sources; and determining relevance of articles. Results confirmed that subjects taught by methods based on learning theory paradigms (the experimental treatment group) were better able to design effective searches than subjects who did not receive such instruction (the control treatment group). Further, results suggested that these teaching methods encouraged experimental group subjects to locate material from more credible sources than did control group subjects. These findings are of particular significance, given the increasing use of the unregulated internet environment as an information source. Taking into account the literature reviewed, and the results of Phases 1 and 2, a model of the information retrieval process is proposed. Finally, recognising the central importance of the acquisition of information literacy to student success at university, and to productive membership of the information society, a detailed framework for the teaching of information literacy in higher education is suggested.
12

Frie, Gudrun Louise. "Organizing, describing, analyzing, and retrieving the dissertation literature in special education : a case study using microcomputer technology to develop a personal information retrieval system." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28047.

Full text
Abstract:
This study analyzed special education dissertations published in Dissertation Abstracts International, 1980 to 1985. Keywords, describing the substantive content of each abstract and title, were assigned according to principles used in controlled and natural language indexing. A bibliometric analysis was performed to identify a core vocabulary representing frequent concepts and ideas and the most productive institutions awarding doctorates in special education. Descriptive and bivariate (chi square) analyses were also conducted illustrating relationships between demographic variables: year of completion, sex of author, degree awarded, page length, institution; and content variables: category of special education, research type, and data analysis technique. Finally, a microcomputer information retrieval system was developed to provide better access to the dissertation literature. Results indicated that a greater number of women choose to do doctoral work, graduate with Ph.D. degrees and write longer theses. The keyword index illustrated a wide diversity of topics being pursued. The microcomputer personal information retrieval system is multifaceted, is available for searching, may describe the vocabulary, and will accommodate the growing dissertation base in special education.
Faculty of Education; Department of Educational and Counselling Psychology, and Special Education (ECPS); Graduate.
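The bibliometric step described in the abstract above identifies a core vocabulary by counting how often assigned keywords recur across dissertation records. As a rough, self-contained illustration of that kind of frequency count (the records and threshold below are invented, not the author's data or system), a few lines of Python suffice:

```python
# Illustrative sketch: deriving a "core vocabulary" from keyword-indexed records
# by simple frequency counting. The records and threshold are made up.
from collections import Counter

records = [
    {"title": "Mainstreaming outcomes", "keywords": ["mainstreaming", "learning disability"]},
    {"title": "Assessment practices",   "keywords": ["assessment", "learning disability"]},
    {"title": "Teacher attitudes",      "keywords": ["mainstreaming", "attitudes"]},
]

def core_vocabulary(records, min_count=2):
    """Return keywords that occur in at least `min_count` records."""
    counts = Counter(kw for rec in records for kw in rec["keywords"])
    return [(kw, n) for kw, n in counts.most_common() if n >= min_count]

print(core_vocabulary(records))  # [('mainstreaming', 2), ('learning disability', 2)]
```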
13

Craft, Alastair. "The role of culture in music information retrieval : a model of negotiated musical meaning, and its implications in methodology and evaluation of the music genre classification task." Thesis, Goldsmiths College (University of London), 2008. http://research.gold.ac.uk/6660/.

Full text
Abstract:
This thesis proposes a new methodology for evaluation of automatic music genre classification. It is argued that the common tacit understanding that genre is an attribute of a piece is unsound, and that genre is better understood as an attribute given to a piece by a group of people. As a direct consequence of this, different groups will assign different genre labels to the same piece.
14

Asprey, Leonard Gregory. "An extension to system development methodologies for successful production imaging systems." Thesis, Queensland University of Technology, 2000.

Find full text
15

Subasic, Anthony. "Proposition d'une méthodologie d'intégration des connaissances métiers dans l’interface homme-machine des applications dédiées à la recherche d'information." Thesis, Reims, 2013. http://www.theses.fr/2013REIMS049.

Full text
Abstract:
Our goal is to develop a methodology to analyse, implement and evaluate user-trade-oriented (business-domain-oriented) information retrieval applications. The first chapter begins with a state of the art of information retrieval models and presents human-computer interaction in information retrieval systems. It shows that the user's trade context has to be taken into account in information retrieval models, especially during the design of dedicated applications. The second chapter presents the core of our methodology, called AROM, for analysing, implementing and evaluating user-trade-oriented information retrieval applications. The chapter explains how to take the user's trade context into account by integrating an ontology into the dedicated system. The AROM methodology is then compared with an existing system in order to validate it in terms of evaluation. The third chapter validates the AROM methodology through the design of two industrial case studies. The first case study was carried out during the early steps of the development of the methodology and helped us understand which concepts it needed. Thanks to the AROM methodology, the second case study offers an innovative interface for user-trade-oriented information retrieval.
16

Boccato, Vera Regina Casari. "Avaliação do uso de linguagem documentária em catálogos coletivos de bibliotecas universitárias: um estudo sociocognitivo com protocolo verbal." Marília: [s.n.], 2009. http://hdl.handle.net/11449/103373.

Full text
Abstract:
Advisor: Mariângela Spotti Lopes Fujita
Committee member: Isidoro Gil Leiva
Committee member: Maria Cristiane Barbosa Galvão
Committee member: Maria de Fátima Gonçalves Moreira Tálamo
Committee member: Plácida Leopoldina Ventura Amorim da Costa Santos
Abstract: The indexing language plays a fundamental role in the indexing and information retrieval. When the indexing language does not correspond to the necessities of representation of the contents of the documents, carried out by the indexers and the requests of bibliographical searches through the users' subject, it affects the performance of those processes, compromising the accomplishment of searches and services. The proposal is to carry through an evaluation study of the alphabetic indexing language use of the online collective catalogs, with a main focus on the technologies of representation and information retrieval, in the perspective of the university libraries and in the socio-cognitive context of indexers and users. With the general objective of to contribute for the adequate use of the alphabetical indexing languages in the indexing and information retrieval processes of specialized scientific areas in collective catalogs of the university libraries and thus, to collaborate with the process of continuous changes in the librarians' practice and, consequently, of its using community, the research had as specific objectives: arguing the role of the alphabetical indexing languages in the conception of the collective catalogs through the perspective of the online catalogs; presenting and arguing about the indexing languages evaluation studies through the quantitave, qualitative and qualitative-cognitive approaches as evaluation methods, which are supported by the theoretical and methodological fundamentals of the Organization and Knowledge Representation area, coping with the contemporary paradigms of the Information Science area ; and investigating the application of the socio-cognitive approach by Verbal Protocol for an evaluation study of the alphabetical indexing language use of the collective catalogs in specialized scientific areas in the perspective of the university libraries... (Complete abstract click electronic access below)
Doctorate
17

Boccato, Vera Regina Casari [UNESP]. "Avaliação do uso de linguagem documentária em catálogos coletivos de bibliotecas universitárias: um estudo sociocognitivo com protocolo verbal." Universidade Estadual Paulista (UNESP), 2009. http://hdl.handle.net/11449/103373.

Full text
Abstract:
The indexing language plays a fundamental role in the indexing and information retrieval. When the indexing language does not correspond to the necessities of representation of the contents of the documents, carried out by the indexers and the requests of bibliographical searches through the users’ subject, it affects the performance of those processes, compromising the accomplishment of searches and services. The proposal is to carry through an evaluation study of the alphabetic indexing language use of the online collective catalogs, with a main focus on the technologies of representation and information retrieval, in the perspective of the university libraries and in the socio-cognitive context of indexers and users. With the general objective of to contribute for the adequate use of the alphabetical indexing languages in the indexing and information retrieval processes of specialized scientific areas in collective catalogs of the university libraries and thus, to collaborate with the process of continuous changes in the librarians’ practice and, consequently, of its using community, the research had as specific objectives: arguing the role of the alphabetical indexing languages in the conception of the collective catalogs through the perspective of the online catalogs; presenting and arguing about the indexing languages evaluation studies through the quantitave, qualitative and qualitative-cognitive approaches as evaluation methods, which are supported by the theoretical and methodological fundamentals of the Organization and Knowledge Representation area, coping with the contemporary paradigms of the Information Science area ; and investigating the application of the socio-cognitive approach by Verbal Protocol for an evaluation study of the alphabetical indexing language use of the collective catalogs in specialized scientific areas in the perspective of the university libraries... (Complete abstract click electronic access below)
18

Oliveira, Greissi Gomes. "Parâmetros sociocognitivos de construção de instrumento de representação temática da informação de áreas técnicocientíficas." Universidade Federal de São Carlos, 2013. https://repositorio.ufscar.br/handle/ufscar/1111.

Full text
Abstract:
Information retrieval systems for information retrieval can occur through the search by author name, title of the work, by text words and through the theme or subject of a work. For accuracy in information search by subject, it becomes essential to use structured languages, called indexing languages, which are instruments with a view to enabling represent the contents of the collection. Thus, the theme of our research is to identify construction parameters of an instrument subject representation for information retrieval by subject in units of scientific and technical information (USTI). In this study, the units of scientific and technical information correspond to the libraries of the Federal Institute of Education, Science and Technology of São Paulo. Our research problem is characterized by the absence of social cognitive parameters for the construction of an instrument of representation of thematic information of technical and scientific areas. Identify proposed construction parameters of an instrument of thematic representation of information on the technical and scientific literature in the area of Knowledge Representation and Organization in the context of social cognitive librarian and user and the prospect of the units of scientific and technical information feds. Our overall objective was to present parameters for the construction of social cognitive instrument thematic representation of information techno-scientific areas. The specific objectives were: 1. identify the interdisciplinary Science, Technology and Society, Information Science, with emphasis on the Organization of Knowledge and Cognitive Science, to establish a collaborative dialogue in building a tool for thematic representation of information 2. present on the documentary languages, viewed as representation language for subject areas of technical and scientific information retrieval systems; 3. identify methods of construction of alphabetical indexing languages, given the technical and scientific literature in the area of Knowledge Representation and Organization in Information Science, 4. describe the context of social cognitive librarian and user units of scientific and technical information feds 5. the views of librarians and users of information units on federal technical-scientific parameters in the collaborative construction of an instrument subject representation in Science, Technology and Education, from the application of verbal protocol viewed as a qualitative methodology with sociocognitive approach. Our research is justified by the need for an indexing language for representation and retrieval of information in units of scientific and technical information to enable the correct representation of information by librarians in indexing activity and access to information for users seeking quality and specificity in recovery the information. As methodology we conducted bibliographic research on thematic Science, Technology, Society, Information Science, Cognitive Science, Knowledge Organization, Documentary Languages, Intelligence Units, Thesaurus, Federal Institute of Education. Subsequently, we apply the Protocol Verbal Group (PVG), knowing opinions of librarians and users on indicators for the construction of the instrument. The results were analyzed from grants acquired by literature accompanied by statements of participants and enabled PVG Eleven parameters indicate the collaborative construction of an instrument subject representation: 1. 
characterization of the user profile (target) that will make use of language learners and teachers of upper-level courses and medium 2. terms must serve the needs of representation and retrieval of information (guarantees and literary usage) 3. terms must originate in natural language and specialty (fair usage and literary) 4. terms should represent the vocabulary usage of the organization (organizational guarantee) 5. language must have both generic and specific terms; 6. language should promote control of synonyms; 7. language must identify the homonymous accordance with the use of qualifiers 8. establishment of logical-semantic relationships between terms of orders hierarchical, associative and equivalency; 9. inclusion of terms of scope notes when needed; 10. assignment of terms must be contemplating the balance between comprehensiveness and specificity achieved by the information retrieval system; 11. identification / building information retrieval system (catalog) that includes also factors such as being available online, offering reservation services online and renewal; allow viewing of information such as: cover, table of contents, introduction and full text materials contained in the collection of USTI; possess and to enable a feature suggestion terms, the timing of the search, both to fix the search expression as for the storage of subjects / search terms. This feature is important in terms of collection, aimed at updating process of language also from the user's perspective; provide and ensure accessibility of the language so that the librarian can realize the representation of information with it and from it, and allow available the accessibility of the language so that the user can perform a search by subject, for recovering useful information with it and from it. We believe that the diversity of the public pointed verbal protocols with respect to factors such as age, level of education (knowledge) and different areas of expertise, which makes us think and recommend to the IFSP USTI the construction and use of an indexing language with vocabulary arising from natural language and specialty (as in a thesaurus) but with the logical structure semantics between terms / headers from a list of subject headings, covering scope also notes that become necessary.
19

Salamon, Justin J. "Melody extraction from polyphonic music signals." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/123777.

Full text
Abstract:
Music was the first mass-market industry to be completely restructured by digital technology, and today we can have access to thousands of tracks stored locally on our smartphone and millions of tracks through cloud-based music services. Given the vast quantity of music at our fingertips, we now require novel ways of describing, indexing, searching and interacting with musical content. In this thesis we focus on a technology that opens the door to a wide range of such applications: automatically estimating the pitch sequence of the melody directly from the audio signal of a polyphonic music recording, also referred to as melody extraction. Whilst identifying the pitch of the melody is something human listeners can do quite well, doing this automatically is highly challenging. We present a novel method for melody extraction based on the tracking and characterisation of the pitch contours that form the melodic line of a piece. We show how different contour characteristics can be exploited in combination with auditory streaming cues to identify the melody out of all the pitch content in a music recording using both heuristic and model-based approaches. The performance of our method is assessed in an international evaluation campaign where it is shown to obtain state-of-the-art results. In fact, it achieves the highest mean overall accuracy obtained by any algorithm that has participated in the campaign to date. We demonstrate the applicability of our method both for research and end-user applications by developing systems that exploit the extracted melody pitch sequence for similarity-based music retrieval (version identification and query-by-humming), genre classification, automatic transcription and computational music analysis. The thesis also provides a comprehensive comparative analysis and review of the current state-of-the-art in melody extraction and a first of its kind analysis of melody extraction evaluation methodology.
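The melody extraction approach summarised above selects the melodic line by characterising pitch contours (for example, their salience and duration). The following is only a toy sketch of that general idea, operating on already-extracted contours represented as plain Python objects with invented feature weights; it is not the algorithm evaluated in the thesis.

```python
# Illustrative sketch: picking "melody" contours from already-extracted pitch
# contours by scoring simple contour features. Contours, features and weights
# are hypothetical; this is not the thesis's evaluated algorithm.
from dataclasses import dataclass

@dataclass
class Contour:
    start: float        # start time in seconds
    pitches: list       # frame-wise pitch values in Hz
    saliences: list     # frame-wise salience values

    @property
    def duration(self):
        return 0.01 * len(self.pitches)  # assume a 10 ms hop size

    @property
    def mean_salience(self):
        return sum(self.saliences) / len(self.saliences)

def melody_score(c, w_salience=1.0, w_duration=0.5):
    """Heuristic score: more salient and longer contours are more likely melody."""
    return w_salience * c.mean_salience + w_duration * c.duration

def select_melody(contours, threshold=1.0):
    """Keep contours whose score exceeds a threshold, ordered by start time."""
    kept = [c for c in contours if melody_score(c) >= threshold]
    return sorted(kept, key=lambda c: c.start)

contours = [
    Contour(0.0, [220.0] * 200, [0.9] * 200),  # strong, long contour
    Contour(0.5, [110.0] * 30,  [0.3] * 30),   # weak accompaniment contour
]
print([round(melody_score(c), 2) for c in contours])  # [1.9, 0.45]
print(len(select_melody(contours)))                    # 1
```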
20

"A systemic methodology for planning and developing computerized business information system." Chinese University of Hong Kong, 1988. http://library.cuhk.edu.hk/record=b5885923.

Full text
21

Gray, Pamela N. "Legal knowledge engineering methodology for large-scale expert systems." University of Western Sydney, College of Business, School of Law, 2007. http://handle.uws.edu.au:8081/1959.7/20012.

Full text
Abstract:
Legal knowledge engineering methodology for epistemologically sound, large scale legal expert systems is developed in this dissertation. A specific meta-epistemological method is posed for the transformation of legal domain epistemology to large scale legal expert systems; the method has five stages: 1. domain epistemology; 2. computational domain epistemology; 3. shell epistemology; 4. programming epistemology; and 5. application epistemology and ontology. The nature of legal epistemology is defined in terms of a deep model that divides the information of the ontology of legal possibilities into the three sorts of logic premises, namely, (1) rules of law for extended deduction, (2) material facts of cases for induction that establishes rule antecedents, and (3) reasons for rules, including justifications, explanations or criticisms of rules, for abduction. Extended deduction is distinguished for automation, and provides a map for locating, relatively, associated induction and abduction. Added to this is a communication system that involves issues of cognition and justice in the legal system. The Appendix sets out a sample of draft rule maps of the United Nations Convention on Contracts for the International Sale of Goods, known as the Vienna Convention, to illustrate that the substantive epistemology of the international law can be mapped to the generic epistemology of the shell. This thesis deflects the ontological solution back to the earlier rule-based, case-based and logic advances, with a definition of artificial legal intelligence that rests on legal epistemology; added to the definition is a transparent communication system of a user interface, including an interactive visualisation of rule maps, and the heuristics that process input and produce output to give effect to the legal intelligence of an application. The additions include an epistemological use of the ontology of legal possibilities to complete legal logic, for the purposes of processing specific legal applications. While the specific meta-epistemological methodology distinguishes domain epistemology from the epistemologies of artificial legal intelligence, namely computational domain epistemology, program design epistemology, programming epistemology and application epistemology, the prototypes illustrate the use of those distinctions, and the synthesis effected by that use. The thesis develops the Jurisprudence of Legal Knowledge Engineering by an artificial metaphysics.
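As a rough illustration of the first sort of premise mentioned above, rules of law used for deduction, the sketch below runs a tiny forward-chaining engine over propositional rules. The rule content (an offer plus an acceptance yields an agreement) is a simplified, hypothetical example, not the dissertation's rule maps of the Vienna Convention.

# Hedged sketch: a minimal forward-chaining engine over hypothetical legal rules.
def forward_chain(facts: set, rules: list) -> set:
    """Apply rules (antecedents -> consequent) until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

rules = [
    ({"offer", "acceptance"}, "agreement"),                      # toy rule, not the Convention text
    ({"agreement", "intention_to_be_bound"}, "contract_concluded"),
]
facts = {"offer", "acceptance", "intention_to_be_bound"}
print(forward_chain(facts, rules))  # includes "contract_concluded"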
Doctor of Philosophy (Ph.D)
APA, Harvard, Vancouver, ISO, and other styles
22

Lui, Keith J., University of Western Sydney, College of Health and Science, and School of Computing and Mathematics. "A score for measuring the quality of controlled experiments in computing and health informatics." 2008. http://handle.uws.edu.au:8081/1959.7/38518.

Full text
Abstract:
The controlled experiment is a highly regarded form of scientific inquiry because its properties permit conclusions with the most scientific rigor. Controlled experimentation is important for the scientific foundation of disciplines that claim to be scientific. It is also important to conduct such experiments properly: they come at a high cost in time, effort and participation; there is an associated esteem that confers credibility; and there is often an ethical responsibility to human participants. However, the quality of controlled experiments performed in health informatics and computer science is often poor. One way to address quality issues is to measure quality, following the example of the instruments (scales or scores) created to measure the quality of controlled medical trials, which have also had problems with experimental quality. The rationale for this research was that no satisfactory scales had been developed for informatics. There is also no empirical research into the construct of experimental quality in informatics, which this research addresses.
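For readers unfamiliar with quality-scoring instruments, the sketch below shows the general shape of a checklist-based score in the style of instruments such as the Jadad scale. The checklist items and weights are hypothetical illustrations, not the score developed in this thesis.

# Hedged sketch: a hypothetical checklist-based quality score for a controlled experiment.
CHECKLIST = {
    "random_allocation": 1,
    "allocation_concealed": 1,
    "blinded_outcome_assessment": 1,
    "dropouts_reported": 1,
    "power_analysis_reported": 1,
}

def quality_score(report: dict) -> int:
    """Sum the weights of the checklist items the reported experiment satisfies."""
    return sum(w for item, w in CHECKLIST.items() if report.get(item, False))

example_report = {"random_allocation": True, "dropouts_reported": True}
print(quality_score(example_report))  # -> 2 out of a maximum of 5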
Doctor of Philosophy (PhD)
APA, Harvard, Vancouver, ISO, and other styles
23

Lourenço, Artur Pedro Duarte Reis Bastos. "A new methodology for the analysis and validation of clusters and biclusters of genes." Master's thesis, 2006. http://hdl.handle.net/10451/14049.

Full text
Abstract:
Master's thesis in Bioinformatics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2006
The era of post-genomics and high-throughput technologies brings the need for developing new methods to cope with very large amounts of data. Clustering and biclustering algorithms have been used in bioinformatics to discover patterns in biological data. The validation of clustering and biclustering results is essential for their analysis. This dissertation presents a new methodology for validating and characterizing clustering and biclustering results, which uses PageRank concepts to rank Gene Ontology terms. The top ranked terms associated to each set of genes describe their biological interpretation. The validation methodology was implemented in a new tool, designated TermRank, and was evaluated through characterization of a set of artificial clusters. The methodology was also used to validate the output of a biclustering algorithm applied to real data from a study of the global response of Saccharomyces cerevisiae to a chemical stress. The evaluation showed that TermRank produces correct characterizations of the artificially generated clusters and that the biclusters generated by the validated biclustering algorithm are composed of related genes.
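The following sketch illustrates the underlying idea of ranking Gene Ontology terms with PageRank, as the abstract describes. The toy term graph, the annotation counts and the personalization weights are assumptions made for illustration only and do not reproduce the TermRank tool itself.

# Hedged sketch: PageRank over a tiny, toy is-a graph of GO terms.
import networkx as nx

# Toy is-a hierarchy: edges point from a term to its parent term.
go_graph = nx.DiGraph([
    ("GO:0006412", "GO:0044238"),   # translation -> primary metabolic process (simplified)
    ("GO:0044238", "GO:0008152"),   # primary metabolic process -> metabolic process
    ("GO:0006950", "GO:0008150"),   # response to stress -> biological_process
    ("GO:0008152", "GO:0008150"),   # metabolic process -> biological_process
])

# Terms directly annotated to the genes of a (hypothetical) cluster receive a
# higher restart probability, so parent terms are ranked through the ontology
# structure rather than by raw annotation counts alone.
annotated = {"GO:0006412": 3, "GO:0006950": 1}
personalization = {n: annotated.get(n, 0.01) for n in go_graph.nodes}

scores = nx.pagerank(go_graph, alpha=0.85, personalization=personalization)
for term, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(term, round(score, 3))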
APA, Harvard, Vancouver, ISO, and other styles
24

Newsom, Eric Tyner. "An exploratory study using the predicate-argument structure to develop methodology for measuring semantic similarity of radiology sentences." Thesis, 2013. http://hdl.handle.net/1805/3666.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The amount of information produced as electronic free text in healthcare is increasing to levels that humans can no longer process for the advancement of their professional practice. Information extraction (IE) is a sub-field of natural language processing whose goal is the data reduction of unstructured free text. Central to IE is an annotated corpus that frames how IE methods should create the logical expressions necessary for processing the meaning of text. Most annotation approaches seek to maximize meaning and knowledge by chunking sentences into phrases and mapping these phrases to a knowledge source to create a logical expression. However, these studies consistently have problems addressing semantics, and none has addressed the issue of semantic similarity (or synonymy) to achieve data reduction. A successful methodology for data reduction depends on a framework that can represent currently popular phrasal methods of IE while also fully representing the sentence. This study explores and reports on the benefits, problems and requirements of using the predicate-argument structure (PAS) as that framework. The text from which PAS structures are formed is a convenience sample from a prior study: ten synsets of 100 unique sentences from radiology reports deemed by domain experts to mean the same thing.
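As a simple illustration of how predicate-argument representations can support a similarity judgement between sentences, the sketch below compares two radiology-style sentences reduced to (predicate, role, argument) triples. The triples and the Jaccard measure are assumptions made for the example; they are not the study's annotation scheme or its similarity methodology.

# Hedged sketch: Jaccard overlap of hypothetical predicate-argument triples.
def pas_similarity(pas_a: set, pas_b: set) -> float:
    """Jaccard similarity between two sets of PAS triples."""
    if not pas_a and not pas_b:
        return 1.0
    return len(pas_a & pas_b) / len(pas_a | pas_b)

sent_a = {("show", "ARG1", "opacity"), ("show", "ARGM-LOC", "right lower lobe")}
sent_b = {("show", "ARG1", "opacity"), ("show", "ARGM-LOC", "left lower lobe")}
print(round(pas_similarity(sent_a, sent_b), 2))  # -> 0.33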
APA, Harvard, Vancouver, ISO, and other styles
25

Müller, Martin Eric. "Inducing Conceptual User Models." Doctoral thesis, 2002. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2002042911.

Full text
Abstract:
User Modeling and Machine Learning for User Modeling have both become important research topics and key techniques in recent adaptive systems. One of the most intriguing problems of the 'information age' is how to filter relevant information from the huge amount of available data. This problem is tackled by using models of the user's interests in order to increase precision and discriminate interesting information from uninteresting data. However, any user modeling approach suffers from several major drawbacks. First, user models built by the system need to be inspectable and understandable by the user himself. Second, users in general are not willing to give feedback on their satisfaction with the delivered results, and without any evidence of the user's interest it is hard to induce a hypothetical user model at all. Finally, most current systems do not draw a clear distinction between domain knowledge and the user model, which makes the adequacy of a user model hard to determine. This thesis presents the novel approach of conceptual user models. Conceptual user models are easy to inspect and understand and allow the system to explain its actions to the user. It is shown that ILP can be applied to the task of inducing user models from feedback, and a method for using mutual feedback for sample enlargement is introduced. Results are evaluated independently of domain knowledge within a clear machine learning problem definition. The whole concept presented is realized in a meta web search engine called OySTER.
APA, Harvard, Vancouver, ISO, and other styles
26

Pandit, Yogesh. "Context specific text mining for annotating protein interactions with experimental evidence." Thesis, 2014. http://hdl.handle.net/1805/3809.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Proteins are the building blocks of a biological system. They interact with other proteins to produce unique biological phenomena, and protein-protein interactions play a valuable role in understanding the molecular mechanisms occurring in any biological system. Protein interaction databases are a rich source of protein interaction-related information: they gather large amounts of information from the published literature to enrich their data, and expert curators put in most of this effort manually. The amount of accessible and publicly available literature is growing very rapidly, and manual annotation is a time-consuming process; at the current rate of growth, the information cannot be handled by manual curation alone. Tools are therefore needed to process these huge amounts of data and extract the valuable gist that can help curators proceed faster. When extracting protein-protein interaction evidence from the literature, a mere mention of a certain protein found by look-up approaches cannot by itself validate the interaction; supporting protein interaction information with experimental evidence can help this cause. In this study, we apply machine-learning-based classification techniques to classify a given protein-interaction-related document into an interaction detection method, using biological attributes and experimental factors, different combinations of which define any particular interaction detection method. Then, using the predicted detection methods, proteins identified by named entity recognition techniques, and a decomposition of the parts-of-speech composition, we search for sentences with experimental evidence for a protein-protein interaction. We report an accuracy of 75.1% with an F-score of 47.6% on a dataset containing 2,035 training documents and 300 test documents.
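A minimal, self-contained sketch of the kind of supervised text classification the abstract describes, mapping an interaction-related document to an interaction detection method, is shown below. The documents, labels and pipeline choices (TF-IDF features with logistic regression) are illustrative assumptions, not the study's corpus, feature set or classifier.

# Hedged sketch: toy document classification into interaction detection methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.pipeline import make_pipeline

train_docs = [
    "bait and prey constructs were co-transformed into yeast",
    "proteins were co-immunoprecipitated with anti-FLAG beads",
    "two hybrid screening identified the interacting partner",
    "the complex was pulled down and detected by western blot",
]
train_labels = ["two-hybrid", "coip", "two-hybrid", "coip"]

test_docs = [
    "a yeast two hybrid assay confirmed the interaction",
    "co-immunoprecipitation followed by immunoblotting",
]
test_labels = ["two-hybrid", "coip"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_docs, train_labels)
pred = clf.predict(test_docs)

print("accuracy:", accuracy_score(test_labels, pred))
print("macro F1:", f1_score(test_labels, pred, average="macro"))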
APA, Harvard, Vancouver, ISO, and other styles
27

Carney, Timothy Jay. "An Organizational Informatics Analysis of Colorectal, Breast, and Cervical Cancer Screening Clinical Decision Support and Information Systems within Community Health Centers." Thesis, 2013. http://hdl.handle.net/1805/3243.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
A study design has been developed that employs a dual modeling approach to identify factors associated with facility-level cancer screening improvement and how this is mediated by the use of clinical decision support. This dual modeling approach combines principles of (1) Health Informatics, (2) Cancer Prevention and Control, (3) Health Services Research, and (4) Organizational Change/Theory. The study design builds upon the constructs of a conceptual framework developed by Jane Zapka, namely (1) organizational and/or practice settings, (2) provider characteristics, and (3) patient population characteristics. These constructs have been operationalized as measures in a 2005 HRSA/NCI Health Disparities Cancer Collaborative inventory of 44 community health centers. The first model, a statistical one, will use sequential multivariable regression to test for the organizational determinants that may account for the presence and intensity of use of clinical decision support (CDS) and information systems (IS) within community health centers for colorectal, breast, and cervical cancer screening; a subsequent test will assess the impact of CDS/IS on provider-reported cancer screening improvement rates. The second model, a computational one, will use a multi-agent model of network evolution called CONSTRUCT® to identify the agents, tasks, knowledge, groups, and beliefs associated with cancer screening practices and CDS/IS use, in order to inform both CDS/IS implementation and cancer screening intervention strategies. This virtual experiment will facilitate hypothesis generation through computer simulation exercises. The outcome of this research will be to identify barriers and facilitators to improving community health center facility-level cancer screening performance using CDS/IS as an agent of change. Stakeholders for this work include both national and local community health center IT leadership, as well as clinical managers deploying IT strategies to improve cancer screening among vulnerable patient populations.
APA, Harvard, Vancouver, ISO, and other styles
28

Lombard, Orpha Cornelia. "The construction and use of an ontology to support a simulation environment performing countermeasure evaluation for military aircraft." Diss., 2014. http://hdl.handle.net/10500/14411.

Full text
Abstract:
This dissertation describes a research study conducted to determine the benefits and use of ontology technologies to support a simulation environment that evaluates countermeasures employed to protect military aircraft. Within the military, aircraft represent a significant investment, and these valuable assets need to be protected against various threats, such as man-portable air-defence systems. To counter attacks from these threats, countermeasures are deployed, developed and evaluated by utilising modelling and simulation techniques. The system described in this research simulates real-world scenarios of aircraft, missiles and countermeasures in order to assist in the evaluation of infra-red countermeasures against missiles in specified scenarios. Traditional ontology has its origin in philosophy, describing what exists and how objects relate to each other. The use of formal ontologies in Computer Science has brought new possibilities for the modelling and representation of information and knowledge in several domains. These advantages also apply to military information systems, where ontologies support the complex nature of military information. After considering ontologies and their advantages against the requirements for enhancing the simulation system, an ontology was constructed by following a formal development methodology. Design research, combined with an adaptive development methodology, was conducted in a unique way, thereby contributing to establishing design research as a formal research methodology. The ontology was constructed to capture the knowledge of the simulation system environment, and its use supports the functions of the simulation system in the domain. The research study contributes to better communication among the people involved in the simulation studies, accomplished through a shared vocabulary and a knowledge base for the domain. These contributions affirm that ontologies can be used successfully to support military simulation systems.
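For readers unfamiliar with how domain knowledge is captured in a machine-readable ontology, the sketch below builds a few classes and one property with rdflib. The class names (Aircraft, Missile, Countermeasure, Flare) and the isProtectedBy relation are hypothetical stand-ins for the domain, not the ontology actually constructed in the dissertation.

# Hedged sketch: a tiny, hypothetical countermeasure ontology built with rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

CM = Namespace("http://example.org/countermeasure#")
g = Graph()
g.bind("cm", CM)

for cls in ("Aircraft", "Missile", "Countermeasure", "Flare"):
    g.add((CM[cls], RDF.type, OWL.Class))

g.add((CM.Flare, RDFS.subClassOf, CM.Countermeasure))      # a flare is a kind of countermeasure
g.add((CM.isProtectedBy, RDF.type, OWL.ObjectProperty))    # relates an aircraft to a countermeasure
g.add((CM.isProtectedBy, RDFS.domain, CM.Aircraft))
g.add((CM.isProtectedBy, RDFS.range, CM.Countermeasure))

print(g.serialize(format="turtle"))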
Computing
M. Tech. (Information Technology)
APA, Harvard, Vancouver, ISO, and other styles
29

Ingram, Annette. "Argivale inligtingontsluiting en -herwinning vir die historiese navorser." Thesis, 2000. http://hdl.handle.net/10500/16988.

Full text
Abstract:
Summaries in Afrikaans and English
Afrikaans text
The purpose of this study was to investigate archival information organisation and retrieval at the end of the 20th century, especially with regard to serious historical research. Information was collected by the following means: an extensive literature survey, interviews with archivists in both state and private archives, and an empirical survey by means of a questionnaire distributed amongst mainly serious historical researchers. The researcher personally examined archival finding aids such as inventories, guides and indexes, as well as the computerised archival database, to gain first-hand knowledge of the advantages and disadvantages of these research aids. It was found that technological developments had changed the nature of archives and archival sources, the most important adjustment being to electronic information sources and oral history archives. The impact of computer networks on the archival milieu, as well as the advantages and disadvantages of dealing with electronic archival records and oral history archives, was subsequently discussed in detail. Although the organisation and description of archival source material are still based on the principles of provenance and original order, certain adaptations are necessary; without these processes no access to archival sources is possible. Effective archival information retrieval can only be achieved if sufficient funds are made available and well-trained, experienced staff are appointed. Subsequently, the changing nature of historical research, especially with regard to the choice of research topics, was discussed. Modern tendencies such as history from below, or the history of everyday life, and the history of women were investigated, in contrast to the traditional historical emphasis on important political figures and events. The research further showed that accessibility to archival information sources is of paramount importance to the historical researcher, and the important role of the archivist and of archival finding aids is emphasised. During the empirical phase the answers of respondents about their visits to archives were analysed, and the experiences of historical researchers with regard to archival finding aids, computerised archival networks and reading room staff are discussed. The study concludes with important findings and a number of recommendations pertaining to historical research as an archival activity in a changing information world.
Information Science
D. Litt. et Phil. (Information Science)
APA, Harvard, Vancouver, ISO, and other styles
30

Park, Seong Cheol. "Indianapolis emergency medical service and the Indiana Network for Patient Care : evaluating the patient match process." Thesis, 2014. http://hdl.handle.net/1805/3808.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In 2009, Indianapolis Emergency Medical Service (I-EMS, formerly Wishard Ambulance Service) launched an electronic medical record system within their ambulances and started to exchange patient data with the Indiana Network for Patient Care (INPC). This unique system allows EMS personnel in an ambulance to obtain important medical information from the incident scene, prior to the patient’s arrival at the accepting hospital. In this retrospective cohort study, we found that EMS personnel made 3,021 patient data requests (14% of the 21,215 EMS transports) during a one-year period, with a “success” match rate of 46% and a match “failure” rate of 17%. The three major factors causing match “failure” were (1) ZIP code (55%), (2) patient name (22%), and (3) birth date (12%). This study shows that the ZIP code is not a robust identifier in the patient identification process; non-ZIP code identifiers may be a better choice, owing to inaccuracies in and changes to the ZIP code in a patient’s record.
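The sketch below illustrates the kind of match logic the study evaluates: an incoming EMS request is compared with registry records on name, birth date and ZIP code, and a stale ZIP code causes a strict match to fail. The records, field names and matching rule are hypothetical; the actual INPC matching algorithm is not reproduced here.

# Hedged sketch: hypothetical patient-match rule on name, birth date and ZIP code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Patient:
    name: str
    birth_date: str   # ISO date string, e.g. "1970-01-31"
    zip_code: str

registry = [
    Patient("JOHN SMITH", "1970-01-31", "46202"),
    Patient("MARY JONES", "1985-06-15", "46220"),
]

def match(request: Patient, records, require_zip=True):
    """Return records agreeing on name and birth date, optionally on ZIP code."""
    hits = [r for r in records
            if r.name == request.name and r.birth_date == request.birth_date]
    if require_zip:
        hits = [r for r in hits if r.zip_code == request.zip_code]
    return hits

# A request with a stale ZIP code fails the strict match but succeeds once
# the ZIP requirement is relaxed -- the pattern the study reports.
request = Patient("JOHN SMITH", "1970-01-31", "46256")
print(len(match(request, registry, require_zip=True)))   # -> 0 (match failure)
print(len(match(request, registry, require_zip=False)))  # -> 1 (match success)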
APA, Harvard, Vancouver, ISO, and other styles