Dissertations / Theses on the topic 'Biomedical Informatics'

Consult the top 50 dissertations / theses for your research on the topic 'Biomedical Informatics.'


1

Moffitt, Richard Austin. "Quality control for translational biomedical informatics." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/34721.

Abstract:
Translational biomedical informatics is the application of computational methods to facilitate the translation of basic biomedical science to clinical relevance. An example of this is the multi-step process in which large-scale microarray-based discovery experiments are refined into reliable clinical tests. Unfortunately, the quality of microarray data is a major issue that must be addressed before microarrays can reach their full potential as a clinical molecular profiling tool for personalized and predictive medicine. A new methodology, titled caCORRECT, has been developed to replace or augment existing microarray processing technologies in order to improve the translation of microarray data to clinical relevance. Results of validation studies show that caCORRECT is able to improve the mean accuracy of microarray gene expression by as much as 60%, depending on the magnitude and size of artifacts on the array surface. As part of a case study to demonstrate the widespread usefulness of caCORRECT, the entire pipeline of biomarker discovery has been executed for the clinical problem of classifying renal cell carcinoma (RCC) specimens into appropriate subtypes. As a result, we have discovered and validated a novel two-gene RT-PCR assay, which can distinguish between the clear cell and oncocytoma RCC subtypes with near-perfect accuracy. As an extension to this work, progress has been made towards a quantitative quantum dot immunohistochemical assay, which is expected to be more clinically viable than a PCR-based test.
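The abstract does not reproduce caCORRECT's algorithm, so the following is only a hedged sketch of the general idea of flagging surface artifacts: compare each probe's intensity to the median of its physical neighborhood on the chip and mask robust-z outliers. The function name, window size, and cutoff are invented for illustration, not caCORRECT's.

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_spatial_artifacts(intensities, window=5, z_cutoff=3.0):
    """Flag probes whose intensity deviates strongly from their spatial
    neighborhood on the array: a toy stand-in for the kind of artifact
    detection a tool like caCORRECT performs, not its actual method.

    intensities : 2-D array of log-scale probe intensities laid out by
                  physical position on the chip.
    Returns a boolean mask of suspected artifact positions.
    """
    local_median = median_filter(intensities, size=window)
    residual = intensities - local_median
    # Robust z-score of residuals (1.4826 * MAD approximates sigma)
    mad = np.median(np.abs(residual - np.median(residual)))
    z = residual / (1.4826 * mad + 1e-12)
    return np.abs(z) > z_cutoff

# Example: a synthetic 64x64 array with a bright scratch artifact
rng = np.random.default_rng(0)
chip = rng.normal(8.0, 0.5, size=(64, 64))
chip[30:33, 10:50] += 4.0            # simulated surface artifact
mask = flag_spatial_artifacts(chip)
print(f"{mask.sum()} probes flagged for replacement or down-weighting")
```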
2

Stokes, Todd Hamilton. "Development of a visualization and information management platform in translational biomedical informatics." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33967.

Abstract:
Translational Biomedical Informatics (TBMI) is an emerging discipline expanding beyond traditional bioinformatics, with a focus on developing computational technologies for real-world biomedical practice. The goal of my Ph.D. research is to address a few key challenges in TBMI, including: (1) the high quality and reproducibility required by medical applications when processing high-throughput data, (2) the need for knowledge management solutions that allow molecular data to be handled and evaluated by researchers, regulators, and doctors collectively, (3) the need for near real-time, efficient access to decision-oriented visualizations of integrated data and data processing results, and (4) the need for an integrated solution that can evolve as medical consensus evolves, without requiring retraining, overhaul or replacement. This dissertation resulted in the development and adoption of concrete web-based application deliverables in regular use by bioinformaticians, clinicians, biologists and nanotechnologists. These include: the Chip Artifact Correction (caCORRECT) web site and grid services, the ArrayWiki community microarray repository, and the SimpleVisGrid visualization grid services (including eGOMiner, nanoDRIVE, PathwayVis and SphingoVisGrid).
3

Cao, Xi Hang. "On Leveraging Representation Learning Techniques for Data Analytics in Biomedical Informatics." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/586006.

Abstract:
Computer and Information Science
Ph.D.
Representation Learning is ubiquitous in state-of-the-art machine learning workflows, including data exploration/visualization, data preprocessing, data model learning, and model interpretation. However, the majority of newly proposed Representation Learning methods are best suited to problems with a large amount of data, and applying them to problems with limited data may lead to unsatisfactory performance. Therefore, there is a need for Representation Learning methods tailored to "small data" problems such as clinical and biomedical data analytics. In this dissertation, we describe our studies tackling challenging clinical and biomedical data analytics problems from four perspectives: data preprocessing, temporal data representation learning, output representation learning, and joint input-output representation learning. Data scaling is an important component of data preprocessing. The objective of data scaling is to transform the raw features into reasonable ranges so that each feature of an instance is exploited equally by the machine learning model. For example, in a credit fraud detection task, a model may use a person's credit score and annual income as features, but because the ranges of these two features differ, the model may weigh one more heavily than the other. In this dissertation, I thoroughly introduce the data scaling problem and describe an approach that intrinsically handles outliers and leads to better model prediction performance. Learning new representations for data in unstandardized forms is a common task in data analytics and data science applications. Usually, data come in tabular form: the data are represented by a table in which each row is the feature vector of an instance. However, it is also common that data are not in this form, for example, texts, images, and video/audio records. In this dissertation, I describe the challenge of analyzing imperfect multivariate time series data in healthcare and biomedical research and show that the proposed method can learn a powerful representation that accommodates various imperfections and improves prediction performance. Learning output representations is a newer aspect of Representation Learning, and its applications have shown promising results in complex tasks, including computer vision and recommendation systems. The main objective of an output representation algorithm is to exploit the relationships among the target variables so that a prediction model can use their similarities and potentially improve prediction performance. In this dissertation, I describe a learning framework that incorporates output representation learning into time-to-event estimation; in particular, the approach learns the model parameters and time vectors simultaneously. Experimental results not only show the effectiveness of this approach but also demonstrate its interpretability, through visualizations of the time vectors in 2-D space. Learning the input (feature) representation, the output representation, and the predictive model are closely related to each other, so it is a natural extension of the state of the art to consider them together in a joint framework.
In this dissertation, I describe a large-margin ranking-based learning framework for time-to-event estimation with joint input embedding learning, output embedding learning, and model parameter learning. In the framework, I cast the functional learning problem as a kernel learning problem and, adopting results from Multiple Kernel Learning, propose an efficient optimization algorithm. Empirical results show its effectiveness on several benchmark datasets.
Temple University--Theses
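The abstract does not give the author's scaling formula. As a generic illustration of scaling that "intrinsically handles the outlier problem", here is a median/IQR (robust) scaler, a standard alternative to min-max or z-score scaling when features such as annual income contain extreme values; the function name and toy credit data are invented.

```python
import numpy as np

def robust_scale(X):
    """Scale each column by its median and interquartile range (IQR),
    so that extreme values (e.g. one very large income) do not dominate
    the transformation as they would with min-max scaling. This is a
    generic robust scaler, not the dissertation's method."""
    X = np.asarray(X, dtype=float)
    median = np.median(X, axis=0)
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    iqr = np.where(q75 - q25 == 0, 1.0, q75 - q25)  # guard constant columns
    return (X - median) / iqr

# Toy example: [credit_score, annual_income] with one income outlier
X = [[650, 40_000], [700, 55_000], [720, 60_000], [690, 5_000_000]]
print(robust_scale(X))
```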
4

Koay, Pei P. "(Re)presenting Human Population Database Projects: virtually designing and siting biomedical informatics ventures." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/27709.

Abstract:
This dissertation examines the politics of representation in biotechnosciences. Through web representations, I examine three emerging endeavors that propose to create large-scale human population genomic databases to study complex, common diseases and conditions. These projects were initiated in different nations (US, UK, and Iceland), created under different institutional configurations, and are at various stages of development. The websites, which are media technologies, do not simply reflect and promote these endeavors. Rather, they help shape these database projects, in which the science is uncertain and the technologies not yet built. Thus, they are constitutive technologies that affect the construction of these database projects. More needs to be done to explore how to interpret the 'virtual' realm and how it relates to the 'real' world and specific situations. By bringing hypertextuality into the analysis, I explore how knowledges, practices, and subjectivities are created. By adapting the methods of a number of science and technology studies (STS) authors, I develop a more dynamic lens through which to investigate web representations and 'emerging' biomedical projects. My concern, however, is not only with what represents what, but with how representations are constructed. The power of the latter derives from its invisibility. In re-conceptualizing representation and new media technologies, I show that these sites are techno-social spaces for creating knowledge, specific ways of seeing, and ways of practicing biomedicine today. The narrowing time/space between generating data, releasing information, and incorporating publics into these endeavors raises crucial issues as to how biomedicine is represented and how broader audiences are engaged. In the dominant discourses, these projects are all situated within biomedical, (post)genomic, and information revolutions. Here, they hang on the technological object, the database, with its ability to contain what we are coming to understand as life/genetic/bio information. Through the moves of both treating these databases as part of a complex system and investigating them through a lens of representation, I begin to include potential participants and broader audiences in the analysis. Informatic bodies, populations, and subjects are co-created at, by, and through these sites as the developing database projects and information are (re)presented.
Ph. D.
5

Samuel, Jarvie John. "Elicitation of Protein-Protein Interactions from Biomedical Literature Using Association Rule Discovery." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc30508/.

Abstract:
Extracting information from a stack of data is a tedious task, and the scenario is no different in proteomics. Volumes of research papers are published on the study of various proteins in several species, their interactions with other proteins, and the identification of proteins as possible biomarkers for disease. It is a challenging task for biologists to keep track of these developments manually by reading the literature. Several tools have been developed by computational linguists to assist in the identification, extraction, and hypothesis generation of proteins and protein-protein interactions from biomedical publications and protein databases. However, they are confronted with the challenges of term variation, term ambiguity, access only to abstracts, and inconsistencies in time-consuming manual curation of protein and protein-protein interaction repositories. This work attempts to attenuate these challenges by extracting protein-protein interactions in humans and eliciting possible interactions using association rule mining on full text, abstracts, and figure captions available from publicly accessible biomedical literature databases. Two such databases are used in this study: the Directory of Open Access Journals (DOAJ) and PubMed Central (PMC). A corpus is built from articles retrieved with search terms. A dataset of more than 38,000 protein-protein interactions from the Human Protein Reference Database (HPRD) is cross-referenced to validate discovered interacting pairs. An optimally sized set of possible binary protein-protein interactions is generated and made available for clinical or biological validation. A significant change in the number of new associations was found by altering the thresholds for the support and confidence metrics. This study reduces the burden on biologists of keeping pace with protein-protein interaction discovery by manually reading the literature and of validating each and every possible interaction.
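As a minimal sketch of the support/confidence machinery the abstract refers to (not the author's implementation), the toy miner below counts protein co-mentions per document and keeps rules that clear both thresholds. The protein names and thresholds are illustrative only.

```python
from itertools import combinations
from collections import Counter

def mine_pairs(docs, min_support=0.02, min_confidence=0.5):
    """Toy association-rule miner over protein mentions.

    docs: list of sets, each the proteins mentioned in one document.
    Returns rules (A -> B) whose support and confidence clear the
    thresholds; lowering the thresholds admits more candidate
    interactions, mirroring the sensitivity the study reports when
    thresholds are altered.
    """
    n = len(docs)
    item_counts = Counter()
    pair_counts = Counter()
    for proteins in docs:
        item_counts.update(proteins)
        pair_counts.update(combinations(sorted(proteins), 2))
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = c / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules

docs = [{"TP53", "MDM2"}, {"TP53", "MDM2", "EGFR"}, {"EGFR", "GRB2"}]
for lhs, rhs, s, c in mine_pairs(docs, 0.3, 0.6):
    print(f"{lhs} -> {rhs}: support={s:.2f} confidence={c:.2f}")
```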
6

Radovanovic, Aleksandar. "Concept Based Knowledge Discovery from Biomedical Literature." Thesis, Online access, 2009. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_9861_1272229462.pdf.

7

Milosevic, Nikola. "A multi-layered approach to information extraction from tables in biomedical documents." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/a-multilayered-approach-to-information-extraction-from-tables-in-biomedical-documents(c2edce9c-ae7f-48fa-81c2-14d4bb87423e).html.

Abstract:
The quantity of literature in the biomedical domain is growing exponentially, and it is becoming impossible for researchers to cope with this ever-increasing amount of information. Text mining provides methods that can improve access to information of interest through information retrieval, information extraction and question answering. However, most of these systems focus on information presented in the main body of text while ignoring other parts of the document, such as tables and figures. Tables are a potentially important component of research presentation, as authors often include more detailed information in tables than in the textual sections of a document. Tables allow the presentation of large amounts of information in relatively limited space, owing to their structural flexibility and ability to present multi-dimensional information. Table processing poses specific challenges that table mining systems need to take into account, including the variety of visual and semantic structures in tables, the variety of information presentation formats, and the dense content of table cells. The work presented in this thesis examines a multi-layered approach to information extraction from tables in biomedical documents. In this thesis we propose a representation model of tables and a method for table structure disentangling and information extraction. The model describes table structures and how they are read. We propose a method for information extraction that consists of: (1) table detection, (2) functional analysis, (3) structural analysis, (4) semantic tagging, (5) pragmatic analysis, (6) cell selection and (7) syntactic processing and extraction. In order to validate our approach, show its potential and identify remaining challenges, we applied our methodology to two case studies. The aim of the first case study was to extract baseline characteristics of clinical trials (number of patients, age, gender distribution, etc.) from tables. The second case study explored how the methodology can be applied to relationship extraction, examining the extraction of drug-drug interactions. Our method performed functional analysis with a precision of 0.9425, recall of 0.9428 and F1-score of 0.9426. Relationships between cells were recognized with a precision of 0.9238, recall of 0.9744 and F1-score of 0.9484. The methodology's performance is state-of-the-art in table information extraction, recording F1-scores of 0.82-0.93 for demographic data, adverse event and drug-drug interaction extraction, depending on the complexity of the task and the available semantic resources. The presented methodology demonstrates that information can be efficiently extracted from tables in the biomedical literature. Information extraction from tables can be important for enhancing data curation, information retrieval, question answering and decision support systems with information from tables that cannot be found in other parts of the document.
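For reference, the F1-scores quoted above are the harmonic mean of precision (P) and recall (R); the functional analysis figures work out as:

F1 = 2PR / (P + R) = 2(0.9425)(0.9428) / (0.9425 + 0.9428) ≈ 0.9426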
8

Raje, Satyajeet. "ResearchIQ: An End-To-End Semantic Knowledge Platform For Resource Discovery in Biomedical Research." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354657305.

9

Templeton, James Robert. "Trust and Trustworthiness: A Framework for Successful Design of Telemedicine." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/321.

Abstract:
Trust and its antecedents have been demonstrated to be a barrier to the successful adoption of numerous fields of technology, most notably e-commerce, and may be a key factor in the lack of adoption or adaptation in the field of telemedicine. In the medical arena, trust is often formed through the relationship cultivated over time between clinician and patient. Trust and interpersonal relationships may also play a significant role in the adoption of telemedicine. The idea of telemedicine has been explored for nearly 30 years in one form or another, yet despite grandiose promises of how it will someday significantly improve the healthcare system, the field continues to lag behind other areas of technology by 10 to 15 years. The reasons for the lack of adoption may be many, given the barriers observed by other researchers with regard to trust and trustworthiness. This study examined the role of trust from various aspects within telemedicine, with particular emphasis on the role trust plays in the adoption and adaptation of a telemedicine system. Simulations examined the role of trust in the treatment and management of diabetes mellitus (a common illness) in order to assess the impact and role of trust components. Surveys of the subjects were conducted to capture the trust dynamics, and a framework for the successful implementation of telemedicine was developed using trust and trustworthiness as a foundation. Results indicated that certain attributes do influence the level of trust in the system. The framework developed demonstrated that medical content, disease state management, perceived patient outcomes, and design all had significant impact on trust in the system.
10

Adejare, Adeboye A. Jr. "Equiformatics: Informatics Methods and Tools to Investigate and Address Health Disparities and Inequities." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623164833455566.

11

Lei, Xin. "Analyzing “Design + Medical” Collaboration Using Participatory Action Research (PAR): A Case Study of the Oxygen Saturation Data Display Project at Cincinnati Children’s Hospital Medical Center." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427983695.

12

Yvanoff, Marie. "LC sensor for biological tissue characterization." Online version of thesis, 2008. http://hdl.handle.net/1850/8044.

13

Rahimi, Bahol. "Implementation of Health Information Systems." Licentiate thesis, Linköping University, MDA - Human Computer Interfaces, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15677.

Abstract:

Healthcare organizations now consider increased efficiency, reduced costs, improved patient care and quality of services, and safety when they are planning to implement new information and communication technology (ICT) based applications. However, in spite of enormous investment in health information systems (HIS), no convincing evidence of the overall benefits of HISs yet exists. The publication of studies that capture the effects of the implementation and use of ICT-based applications in healthcare may contribute to the emergence of an evidence-based health informatics, which can serve as a platform for decisions made by policy makers, executives, and clinicians. Health informatics needs further studies identifying the factors affecting successful HIS implementation and capturing the effects of HIS implementation. The purpose of the work presented in this thesis is to increase the available knowledge about the impact of the implementation and use of HISs in healthcare organizations. All the studies included in this thesis used qualitative research methods; a case study design and literature review were performed to collect data.

This thesis's results highlight an increasing need to share knowledge, find methods to evaluate the impact of investments, and formulate indicators for success. It makes suggestions for developing or extending evaluation methods that can be applied to this area with a multi-actor perspective, in order to understand the effects, consequences, and prerequisites for the successful implementation and use of IT in healthcare. The results also propose that HISs, particularly integrated computer-based patient records (ICPR), be introduced to fulfill a large number of organizational, individual-based, and socio-technical goals at different levels. It is therefore necessary to link the goals that HIS systems are to fulfill to short-term, middle-term, and long-term strategic goals. Another suggestion is that implementers and vendors should pay more attention to what has been published in the area to avoid future failures.

This thesis's findings outline an updated structure for implementation planning. When implementing HISs in hospital and primary-care environments, this thesis suggests taking into consideration such strategic actions as management involvement and resource allocation, such tactical actions as integrating HIS with healthcare workflow, and such operational actions as user involvement, establishing compatibility between software and hardware, and education and training.

14

Wu, Tsung-Lin. "Classification models for disease diagnosis and outcome analysis." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44918.

Abstract:
In this dissertation we study the feature selection and classification problems and apply our methods to real-world medical and biological data sets for disease diagnosis. Classification is an important problem in disease diagnosis for distinguishing patients from the normal population. DAMIP (discriminant analysis via mixed integer programming) was shown to be a good classification model: it can directly handle multigroup problems, enforce misclassification limits, and provide a reserved-judgment region. However, DAMIP is NP-hard and presents computational challenges. Feature selection is important in classification to improve prediction performance, prevent over-fitting, and facilitate data understanding; however, this combinatorial problem becomes intractable when the number of features is large. In this dissertation, we propose a modified particle swarm optimization (PSO), a heuristic method, to solve the feature selection problem, and we study its parameter selection in our applications. We derive theory and exact algorithms to solve the two-group DAMIP in polynomial time, and we propose a heuristic algorithm to solve the multigroup DAMIP. Computational studies on simulated data and on data from the UCI machine learning repository show that the proposed algorithm performs very well. The polynomial solution time of the heuristic allows us to solve DAMIP repeatedly within the feature selection procedure. We apply the PSO/DAMIP classification framework to several real-life medical and biological prediction problems. (1) Alzheimer's disease: We use data from several neuropsychological tests to discriminate subjects with Alzheimer's disease, subjects with mild cognitive impairment, and control groups. (2) Cardiovascular disease: We use traditional risk factors and novel oxidative stress biomarkers to predict subjects at high or low risk of cardiovascular disease, where risk is measured by carotid intima-media thickness and/or flow-mediated vasodilation. (3) Sulfur amino acid (SAA) intake: We use 1H NMR spectral data of human plasma to classify plasma samples obtained under low or high SAA intake, which shows that our method is useful for metabolomics studies. (4) CpG islands for lung cancer: We identify a large number of sequence patterns (on the order of millions), search candidate patterns from DNA sequences in CpG islands, and look for patterns that discriminate methylation-prone from methylation-resistant (or, in addition, methylation-sporadic) sequences, which relate to early lung cancer prediction.
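DAMIP itself is a mixed integer program and is not reproduced here. The sketch below is only a generic binary particle swarm loop of the kind the abstract describes, with the subset-scoring function left as a plug-in (the dissertation scores subsets with DAMIP; any classifier's cross-validated accuracy could be substituted). All names, defaults, and the toy objective are assumptions.

```python
import numpy as np

def pso_feature_selection(score_fn, n_features, n_particles=20, iters=50,
                          w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic binary PSO for feature selection. score_fn(mask) returns
    the quality of a candidate feature subset (0/1 vector)."""
    rng = np.random.default_rng(seed)
    X = (rng.random((n_particles, n_features)) < 0.5).astype(float)  # masks
    V = rng.normal(0.0, 1.0, (n_particles, n_features))              # velocities
    pbest = X.copy()
    pbest_score = np.array([score_fn(x) for x in X])
    gbest = pbest[pbest_score.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        # Pull each particle toward its own best and the swarm's best
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        # Sigmoid of velocity gives the probability each bit is set
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
        scores = np.array([score_fn(x) for x in X])
        better = scores > pbest_score
        pbest[better], pbest_score[better] = X[better], scores[better]
        gbest = pbest[pbest_score.argmax()].copy()
    return gbest.astype(bool), pbest_score.max()

# Toy objective: reward masks matching the first three features exactly
target = np.zeros(10, dtype=bool)
target[:3] = True
best, score = pso_feature_selection(
    lambda m: -np.sum(m.astype(bool) ^ target), n_features=10)
print(best.astype(int), score)
```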
15

Tucker, Jennifer. "Motivating Subjects: Data Sharing in Cancer Research." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29022.

Abstract:
This dissertation explores motivation in decision-making and action in science and technology, through the lens of a case study: scientific data sharing in cancer research. The research begins with the premise that motivation and emotion are key elements of what it means to be human and, consequently, are important variables in how individuals make decisions and take action. At the same time, institutional controls and social messaging send a variety of signals intended to motivate specific actions and behaviors. Understanding the interplay between personal motives and social influences may point to strategies that better align individual and social perceptions and discourse. To explore these dynamics, this research centers on a large-scale cancer research program led by the National Institutes of Health's National Cancer Institute. The goal of the program is to encourage interoperability and data sharing between diverse and highly autonomous cancer centers across the U.S. Housed in an organization focused on biomedical informatics, the program has a technologically-focused mission: to facilitate institutional data sharing to connect the cancer research enterprise. This focus contrasts with the more relationship-based, point-to-point data sharing currently reported by researchers as the norm. Researchers are motivated to share data with others under specific conditions: when there is a foundation of trust with the person or community being shared with; when the perceived reward of sharing is well-defined and of value to the person sharing; and when the perceived risk or cost is lower than the benefit received. Without these conditions, there are often determined to be insufficient incentives and rewards for sharing. Data sharing is both a personal decision and a social-level problem. Data are both subjective and personal; they are often an extension of a researcher's identity and serve as a measure of his or her value and capability. In the search for standards and interoperable data sets, institutional and technologically-mediated forms of data sharing are perceived to ignore the subjective and local knowledge embodied in the data being shared. To explore these dimensions, this study considers the technological, economic, legal, and personal sides of data sharing, and applies two conceptual frameworks to evaluate alternatives for action.
Ph. D.
16

Choi, Ickwon. "Computational Modeling for Censored Time to Event Data Using Data Integration in Biomedical Research." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307969890.

17

Santamaria, Suzanne Lamar. "Development of an ontology of animals in context within the OBO Foundry framework from a SNOMED-CT extension and subset." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/32400.

Abstract:
Animal classification needs vary by use and application. The Linnaean taxonomy is an important animal classification scheme but does not portray key animal identifying information such as sex, age group, physiologic stage, living environment, and role in production systems such as farms. Ontologies are created and used for defining, organizing and classifying information in a domain to enable learning and sharing of information. This work develops an ontology of animal classes that forms the basis for communicating animal identifying information among animal managers, medical professionals caring for animals, and biomedical researchers in disciplines as diverse as wildlife ecology and dairy science. The Animals in Context Ontology (ACO) was created from an extension and subset of the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT). The principles of the Open Biological and Biomedical Ontologies (OBO) Foundry were followed, and freely available tools were used. ACO includes normal developmental and physiologic animal classes as well as animal classes where humans have assigned the animal's role. ACO is interoperable with and includes classes from other OBO Foundry ontologies, such as the Gene Ontology (GO). Meeting many of the OBO Foundry principles was straightforward, but difficulties were encountered with missing and problematic content in some of the OBO ontologies; additions and corrections were submitted to four ontologies. Some information in ACO could not be represented formally because of inconsistency in husbandry practices. ACO classes are of interest to science, medicine and agriculture, and can connect information between animal and human systems to enable knowledge discovery.
Master of Science
18

Zink, Janet A. "Reducing Sepsis Mortality: A Cloud-Based Alert Approach." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5697.

Abstract:
The aim of this study is to examine the impact of a cloud-based CDS alerting system for SIRS, a precursor to sepsis, and sepsis itself, on adult patient and process outcomes at VCU Health System. The two main hypotheses are: 1) the implementation of cloud-based SIRS and sepsis alerts will lead to lower sepsis-related mortality and lower average length of stay, and 2) the implementation of cloud-based SIRS and sepsis alerts will lead to more frequent ordering of the Sepsis PowerPlan and more recording of sepsis diagnoses. To measure these outcomes, a pre-post study was conducted. A pre-implementation group diagnosed with sepsis within the year leading up to the alert intervention consisted of 1,551 unique inpatient visits, and the three-year post-implementation sample size was 9,711 visits, for a total cohort of 11,262 visits. Logistic regression and multiple linear regression were used to test the hypotheses. Study results showed that sepsis-related mortality was slightly higher after the implementation of SIRS alerts, but the presence of sepsis alerts did not have a significant relationship to mortality. The average length of stay and the total number of recorded sepsis diagnoses were higher after the implementation of both SIRS and sepsis alerts, while ordering of the Sepsis Initial Resuscitation PowerPlan was lower. There is preliminary evidence from this study that more sepsis diagnoses are made as a result of alert adoption, suggesting that clinicians can consider the implementation of these alerts in order to capture a higher number of sepsis diagnoses.
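The study's models are not published in this abstract; as a hedged sketch of the pre-post logistic regression it describes, the snippet below regresses mortality on a post-implementation indicator with an age adjustment. The dataframe, column names, and values are invented for illustration, not study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical visit-level data; columns are illustrative only.
df = pd.DataFrame({
    "died":       [0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    "post_alert": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],  # 1 = after go-live
    "age":        [61, 76, 72, 48, 60, 66, 59, 75, 81, 51, 58, 69],
})

# Pre-post comparison: does the alert period predict sepsis mortality
# after adjusting for age? A non-significant coefficient on post_alert
# would mirror the study's finding that the alerts did not have a
# significant relationship to mortality.
model = smf.logit("died ~ post_alert + age", data=df).fit(disp=False)
print(model.summary())
```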
19

Lindblad, Erik. "Designing a framework for simulating radiology information systems." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15211.

Abstract:

In this thesis, a very flexible framework for simulating RIS is designed, to be used for Infobroker testing. Infobroker is an application developed by Mawell Svenska AB that connects RIS and PACS to achieve interoperability by enabling image and journal data transmission between radiology sites. To put the project in context, the field of medical informatics, RIS and PACS systems, and common protocols and standards are explored. A proof-of-concept implementation of the proposed design shows its potential and verifies that it works. The thesis concludes that a more specialized approach is preferred.

20

Ekman, Alexandra. "The use of the World Wide Web in epidemiological research /." Stockholm, 2006. http://diss.kib.ki.se/2006/91-7140-948-3/.

21

Gomez, William Ernesto Ardila. "Desenvolvimento de um sistema eletrônico para gestão de medicamentos não padronizados no Hospital das Clínicas da Faculdade de Medicina de Ribeirão Preto da Universidade de São Paulo (HCFMRP-USP)." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/17/17157/tde-06062017-165308/.

Abstract:
Introduction: Medicines are important elements of most therapeutic regimens covered by the Brazilian Unified Health System (Sistema Único de Saúde, SUS), representing a significant portion of its budget. The health complex linked to the Hospital das Clínicas serves the entire northwest region of the State of São Paulo and other parts of the state and country as a reference center for highly complex treatments, so high-cost medicines are frequently prescribed. An estimated 75.4% of the HCFMRP-USP complex's general budget for medicines is dedicated to the acquisition of non-standardized (special) medication, approximately R$ 46,313,170.08 (2015). Tools for controlling not only prescription but also acquisition and use are therefore fundamental to optimizing hospital management, moving from a reactive to a proactive stance in which decision-making is based on a history and on indicators of the cases presented in the complex. Objective: To develop a web-based electronic platform that enables management, understood as documentation, traceability and the interrelationship between the components of the decision chain for medicines considered special at the Hospital das Clínicas of the Ribeirão Preto Medical School of the University of São Paulo. Methods: The work comprised the development of a system whose main features are the monitoring, follow-up and control of the decision chain for medicines considered special by the institution. The system also supports decision-making, real-time indicators for administrative decisions, and the control required by each component of the high-cost medicine supply chain. Results: Broader and better communication between the pharmacy units, the requesting physician, the Department of Health Care (DAS), and the units of the HC-FMRP-USP complex that make up the special-medication supply decision chain; in addition, the system organizes a data history that can easily be turned into indicators for the care plan, ensuring the presence of a transforming agent. Conclusions: An electronic platform was developed that enables the storage, management and processing of data and information concerning the decision chain for the supply of non-standardized medicines.
22

Serique, Kleberson Junio do Amaral. "Anotação de imagens radiológicas usando a web semântica para colaboração científica e clínica." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-10092012-155249/.

Abstract:
This work is part of a larger project, the Annotation and Image Markup Project, which aims to create a medical knowledge base about radiological images for identifying, monitoring and reasoning about tumor lesions in cancer research and medical practice. The project is being developed in conjunction with the Radiological Sciences Laboratory at Stanford University. The specific problem addressed in this work is that most of the semantic information about radiological images is not captured and related to the images using terms from biomedical ontologies and standards, such as RadLex or DICOM, which makes it impossible for computers to evaluate the images automatically, to search hospital archives using semantic criteria, and so on. To address this issue, radiologists need an easy, intuitive and affordable computational solution for adding this semantic information. In this work, a web solution for adding such annotations was developed: the ePAD system. It allows the retrieval of medical images, such as those available in hospital picture archiving systems (PACS), the delineation of the contours of tumor lesions, the association of ontological terms with these contours, and the storage of these terms in a knowledge base. The main challenges of this work involved building intuitive interfaces based on Rich Internet Applications and operating them from a standard web browser. The first functional prototype of ePAD achieved its goal of demonstrating technical feasibility, performing the same basic annotation work as desktop applications such as OsiriX-iPad without the same overhead. It also showed its usefulness to the medical community, generating interest from potential early users.
23

Al, Mazari Ali. "Computational methods for the analysis of HIV drug resistance dynamics." Thesis, The University of Sydney, 2007. http://hdl.handle.net/2123/1907.

Abstract:
Despite the extensive quantitative and qualitative knowledge about therapeutic regimens and the molecular biology of HIV/AIDS, the eradication of HIV infection cannot be achieved with available antiretroviral regimens. HIV drug resistance remains the most challenging factor in the application of approved antiretroviral agents. Previous investigations and existing HIV/AIDS models and algorithms have not enabled the development of long-lasting and preventive drug agents. Therefore, the analysis of the dynamics of drug resistance and the development of sophisticated HIV/AIDS analytical algorithms and models are critical for the development of new, potent antiviral agents, and for the greater understanding of the evolutionary behaviours of HIV. This study presents novel computational methods for the analysis of drug-resistance dynamics, including: viral sequences, phenotypic resistance, immunological and virological responses and key clinical data, from HIV-infected patients at Royal Prince Alfred Hospital in Sydney. The lability of immunological and virological responses is analysed in the context of the evolution of antiretroviral drug-resistance mutations. A novel Bayesian algorithm is developed for the detection and classification of neutral and adaptive mutational patterns associated with HIV drug resistance. To simplify and provide insights into the multifactorial interactions between viral populations, immune-system cells, drug resistance and treatment parameters, a Bayesian graphical model of drug-resistance dynamics is developed; the model supports the exploration of the interdependent associations among these dynamics.
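The thesis's Bayesian algorithm is not specified in this abstract. Purely as a generic stand-in, the sketch below fits a naive Bayes classifier over binary mutation-presence features to separate treatment-associated (adaptive) from neutral patterns; the encoding, labels, and data are invented for illustration.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Rows: viral sequences encoded as presence/absence of resistance-
# associated mutations (columns, e.g. RT M184V, K103N, ...).
# A generic naive Bayes stand-in, not the thesis's algorithm.
X = np.array([[1, 0, 1, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 0, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = adaptive (treatment-associated)

clf = BernoulliNB().fit(X, y)
print(clf.predict_proba([[1, 0, 1, 0]]))  # posterior for a new pattern
```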
24

Campos, David Emmanuel Marques. "Mining biomedical information from scientific literature." Doctoral thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/12853.

Abstract:
Joint MAP-i doctoral programme
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that also holds in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the text mining task that aims to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget. We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributes to a more accurate updating of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
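Gimli's optimized, per-concept feature sets are not reproduced in this abstract; the sketch below only illustrates the kind of token-level linguistic features that biomedical named entity recognizers typically feed to a sequence model such as a CRF. The feature names are invented for illustration.

```python
import re

def token_features(tokens, i):
    """Token-level features of the kind biomedical NER systems feed to
    a sequence classifier; the exact feature set used in the thesis is
    optimized per concept type and is not reproduced here."""
    t = tokens[i]
    return {
        "lower": t.lower(),
        "prefix3": t[:3],
        "suffix3": t[-3:],
        "has_digit": bool(re.search(r"\d", t)),
        "has_hyphen": "-" in t,
        "mixed_case": t != t.lower() and t != t.upper(),  # e.g. 'IgG'
        # Word shape: digits -> 0, lowercase -> x, uppercase -> X
        "shape": re.sub(r"[A-Z]", "X",
                        re.sub(r"[a-z]", "x", re.sub(r"\d", "0", t))),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

tokens = "Mutations in BRCA1 impair DNA repair .".split()
print(token_features(tokens, 2))   # features for 'BRCA1'
```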
25

Krive, Jacob. "Effectiveness of Evidence-Based Computerized Physician Order Entry Medication Order Sets Measured by Health Outcomes." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/202.

Abstract:
In the past three years, evidence-based medicine has emerged as a powerful force in the effort to improve quality and health outcomes and to reduce the cost of care. Computerized physician order entry (CPOE) applications brought safety and efficiency features to clinical settings, including the ease of ordering medications via pre-defined sets. Beyond convenience, order sets promise standardized care through evidence-based practices built on the growing knowledge of clinical professionals, potentially achieving more consistent health outcomes and reducing the frequency of medical errors, adverse drug events, and unintended side effects during treatment. While order sets existed in paper form prior to the introduction of CPOE, their true potential was only unleashed with the support of clinical informatics, at those healthcare facilities that installed CPOE systems and reaped the rewards of standardized care. Despite ongoing utilization of order sets at facilities that implemented CPOE, there is a lack of quantitative evidence behind their benefits; comprehensive research into their impact requires a history of electronic medical records large enough to produce population samples that yield statistically significant results. The study, conducted at a large Midwest healthcare system consisting of several community and academic hospitals, was aimed at quantitatively analyzing the benefits of order sets applied to prevent venous thromboembolism (VTE) and treat pneumonia, congestive heart failure (CHF), and acute myocardial infarction (AMI), testing hospital mortality, readmission, complications, and length of stay (LOS) as health outcomes. Results indicated reduced acute VTE rates among non-surgical patients in the experimental group, while LOS and complication benefits were inconclusive. Pneumonia patients in the experimental group had lower mortality, readmission, LOS, and complication rates. CHF patients benefited from order sets in terms of mortality and LOS, while there was insufficient data to report results for readmissions and complications. Utilization of AMI order sets was insufficient to produce statistically significant results. The results will (1) empower health providers with evidence to justify implementation of order sets, given their effectiveness in driving improvements in health outcomes and efficiency of care, and (2) provide researchers with new ideas for conducting health outcomes research.
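As a hedged illustration of the kind of pre-post outcome comparison such a study rests on (not the dissertation's actual analysis or data), a two-proportion z-test on made-up mortality counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative pre/post mortality comparison for an order-set rollout;
# the counts below are fabricated, not study data.
deaths = [120, 95]          # [before order sets, after order sets]
visits = [1500, 1500]
z, p = proportions_ztest(deaths, visits)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p would indicate a real difference
```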
26

Botelho, Maria Lucia de Azevedo. "Concepção, desenvolvimento e avaliação de um sistema de ensino virtual." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261133.

Abstract:
Advisor: Saide Jorge Calil
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
A detailed search for computer programs applied to teaching was performed to identify the resources available on the market. In parallel, a bibliographic survey was carried out to identify the needs of the academic community in terms of computational resources. It was found that, although considered important, there were few alternatives for conducting virtual classes that demanded little operational effort and only simple infrastructure. The goal of this work was therefore defined as developing and assessing a system - VirtuAula - that enables two types of virtual classes, on-line and off-line, requires little informatics experience from its users, and suits the resources most commonly available at universities. Basic functional requirements were defined to make the system maximally easy to operate, for example a standard interface (identical appearance across all operations), help available at all levels, and the ability to reuse existing slide shows and texts. Two work platforms were built: one for the teacher, to create, modify and conduct a class, and one for the student, to attend it. The system was assessed through two test plans, using standardized instruments such as Software Quality Evaluation Criteria, to which grades were assigned, and evaluation questionnaires filled in by the participating teachers and students. The off-line classes received maximum grades on all criteria, and the on-line classes averaged above 1.78 (on a 0-2 scale). All teachers answered that they enjoyed conducting classes with the system; 75% said they would like to use it in their work and 25% said they might use it. Among the students, only 2.33% answered that they did not like the virtual class, and 4.65% said they would not like to have more classes conducted with the system in their course. A discussion of the reasons for the weaker performance of the on-line classes identified, as the main cause, the difficulty of running this type of event over the commercial Internet, which suffers from heavy data traffic. Among the conclusions, VirtuAula stands out as an interesting alternative for Brazilian public educational institutions: its application is original, with no national equivalent combining all its features, and it has low operating cost, with no charges or legal risk, since free licensing of its use is possible.
Doctorate
Biomedical Engineering
Doctor of Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
28

GUDIVADA, RANGA CHANDRA. "DISCOVERY AND PRIORITIZATION OF BIOLOGICAL ENTITIES UNDERLYING COMPLEX DISORDERS BY PHENOME-GENOME NETWORK INTEGRATION." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1195161740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Rios, Anthony. "Deep Neural Networks for Multi-Label Text Classification: Application to Coding Electronic Medical Records." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/71.

Full text
Abstract:
Coding Electronic Medical Records (EMRs) with diagnosis and procedure codes is an essential task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors can increase patients' financial burden and lead to misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. Therefore, it is necessary to develop automated diagnosis and procedure code recommendation methods that can be used by professional medical coders. The main difficulty with developing automated EMR coding methods is the nature of the label space. The standardized vocabularies used for medical coding contain over 10,000 codes. The label space is large, and the label distribution is extremely unbalanced - most codes occur very infrequently, with a few codes occurring several orders of magnitude more often than others, and some codes never appearing in the training dataset at all. In this work, we present three methods to handle the large, unbalanced label space. First, we study how to augment EMR training data with biomedical data (research articles indexed on PubMed) to improve the performance of standard neural networks for text classification. PubMed indexes more than 23 million citations, many of which contain relevant information about diagnosis and procedure codes; we present a novel method of incorporating this unstructured data using transfer learning. Second, we combine ideas from metric learning with recent advances in neural networks to form a novel neural architecture that better handles infrequent codes. And third, we present new methods to predict codes that have never appeared in the training dataset. Overall, our contributions constitute advances in neural multi-label text classification with potential consequences for improving EMR coding.
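As a rough sketch of the multi-label setup described above (not the author's architecture), the PyTorch fragment below builds a bag-of-words encoder with one logit per code and trains it with binary cross-entropy; the vocabulary size, label count, and sample batch are illustrative.

```python
# Minimal multi-label text classifier over a large code space (sketch).
import torch
import torch.nn as nn

VOCAB_SIZE, NUM_CODES = 50_000, 10_000  # medical code spaces exceed 10k labels

class MultiLabelCoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, 128)  # mean of token embeddings
        self.out = nn.Linear(128, NUM_CODES)           # one logit per code

    def forward(self, token_ids, offsets):
        return self.out(self.embed(token_ids, offsets))

model = MultiLabelCoder()
loss_fn = nn.BCEWithLogitsLoss()  # codes are predicted independently

# One hypothetical batch: two documents packed into a flat token list.
tokens = torch.tensor([1, 5, 42, 7, 9])
offsets = torch.tensor([0, 3])        # doc 1 = tokens[0:3], doc 2 = tokens[3:]
labels = torch.zeros(2, NUM_CODES)
labels[0, 17] = 1.0                   # hypothetical assigned codes
labels[1, 256] = 1.0

loss = loss_fn(model(tokens, offsets), labels)
loss.backward()  # one training step; optimizer omitted for brevity
```

The sigmoid-per-label formulation is what makes the problem multi-label; the thesis's contributions (transfer learning from PubMed, metric learning, zero-shot codes) would sit on top of a backbone of this general shape.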
APA, Harvard, Vancouver, ISO, and other styles
30

Cabral, Braulio J. "Exploring Factors Influencing Information Technology Portfolio Selection Process in Government-Funded Bioinformatics Projects." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2957.

Full text
Abstract:
In 2012, the National Cancer Institute's (NCI) Board of Scientific Advisors (BSA) conducted a review of the Center for Biomedical Informatics and Information Technology's (CBIIT) bioinformatics program. The BSA suggested that the lack of a formal project selection process made it difficult to determine the alignment of projects with the mission of the organization. The problem addressed by this study was that CBIIT did not have an in-depth understanding of the project selection process and the factors influencing the process. The purpose of this study was to understand the project selection process at CBIIT. The research methodology was an exploratory case study. The data collection process included a phenomenological interview of 25 managers from program management, engineering, scientific computing, informatics program, and health sciences. The data analysis consisted of coding for themes, sensitizing, and heuristic coding, supported by a theoretical framework that included the technology acceptance model, the program evaluation theory, and decision theory. The analysis revealed the need for formal project portfolio governance, the lack of a predefined project selection process, and that the decision-making process was circumstantial. The study also revealed six major themes that affected the decision-making process: the CBIIT mission, the organizational culture, leadership, governance, funding, and organizational change. Finally, the study fills the gap in the literature regarding the project selection process for government-funded initiatives in information technologies. This study may contribute to positive social change by improving the project selection process at CBIIT, allowing for the effective use of public funds for cancer informatics researchers.
APA, Harvard, Vancouver, ISO, and other styles
31

Vlachos, Andreas. "Semi-supervised learning for biomedical information extraction." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Nelson, Justin. "The Development of a Human Operator Informatic Model (HOIM) incorporating the Effects of Non-Invasive Brain Stimulation on Information Processing while performing Multi-Attribute Task Battery (MATB)." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1461066834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Jilkine, Petr. "Application of information fusion methods to biomedical data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23615.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Guo, Yufan. "Automatic analysis of information structure in biomedical literature." Thesis, University of Cambridge, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Thomas, Philippe. "Robust relationship extraction in the biomedical domain." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17372.

Full text
Abstract:
For several centuries, a great wealth of human knowledge has been communicated in natural language, often recorded in written documents. In the life sciences, an exponential increase in scientific articles has been observed, hindering the effective and fast reconciliation of previous findings into current research projects. This thesis studies the automatic extraction of relationships between named entities, focusing on increasing the robustness of relationship extraction. First, we evaluate the use of ensemble methods to improve performance, using data provided by the drug-drug-interaction challenge 2013. Ensemble methods aggregate several classifiers into one model, increasing robustness by reducing the risk of choosing an inappropriate single classifier. Second, this work discusses the problem of applying relationship extraction to documents with unknown text characteristics. Robustness of a text mining component is assessed by cross-learning, where a model is evaluated on a corpus different from the training corpus. We apply self-training, a semi-supervised learning technique, to increase cross-learning performance, and show that it is more robust than a classifier trained on manually annotated text only. Third, we investigate the use of distant supervision to overcome the need for manually annotated training instances. Corpora derived by distant supervision are inherently noisy and thus benefit from robust relationship extraction methods. We compare two different methods and show that both approaches achieve performance similar to fully supervised classifiers, evaluated in the cross-learning scenario. To facilitate the use of information extraction results, including those developed within this thesis, we develop the semantic search engine GeneView. We discuss the computational requirements of building this resource and present some applications utilizing the data extracted by different text-mining components.
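The self-training technique mentioned above can be sketched in a short loop; the scikit-learn version below is a toy under the assumption of numpy feature matrices and integer labels, not the thesis implementation.

```python
# Self-training sketch: train, pseudo-label the most confident unlabeled
# instances, fold them into the training set, and retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=3, k=100):
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        top = np.argsort(proba.max(axis=1))[-k:]                 # most confident rows
        X = np.vstack([X, pool[top]])
        y = np.concatenate([y, clf.classes_[proba[top].argmax(axis=1)]])  # pseudo-labels
        pool = np.delete(pool, top, axis=0)
        clf = LogisticRegression(max_iter=1000).fit(X, y)        # retrain
    return clf
```

In a cross-learning evaluation, `X_unlab` would come from the target corpus, which is how unlabeled target-domain text can make the classifier more robust to unseen text characteristics.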
APA, Harvard, Vancouver, ISO, and other styles
36

Sahoo, Satya Sanket. "Semantic Provenance: Modeling, Querying, and Application in Scientific Discovery." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1282847715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Koroleva, Anna. "Assisted authoring for avoiding inadequate claims in scientific reporting." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS021.

Full text
Abstract:
In this thesis, we report on our work on developing Natural Language Processing (NLP) algorithms to aid readers and authors of scientific (biomedical) articles in detecting spin (distorted presentation of research results). Our algorithm focuses on spin in abstracts of articles reporting Randomized Controlled Trials (RCTs). We studied the phenomenon of spin from the linguistic point of view to create a description of its textual features. We annotated a set of corpora for the key tasks of our spin detection pipeline: extraction of declared (primary) and reported outcomes, assessment of semantic similarity of pairs of trial outcomes, and extraction of relations between reported outcomes and their statistical significance levels. Besides, we annotated two smaller corpora for identification of statements of similarity of treatments and of within-group comparisons. We developed and tested a number of rule-based and machine learning algorithms for the key tasks of spin detection (outcome extraction, outcome similarity assessment, and outcome-significance relation extraction). The best performance was shown by a deep learning approach that consists in fine-tuning deep pre-trained domain-specific language representations (BioBERT and SciBERT models) for our downstream tasks. This approach was implemented in our spin detection prototype system, called DeSpin, released as open source code. Our prototype includes some other important algorithms, such as text structure analysis (identification of the abstract of an article, identification of sections within the abstract), detection of statements of similarity of treatments and of within-group comparisons, and extraction of data from trial registries. Identification of abstract sections is performed with a deep learning approach using the fine-tuned BioBERT model, while other tasks are performed using a rule-based approach. Our prototype system includes a simple annotation and visualization interface.
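A minimal sketch of the fine-tuning step named above, assuming the Hugging Face transformers library and the public dmis-lab/biobert-v1.1 checkpoint; the binary sentence task, example text, and label are hypothetical stand-ins for the thesis's outcome-extraction tasks.

```python
# Fine-tuning a BioBERT checkpoint for a binary sentence task (sketch).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "dmis-lab/biobert-v1.1"  # assumed public BioBERT checkpoint id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

batch = tokenizer(["The primary outcome was overall survival."],
                  return_tensors="pt", padding=True, truncation=True)
labels = torch.tensor([1])                  # e.g. 1 = sentence reports an outcome
loss = model(**batch, labels=labels).loss   # cross-entropy over the 2 classes
loss.backward()                             # one update step; optimizer omitted
```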
APA, Harvard, Vancouver, ISO, and other styles
38

Johannsson, Dagur Valberg. "Biomedical Information Retrieval based on Document-Level Term Boosting." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8981.

Full text
Abstract:

Information retrieval over biomedical information poses several problems: common retrieval methods tend to fall short when searching in this domain. With the ever-increasing amount of information available, researchers widely agree that means to precisely retrieve needed information are vital for making use of all available knowledge. In an effort to increase retrieval precision for biomedical information, we created an approach that gives every term in a document a context weight based on domain-specific data for that context. We include these context weights in document ranking by combining them with existing ranking models, yielding document-level term boosting, where the context of the queried terms within a document positively or negatively affects the document's ranking score. We tested the approach by implementing a full search-engine prototype and evaluating it on a document collection from the biomedical domain. Our work shows that this type of score boosting has little effect on overall retrieval precision, and we conclude that the approach, as implemented in our prototype, is not necessarily a good means of increasing precision in biomedical retrieval systems.
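The score-combination idea can be made concrete with a small sketch; the blending factor alpha, the weight range, and the averaging scheme below are assumptions rather than the prototype's documented formula.

```python
# Document-level term boosting: blend a base ranking score with the
# document-specific context weights of the queried terms (sketch).
def boosted_score(base_score, query_terms, context_weights, alpha=0.2):
    """context_weights: term -> weight in [-1, 1] for this document."""
    if not query_terms:
        return base_score
    boost = sum(context_weights.get(t, 0.0) for t in query_terms) / len(query_terms)
    return base_score * (1.0 + alpha * boost)

# Hypothetical usage: a BM25 score of 7.4, one on-context and one off-context term.
print(boosted_score(7.4, ["p53", "apoptosis"], {"p53": 0.8, "apoptosis": -0.1}))
```

A multiplicative blend like this leaves the base ranking dominant, which is consistent with the reported finding that the boosting had little effect on overall precision.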

APA, Harvard, Vancouver, ISO, and other styles
39

Canevet, Catherine. "Automating the gathering of relevant information from biomedical text." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3849.

Full text
Abstract:
More and more, database curators rely on literature-mining techniques to help them gather and make use of the knowledge encoded in text documents. This thesis investigates how an assisted annotation process can help, and explores the hypothesis that it is only with respect to full-text publications that a system can tell relevant and irrelevant facts apart by studying their frequency. A semi-automatic annotation process was developed for a particular database - the Nuclear Protein Database (NPD) - based on a set of full-text articles newly annotated with regard to subnuclear protein localisation, along with eight lexicons. The annotation process is carried out online, retrieving relevant documents (abstracts and full-text papers) and highlighting sentences of interest in them. The process also offers a summary table of the facts found, clustered by type of information. Each method involved in each step of the tool is evaluated using cross-validation results on the training data as well as test-set results. The performance of the final tool, called the "NPD Curator System Interface", is estimated empirically in an experiment in which the NPD curator updates the database with pieces of information found relevant in 31 publications using the interface. A final experiment complements our main methodology by showing its extensibility to retrieving information on protein function rather than localisation. I argue that the general methods, the results they produced and the discussions they engendered are useful for any subsequent attempt to generate semi-automatic database annotation processes. The annotated corpora, gazetteers, methods and tool are fully available on request from the author (catherine.canevet@bbsrc.ac.uk).
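A toy rendering of the frequency hypothesis follows, under the assumption that relevance correlates with repetition across a full text; the "extraction" step is faked with fixed keyword sets and is not the NPD Curator System's method.

```python
# Count how often a candidate fact (protein, compartment) recurs across
# sections of a full-text article; repeated facts are kept as relevant.
from collections import Counter

sections = [
    "NPM1 localises to the nucleolus. NPM1 shuttles constantly.",
    "We confirm that NPM1 localises to the nucleolus under stress.",
    "GAPDH was used as a loading control.",
]
PROTEINS = {"NPM1", "GAPDH"}
COMPARTMENTS = {"nucleolus", "cytoplasm"}

facts = Counter()
for section in sections:
    words = {w.strip(".,") for w in section.split()}
    for p in PROTEINS & words:
        for c in COMPARTMENTS & {w.lower() for w in words}:
            facts[(p, c)] += 1

relevant = [fact for fact, n in facts.items() if n >= 2]  # seen in 2+ sections
print(relevant)  # [('NPM1', 'nucleolus')]
```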
APA, Harvard, Vancouver, ISO, and other styles
40

Nunes, Tiago Santos Barata. "A sentence-based information retrieval system for biomedical corpora." Master's thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/12698.

Full text
Abstract:
Master's in Computer and Telematics Engineering
Modern advances in experimental methods and high-throughput technology in the biomedical domain are causing fast-paced growth in the volume of published scientific literature in the field. While a myriad of structured data repositories for biological knowledge have been sprouting over the last decades, Information Retrieval (IR) systems are increasingly replacing them. IR systems are easier to use due to their flexibility and ability to interpret user needs in the form of queries, typically formed by a few words. Traditional document retrieval systems return entire documents, which may require a lot of subsequent reading to find the specific information sought, frequently contained in a small passage of only a few sentences. Additionally, IR often fails to find what is wanted because the words used in the query are lexically different from, despite being semantically aligned with, the words used in relevant sources. This thesis focuses on the development of sentence-based information retrieval approaches that, for a given user query, allow seeking relevant sentences from scientific literature that answer the user's information need. The presented work is twofold. First, exploratory research experiments were conducted for the identification of features of informative sentences from biomedical texts; a supervised machine learning method was used for this purpose. Second, an information retrieval system for informative sentences was developed. It supports free-text and concept-based queries, search results are enriched with relevant concept annotations, and sentences can be ranked using multiple configurable strategies.
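A sentence-level retrieval loop in the spirit of the system described can be sketched with TF-IDF and cosine ranking; the vectorizer choice and the example sentences are assumptions, and the actual system also supports concept-based queries and several ranking strategies.

```python
# Index sentences rather than documents and rank them against a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "BRCA1 mutations increase the risk of breast cancer.",
    "The patient cohort was recruited between 2001 and 2005.",
    "Tamoxifen reduced recurrence in ER-positive tumours.",
]
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(sentences)

query_vec = vectorizer.transform(["breast cancer risk genes"])
scores = cosine_similarity(query_vec, matrix).ravel()
for score, sentence in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sentence}")
```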
APA, Harvard, Vancouver, ISO, and other styles
41

Lee, Lawrence Chet-Lun. "Text mining of point mutation information from biomedical literature." Diss., Search in ProQuest Dissertations & Theses. UC Only, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3339194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Leroy, Gondy, Hsinchun Chen, Jesse D. Martinez, Shauna Eggers, Ryan R. Falsey, Kerri L. Kislin, Zan Huang, et al. "Genescene: Biomedical Text And Data Mining." Wiley Periodicals, Inc, 2005. http://hdl.handle.net/10150/105791.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
To access the content of digital texts efficiently, it is necessary to provide more sophisticated access than keyword-based searching. Genescene provides biomedical researchers with research findings and background relations automatically extracted from text and experimental data, giving a more detailed overview of the available information. The extracted relations were evaluated by qualified researchers and are precise. A qualitative, ongoing evaluation of the current online interface indicates that this method of searching the literature is more useful and efficient than keyword-based searching.
APA, Harvard, Vancouver, ISO, and other styles
43

Hakenberg, Jörg. "Mining relations from the biomedical literature." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2010. http://dx.doi.org/10.18452/16073.

Full text
Abstract:
Text mining deals with the automated annotation of texts and the extraction of facts from textual data for subsequent analysis. Such texts range from short articles and abstracts to large documents, for instance web pages and scientific articles, but also include textual descriptions in otherwise structured databases. This thesis focuses on two key problems in biomedical text mining: relationship extraction from biomedical abstracts - in particular, protein-protein interactions - and a prerequisite step, named entity recognition, again focusing on proteins. The thesis presents goals, challenges, and typical approaches for each of the main building blocks in biomedical text mining, along with our own approaches to named entity recognition for proteins and relationship extraction for protein-protein interactions. For the first, we describe two methods, one set up as a classification task, the other based on dictionary matching. For relationship extraction, we develop a methodology to automatically annotate large amounts of unlabeled data for relations, and make use of such annotations in a pattern-matching strategy. This strategy first extracts similarities between sentences that describe relations, storing them as consensus patterns. We develop a sentence alignment approach that introduces multi-layer alignment, making use of multiple annotations per word. For the task of extracting protein-protein interactions, empirical results show that our methodology performs comparably to existing approaches that require a large amount of human intervention, either for annotation of data or creation of models.
APA, Harvard, Vancouver, ISO, and other styles
44

Reeve, Lawrence H. Han Hyoil. "Semantic annotation and summarization of biomedical text /." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/1779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Candito, Antonio. "Integrazione informatica dei sistemi di medicina nucleare nel sistema informativo ospedaliero." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/4055/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yoo, Illhoi Hu Xiaohua. "Semantic text mining and its application in biomedical domain /." Philadelphia, Pa. : Drexel University, 2006. http://dspace.library.drexel.edu/handle/1860%20/899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jimeno, Yepes Antonio José. "Ontology refinement for improved information retrieval in the biomedical domain." Doctoral thesis, Universitat Jaume I, 2009. http://hdl.handle.net/10803/384552.

Full text
Abstract:
This doctoral thesis focuses on the use of domain ontologies and their refinement for information retrieval. The selected domain is biomedicine, which offers an extensive collection of abstracts in the Medline database, as well as resources that support the creation of very large ontologies, such as MeSH or UMLS. The work also develops a query formulation model that relates a document model to an ontology within the language modeling framework. In addition, we developed an algorithm that improves the ontology for the information retrieval task using unstructured resources. The results show that ontology refinement applied to information retrieval improves performance, automatically identifying information not present in the ontology. We also found that the type of content relevant to a query depends on properties of the query type and the document collection. These results are consistent with existing findings in the information retrieval field.
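The refinement algorithm itself is not reproduced here, but the retrieval side of the approach, expanding a query with ontology knowledge, can be sketched; the tiny in-memory ontology below is a stand-in for MeSH/UMLS lookups.

```python
# Ontology-driven query expansion: add synonyms and narrower terms.
ontology = {
    "myocardial infarction": {
        "synonyms": ["heart attack", "MI"],
        "narrower": ["anterior myocardial infarction"],
    },
}

def expand_query(terms, onto):
    expanded = list(terms)
    for term in terms:
        entry = onto.get(term.lower(), {})
        expanded += entry.get("synonyms", []) + entry.get("narrower", [])
    return expanded

print(expand_query(["Myocardial infarction", "aspirin"], ontology))
# ['Myocardial infarction', 'aspirin', 'heart attack', 'MI',
#  'anterior myocardial infarction']
```

Refinement, in this framing, is the process of automatically adding such synonym and narrower-term entries from unstructured text when they improve retrieval.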
APA, Harvard, Vancouver, ISO, and other styles
48

Yu, Zhiguo. "Cooperative Semantic Information Processing for Literature-Based Biomedical Knowledge Discovery." UKnowledge, 2013. http://uknowledge.uky.edu/ece_etds/33.

Full text
Abstract:
Given that data is increasing exponentially every day, extracting and understanding the information, themes, and relationships in large collections of documents is increasingly important to researchers in many areas. In this work, we present a cooperative semantic information processing system to help biomedical researchers understand and discover knowledge in large numbers of titles and abstracts from PubMed query results. Our system is based on a prevalent technique, topic modeling, an unsupervised machine learning approach for discovering the set of semantic themes in a large set of documents. In addition, we apply a natural language processing technique to transform the "bag-of-words" assumption of topic models into a "bag-of-important-phrases" assumption, and build an interactive visualization tool using a modified, open-source Topic Browser. Finally, we conduct two experiments to evaluate the approach. The first evaluates whether the "bag-of-important-phrases" approach identifies semantic themes better than the standard "bag-of-words" approach; this is an empirical study in which human subjects evaluate the quality of the resulting topics using a standard "word intrusion test" to determine whether they can identify a word (or phrase) that does not belong in the topic. The second is a qualitative empirical study evaluating how well the system helps biomedical researchers explore a set of documents to discover previously hidden semantic themes and connections. The methodology for this study has been used successfully to evaluate other knowledge-discovery tools in biomedicine.
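A condensed sketch of that pipeline is shown below, approximating "important phrases" with bigrams rather than the thesis's NLP component, and using scikit-learn's LDA as the topic model.

```python
# Topic modeling over a bag of phrases instead of a bag of words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "gene expression profiling of breast cancer tumours",
    "breast cancer risk and gene expression signatures",
    "protein folding dynamics in molecular simulations",
]
# ngram_range=(2, 2) crudely approximates phrase extraction with bigrams.
vectorizer = CountVectorizer(ngram_range=(2, 2), stop_words="english")
counts = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-3:][::-1]      # three highest-weight phrases
    print(f"topic {k}:", [terms[i] for i in top])
```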
APA, Harvard, Vancouver, ISO, and other styles
49

Grother, Ethan Mark. "Mobile device reference apps to monitor and display biomedical information." Thesis, Kansas State University, 2017. http://hdl.handle.net/2097/35488.

Full text
Abstract:
Master of Science
Department of Electrical and Computer Engineering
Steven Warren
Smart phones and other mobile technologies can be used to collect and display physiological information from subjects in various environments - clinical or otherwise. This thesis highlights software app reference designs that allow a smart phone to receive, process, and display biomedical data. Two research projects, described below and in the thesis body, guided this development. Android Studio was chosen to develop the phone application, after exploring multiple development options (including a cross-platform development tool), because it reduced the development time and the number of required programming languages. The first project, supported by the Kansas State University Johnson Cancer Research Center (JCRC), required a mobile device software application that could determine the hemoglobin level of a blood sample based on the most prevalent color in an image acquired by a phone camera, where the image is the result of a chemical reaction between the blood sample and a reagent. To calculate the hemoglobin level, a circular region of interest is identified within the original image using image processing, and color information from that region of interest is input to a model that provides the hemoglobin level. The algorithm to identify the region of interest is promising but needs additional development to work properly at different image resolutions; the associated model also needs additional work, as described in the text. The second project, in collaboration with Heartspring, Wichita, KS, required a mobile application to display information from a sensor bed used to gather nighttime physiological data from severely disabled autistic children. In this case, a local data server broadcasts these data over a wireless network. The phone application gathers information about the bed over this wireless network and displays these data in a user-friendly manner. This approach works well when sending basic information but experiences challenges when sending images. Future work for both project applications includes error handling and user interface improvements. For the JCRC application, a better way to account for image resolution changes needs to be developed, in addition to a means to determine whether the region of interest is valid. For the Heartspring application, future work should include improving image transmission.
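The JCRC application's two steps, locating a circular region of interest and mapping its mean colour to a hemoglobin level, can be sketched with OpenCV; the Hough-transform parameters and the linear calibration coefficients below are invented placeholders, not the thesis's model.

```python
# Find a circular ROI in an assay photo and map its mean colour to a
# hemoglobin estimate (sketch with hypothetical calibration).
import cv2
import numpy as np

def hemoglobin_from_image(path):
    img = cv2.imread(path)
    if img is None:
        return None                              # file missing or unreadable
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=100, param1=100, param2=30)
    if circles is None:
        return None                              # no circular ROI detected
    x, y, radius = np.round(circles[0, 0]).astype(int)
    mask = np.zeros(gray.shape, np.uint8)
    cv2.circle(mask, (x, y), radius, 255, -1)    # filled circular ROI mask
    blue, green, red = cv2.mean(img, mask=mask)[:3]  # mean BGR inside ROI
    return 0.05 * red - 0.02 * green + 1.0       # hypothetical linear model

print(hemoglobin_from_image("assay_photo.jpg"))
```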
APA, Harvard, Vancouver, ISO, and other styles
50

Tan, He. "Aligning and Merging Biomedical Ontologies." Licentiate thesis, Linköping : Univ, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6201.

Full text
APA, Harvard, Vancouver, ISO, and other styles