Dissertations / Theses on the topic 'Biomedical Informatics'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Biomedical Informatics.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Moffitt, Richard Austin. "Quality control for translational biomedical informatics." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/34721.
Stokes, Todd Hamilton. "Development of a visualization and information management platform in translational biomedical informatics." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33967.
Cao, Xi Hang. "On Leveraging Representation Learning Techniques for Data Analytics in Biomedical Informatics." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/586006.
Ph.D.
Representation learning is ubiquitous in state-of-the-art machine learning workflows, including data exploration/visualization, data preprocessing, data model learning, and model interpretation. However, the majority of newly proposed representation learning methods are better suited to problems with large amounts of data, and applying them to problems with limited data may lead to unsatisfactory performance. There is therefore a need for representation learning methods tailored to "small data" problems, such as clinical and biomedical data analytics. In this dissertation, we describe our studies tackling challenging clinical and biomedical data analytics problems from four perspectives: data preprocessing, temporal data representation learning, output representation learning, and joint input-output representation learning. Data scaling is an important component of data preprocessing. The objective of data scaling is to scale/transform raw features into reasonable ranges so that each feature of an instance is exploited equally by the machine learning model. For example, in a credit fraud detection task, a model may use a person's credit score and annual income as features, but because the ranges of these two features differ, the model may weight one more heavily than the other. In this dissertation, I thoroughly introduce the data scaling problem and describe an approach that intrinsically handles outliers and leads to better model prediction performance. Learning new representations for data in unstandardized form is a common task in data analytics and data science applications. Usually, data come in tabular form: the data are represented by a table in which each row is the feature vector of an instance.
However, it is also common that data are not in this form; for example, texts, images, and video/audio records. In this dissertation, I describe the challenge of analyzing imperfect multivariate time series data in healthcare and biomedical research, and show that the proposed method can learn a powerful representation that handles various imperfections and improves prediction performance. Learning output representations is a new aspect of representation learning, and its applications have shown promising results in complex tasks, including computer vision and recommendation systems. The main objective of an output representation algorithm is to explore the relationships among the target variables so that a prediction model can efficiently exploit the similarities and potentially improve prediction performance. In this dissertation, I describe a learning framework that incorporates output representation learning into time-to-event estimation. In particular, the approach learns the model parameters and time vectors simultaneously. Experimental results show not only the effectiveness of this approach but also its interpretability, through visualizations of the time vectors in 2-D space. Learning the input (feature) representation, the output representation, and the predictive model are closely related tasks, so it is a natural extension of the state of the art to consider them together in a joint framework. In this dissertation, I describe a large-margin, ranking-based learning framework for time-to-event estimation with joint input embedding learning, output embedding learning, and model parameter learning. In the framework, I cast the functional learning problem as a kernel learning problem and, by adopting theories from multiple kernel learning, propose an efficient optimization algorithm. Empirical results also show its effectiveness on several benchmark datasets.
Temple University--Theses
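The data-scaling idea described in the abstract above can be illustrated with a short sketch. This is a generic, outlier-robust scaler based on the median and interquartile range; it is an illustration of the problem being described, an assumption on our part, not the dissertation's actual method:

```python
# Outlier-robust feature scaling: center each feature on its median and
# divide by its interquartile range (IQR), so a single extreme value
# cannot dominate the scaled range the way it would with min-max scaling.

def robust_scale(column):
    """Scale a list of numbers using median and IQR; returns a new list."""
    ordered = sorted(column)
    n = len(ordered)

    def quantile(q):
        # Linear interpolation between the two nearest order statistics.
        idx = q * (n - 1)
        lo, hi = int(idx), min(int(idx) + 1, n - 1)
        frac = idx - lo
        return ordered[lo] * (1 - frac) + ordered[hi] * frac

    median = quantile(0.5)
    iqr = (quantile(0.75) - quantile(0.25)) or 1.0  # guard constant columns
    return [(x - median) / iqr for x in column]

# Example: annual incomes with one extreme outlier; the bulk of the data
# still lands in a small, comparable range after scaling.
incomes = [30_000, 45_000, 52_000, 61_000, 2_000_000]
scaled = robust_scale(incomes)
```

With min-max scaling, the 2,000,000 outlier would squash the other four values into a tiny interval near zero; with median/IQR scaling they remain spread over roughly one unit.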
Koay, Pei P. "(Re)presenting Human Population Database Projects: virtually designing and siting biomedical informatics ventures." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/27709.
Ph. D.
Samuel, Jarvie John. "Elicitation of Protein-Protein Interactions from Biomedical Literature Using Association Rule Discovery." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc30508/.
Radovanovic, Aleksandar. "Concept Based Knowledge Discovery from Biomedical Literature." Thesis, Online access, 2009. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_9861_1272229462.pdf.
Milosevic, Nikola. "A multi-layered approach to information extraction from tables in biomedical documents." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/a-multilayered-approach-to-information-extraction-from-tables-in-biomedical-documents(c2edce9c-ae7f-48fa-81c2-14d4bb87423e).html.
Raje, Satyajeet. "ResearchIQ: An End-To-End Semantic Knowledge Platform For Resource Discovery in Biomedical Research." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354657305.
Templeton, James Robert. "Trust and Trustworthiness: A Framework for Successful Design of Telemedicine." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/321.
Adejare, Adeboye A. Jr. "Equiformatics: Informatics Methods and Tools to Investigate and Address Health Disparities and Inequities." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623164833455566.
Lei, Xin. "Analyzing “Design + Medical” Collaboration Using Participatory Action Research (PAR): A Case Study of the Oxygen Saturation Data Display Project at Cincinnati Children’s Hospital Medical Center." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427983695.
Yvanoff, Marie. "LC sensor for biological tissue characterization /." Online version of thesis, 2008. http://hdl.handle.net/1850/8044.
Rahimi, Bahol. "Implementation of Health Information Systems." Licentiate thesis, Linköping University, MDA - Human Computer Interfaces, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15677.
Healthcare organizations now consider increased efficiency, reduced costs, improved patient care and quality of services, and safety when planning to implement new information and communication technology (ICT) based applications. However, in spite of enormous investment in health information systems (HIS), no convincing evidence of the overall benefits of HISs yet exists. Publishing studies that capture the effects of implementing and using ICT-based applications in healthcare may contribute to the emergence of an evidence-based health informatics that can serve as a platform for decisions made by policy makers, executives, and clinicians. Health informatics needs further studies that identify the factors affecting successful HIS implementation and capture the effects of HIS implementation. The purpose of the work presented in this thesis is to increase the available knowledge about the impact of the implementation and use of HISs in healthcare organizations. All the studies included in this thesis used qualitative research methods; a case study design and literature review were performed to collect data.
This thesis’s results highlight an increasing need to share knowledge, find methods to evaluate the impact of investments, and formulate indicators for success. It makes suggestions for developing or extending evaluation methods that can be applied to this area with a multi-actor perspective in order to understand the effects, consequences, and prerequisites that must be achieved for the successful implementation and use of IT in healthcare. The results also propose that HISs, particularly integrated computer-based patient records (ICPR), be introduced to fulfill a high number of organizational, individual-based, and socio-technical goals at different levels. It is therefore necessary to link the goals that HIS systems are to fulfill to short-term, middle-term, and long-term strategic goals. Another suggestion is that implementers and vendors should pay more attention to what has been published in the area to avoid future failures.
This thesis’s findings outline an updated structure for implementation planning. When implementing HISs in hospital and primary-care environments, this thesis suggests taking into consideration such strategic actions as management involvement and resource allocation, such tactical actions as integrating HIS with healthcare workflow, and such operational actions as user involvement, establishing compatibility between software and hardware, and education and training.
Wu, Tsung-Lin. "Classification models for disease diagnosis and outcome analysis." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44918.
Tucker, Jennifer. "Motivating Subjects: Data Sharing in Cancer Research." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29022.
Ph. D.
Choi, Ickwon. "Computational Modeling for Censored Time to Event Data Using Data Integration in Biomedical Research." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307969890.
Santamaria, Suzanne Lamar. "Development of an ontology of animals in context within the OBO Foundry framework from a SNOMED-CT extension and subset." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/32400.
Master of Science
Zink, Janet A. "Reducing Sepsis Mortality: A Cloud-Based Alert Approach." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5697.
Lindblad, Erik. "Designing a framework for simulating radiology information systems." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15211.
In this thesis, a very flexible framework for simulating RIS is designed to be used for Infobroker testing. Infobroker is an application developed by Mawell Svenska AB that connects RIS and PACS to achieve interoperability by enabling image and journal data transmission between radiology sites. To put the project in context, the field of medical informatics, RIS and PACS systems, and common protocols and standards are explored. A proof-of-concept implementation of the proposed design shows its potential and verifies that it works. The thesis concludes that a more specialized approach is preferable.
Ekman, Alexandra. "The use of the World Wide Web in epidemiological research /." Stockholm, 2006. http://diss.kib.ki.se/2006/91-7140-948-3/.
Gomez, William Ernesto Ardila. "Desenvolvimento de um sistema eletrônico para gestão de medicamentos não padronizados no Hospital das Clínicas da Faculdade de Medicina de Ribeirão Preto da Universidade de São Paulo (HCFMRP-USP)." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/17/17157/tde-06062017-165308/.
Introduction: Medicines are important elements in health care, especially those covered by the Brazilian Unified Healthcare System (Sistema Único de Saúde, SUS), and they represent a significant portion of its budget. The health infrastructure linked to the Hospital das Clínicas serves the northwest region of the state of São Paulo as well as other parts of the state and the country. It is therefore known as a reference center for highly complex treatments and, for this reason, frequently prescribes treatments with expensive drugs. An estimated 75.4% of the general budget of the HCFMRP-USP complex is dedicated to the acquisition of this type of medication, i.e., non-standardized (special) medication, amounting to approximately USD $14,434,300 (2015). Tools for controlling not only prescription but also acquisition and use therefore become critical to optimizing hospital management, with the aim of moving from a reactive to a proactive role in which decision-making is based on the history and indicators of the cases presented in the complex. Objective: To develop an electronic, Web-based platform that allows the management, documentation, traceability and interrelation of the components of the decision chain for non-standardized medicines at the Clinics Hospital of the Ribeirão Preto Medical School of the University of São Paulo. Methods: Development of software whose main features are the tracking, monitoring and control of the decision chain for drugs considered special by the institution. The software also supports decision-making, the generation of indicators in real time, and the administrative decisions required by the regulatory control of the supply system for high-cost medicines in each of its components.
Results: Improved communication among the pharmacy units, the requesting physician, the Department of Health Attention (DAS) and the other units of the HC-FMRP-USP complex that make up the decision chain for special drug supply. Moreover, the system organizes a historical record of data from which indicators for the assistance plan can easily be derived, acting as a transforming agent. Conclusions: An electronic platform was developed that enables the storage, management and processing of data and information across the decision chain for non-standardized medicine supply.
Serique, Kleberson Junio do Amaral. "Anotação de imagens radiológicas usando a web semântica para colaboração científica e clínica." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-10092012-155249/.
This work is part of a larger project, the Annotation and Markup Project, which aims to create a medical knowledge base about radiological images to identify, monitor and reason about tumors in cancer research and medical practice. The project is being developed in conjunction with the Laboratory of Image Informatics at Stanford University. The specific problem addressed in this work is that most of the semantic information about radiological images is not captured and related to them using terms from biomedical ontologies and standards, such as RadLex or DICOM, which makes it impossible for computers to evaluate the images automatically, to search for them in hospital databases using semantic criteria, and so on. To address this issue, radiologists need an easy, intuitive and affordable computational solution for adding this semantic information. In this work, a Web solution for adding the information was developed: the ePAD system. It allows the retrieval of medical images, such as those available in hospital information systems (PACS), the creation of contours around tumor lesions, the association of ontological terms with these contours, and the storage of these terms in a knowledge base. The main challenges of this work involved the creation of intuitive interfaces using Rich Internet Application technology operating from a standard Web browser. The first functional prototype of ePAD reached its goal of proving technical feasibility: it was able to do the same basic annotation job as desktop applications, such as OsiriX-iPad, without the same overhead, and it showed the medical community a useful tool that generated interest from potential early users.
Al, Mazari Ali. "Computational methods for the analysis of HIV drug resistance dynamics." Thesis, The University of Sydney, 2007. http://hdl.handle.net/2123/1907.
Despite the extensive quantitative and qualitative knowledge about therapeutic regimens and the molecular biology of HIV/AIDS, the eradication of HIV infection cannot be achieved with available antiretroviral regimens. HIV drug resistance remains the most challenging factor in the application of approved antiretroviral agents. Previous investigations and existing HIV/AIDS models and algorithms have not enabled the development of long-lasting and preventive drug agents. Therefore, the analysis of the dynamics of drug resistance and the development of sophisticated HIV/AIDS analytical algorithms and models are critical for the development of new, potent antiviral agents, and for the greater understanding of the evolutionary behaviours of HIV. This study presents novel computational methods for the analysis of drug-resistance dynamics, including: viral sequences, phenotypic resistance, immunological and virological responses and key clinical data, from HIV-infected patients at Royal Prince Alfred Hospital in Sydney. The lability of immunological and virological responses is analysed in the context of the evolution of antiretroviral drug-resistance mutations. A novel Bayesian algorithm is developed for the detection and classification of neutral and adaptive mutational patterns associated with HIV drug resistance. To simplify and provide insights into the multifactorial interactions between viral populations, immune-system cells, drug resistance and treatment parameters, a Bayesian graphical model of drug-resistance dynamics is developed; the model supports the exploration of the interdependent associations among these dynamics.
Campos, David Emmanuel Marques. "Mining biomedical information from scientific literature." Doctoral thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/12853.
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that was also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the task of text mining that intends to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. Such an approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget.
We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
Krive, Jacob. "Effectiveness of Evidence-Based Computerized Physician Order Entry Medication Order Sets Measured by Health Outcomes." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/202.
Botelho, Maria Lucia de Azevedo. "Concepção, desenvolvimento e avaliação de um sistema de ensino virtual." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261133.
Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Abstract: A detailed search for digital programs applied to teaching was performed to identify the resources available on the world market. A survey of public and private libraries was also carried out, looking for the requirements of the academic community regarding computational resources for this purpose. It was found that, although considered important, there were few alternatives for delivering virtual classes that demand little operational effort and a simple infrastructure. The goal of this work was to develop and assess a system - VirtuAula - to assemble and present on-line and off-line virtual classes, requiring little informatics experience from its users and fitting the resources commonly available in universities. The basic functional requirements defined for VirtuAula were a standard interface (identical appearance across all operations), user-friendly help at all levels, and the ability to reuse already prepared slide shows and texts. Two work platforms were elaborated: one for teachers, which allows them to create, change and deliver a virtual class, and a second platform for students to attend it. For the system assessment, two test plans were used: standard tools such as the Software Quality Evaluation Criteria (a grading method), and questionnaires filled in by the participating teachers and students. The off-line classes reached the maximum score in all evaluation topics, while on-line classes averaged above 1.78 (on a 0-2 scale). All the participating teachers answered that they liked delivering classes with the system; 75% declared that they would like to use it in their work, and 25% said they might use it. Among the students, only 2.33% disliked the virtual classes, and 4.65% said they would not like to have more classes delivered with the system in their courses.
Examining the reasons for the lower performance of on-line classes, the major cause found was the difficulty of carrying out such events over the commercial Internet, which suffers under heavy data traffic. Among the conclusions presented, it stands out that VirtuAula is an interesting alternative for Brazilian public educational institutions: its application is original, with no national equivalent combining all these features, and it has low operational cost, with no licensing burden or legal risk, since free-of-charge use can be granted.
Doctorate
Biomedical Engineering
Doctor of Electrical Engineering
Gudivada, Ranga Chandra. "Discovery and Prioritization of Biological Entities Underlying Complex Disorders by Phenome-Genome Network Integration." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1195161740.
Rios, Anthony. "Deep Neural Networks for Multi-Label Text Classification: Application to Coding Electronic Medical Records." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/71.
Cabral, Braulio J. "Exploring Factors Influencing Information Technology Portfolio Selection Process in Government-Funded Bioinformatics Projects." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2957.
Vlachos, Andreas. "Semi-supervised learning for biomedical information extraction." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608805.
Nelson, Justin. "The Development of a Human Operator Informatic Model (HOIM) incorporating the Effects of Non-Invasive Brain Stimulation on Information Processing while performing Multi-Attribute Task Battery (MATB)." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1461066834.
Jilkine, Petr. "Application of information fusion methods to biomedical data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23615.pdf.
Guo, Yufan. "Automatic analysis of information structure in biomedical literature." Thesis, University of Cambridge, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648829.
Full text
Thomas, Philippe. "Robust relationship extraction in the biomedical domain." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17372.
Full textFor several centuries, a great wealth of human knowledge has been communicated by natural language, often recorded in written documents. In the life sciences, an exponential increase of scientific articles has been observed, hindering the effective and fast reconciliation of previous finding into current research projects. This thesis studies the automatic extraction of relationships between named entities. Within this topic, it focuses on increasing robustness for relationship extraction. First, we evaluate the use of ensemble methods to improve performance using data provided by the drug-drug-interaction challenge 2013. Ensemble methods aggregate several classifiers into one model, increasing robustness by reducing the risk of choosing an inappropriate single classifier. Second, this work discusses the problem of applying relationship extraction to documents with unknown text characteristics. Robustness of a text mining component is assessed by cross-learning, where a model is evaluated on a corpus different from the training corpus. We apply self-training, a semi-supervised learning technique, in order to increase cross-learning performance and show that it is more robust in comparison to a classifier trained on manually annotated text only. Third, we investigate the use of distant supervision to overcome the need of manually annotated training instances. Corpora derived by distant supervision are inherently noisy, thus benefiting from robust relationship extraction methods. We compare two different methods and show that both approaches achieve similar performance as fully supervised classifiers, evaluated in the cross-learning scenario. To facilitate the usage of information extraction results, including those developed within this thesis, we develop the semantic search engine GeneView. We discuss computational requirements to build this resource and present some applications utilizing the data extracted by different text-mining components.
Sahoo, Satya Sanket. "Semantic Provenance: Modeling, Querying, and Application in Scientific Discovery." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1282847715.
Full text
Koroleva, Anna. "Assisted authoring for avoiding inadequate claims in scientific reporting." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS021.
Full text
In this thesis, we report on our work on developing Natural Language Processing (NLP) algorithms to aid readers and authors of scientific (biomedical) articles in detecting spin (distorted presentation of research results). Our algorithm focuses on spin in abstracts of articles reporting Randomized Controlled Trials (RCTs). We studied the phenomenon of spin from the linguistic point of view to create a description of its textual features. We annotated a set of corpora for the key tasks of our spin detection pipeline: extraction of declared (primary) and reported outcomes, assessment of semantic similarity of pairs of trial outcomes, and extraction of relations between reported outcomes and their statistical significance levels. Besides, we annotated two smaller corpora for identification of statements of similarity of treatments and of within-group comparisons. We developed and tested a number of rule-based and machine learning algorithms for the key tasks of spin detection (outcome extraction, outcome similarity assessment, and outcome-significance relation extraction). The best performance was shown by a deep learning approach that consists in fine-tuning deep pre-trained domain-specific language representations (BioBERT and SciBERT models) for our downstream tasks. This approach was implemented in our spin detection prototype system, called De-Spin, released as open source code. Our prototype includes some other important algorithms, such as text structure analysis (identification of the abstract of an article, identification of sections within the abstract), detection of statements of similarity of treatments and of within-group comparisons, and extraction of data from trial registries. Identification of abstract sections is performed with a deep learning approach using the fine-tuned BioBERT model, while other tasks are performed using a rule-based approach. Our prototype system includes a simple annotation and visualization interface.
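As an illustration of the rule-based text structure analysis mentioned in this abstract, a minimal splitter for structured abstracts might look like the sketch below. The heading list and regex are assumptions for illustration, not De-Spin's actual rules (the prototype uses a fine-tuned BioBERT model for this task).

```python
import re

# Common headings in structured RCT abstracts (an assumed, non-exhaustive list).
SECTION_HEADINGS = ["BACKGROUND", "OBJECTIVE", "METHODS", "RESULTS", "CONCLUSIONS"]

def split_abstract(text):
    """Split a structured abstract into {heading: body} using 'HEADING:' markers."""
    pattern = r"\b(" + "|".join(SECTION_HEADINGS) + r"):\s*"
    parts = re.split(pattern, text)
    # re.split with a capturing group yields [preamble, heading, body, heading, body, ...]
    sections = {}
    for i in range(1, len(parts) - 1, 2):
        sections[parts[i]] = parts[i + 1].strip()
    return sections
```

A spin-detection pipeline could then, for example, compare outcomes mentioned in the RESULTS section against the declared primary outcome.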
Johannsson, Dagur Valberg. "Biomedical Information Retrieval based on Document-Level Term Boosting." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8981.
Full text
There are several problems with information retrieval over biomedical information. Common retrieval methods tend to fall short when searching in this domain. With the ever-increasing amount of information available, researchers widely agree that means to precisely retrieve needed information are vital to exploiting all available knowledge. In an effort to increase the precision of retrieval within biomedical information, we have created an approach that gives every term in a document a context weight based on the context's domain-specific data. We include these context weights in document ranking by combining them with existing ranking models. Combining context weights with existing models gives us document-level term boosting, where the context of the queried terms within a document positively or negatively affects the document's ranking score. We tested this approach by implementing a full search-engine prototype and evaluating it on a document collection from the biomedical domain. Our work shows that this type of score boosting has little effect on overall retrieval precision. We conclude that the approach, as implemented in our prototype, is not necessarily a good means of increasing precision in biomedical retrieval systems.
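The combination of context weights with an existing ranking model can be sketched as follows. BM25 serves here as the base model for illustration; the weight table, document representation, and parameter values are hypothetical, not taken from the thesis prototype.

```python
import math

def base_score(tf, doc_len, df, n_docs, k=1.5, b=0.75, avg_len=100.0):
    """A BM25-style per-term score (standard formula; parameter values are illustrative)."""
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k + 1) / (tf + k * (1 - b + b * doc_len / avg_len))

def boosted_score(query_terms, doc, context_weights, **bm25_args):
    """Sum per-term scores, each scaled by a document-level context weight for that
    term (1.0 = neutral, >1 boosts, <1 penalises the document's ranking score)."""
    total = 0.0
    for term in query_terms:
        tf = doc["tf"].get(term, 0)
        if tf == 0:
            continue
        weight = context_weights.get((doc["id"], term), 1.0)
        total += weight * base_score(tf, doc["len"], doc["df"][term], doc["n_docs"], **bm25_args)
    return total
```

The design keeps the base ranking model untouched: boosting is a multiplicative factor per (document, term) pair, so a neutral weight of 1.0 reduces exactly to the base model.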
Canevet, Catherine. "Automating the gathering of relevant information from biomedical text." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3849.
Full text
Nunes, Tiago Santos Barata. "A sentence-based information retrieval system for biomedical corpora." Master's thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/12698.
Full text
Modern advances in experimental methods and high-throughput technology in the biomedical domain are causing a fast-paced growth in the volume of published scientific literature in the field. While a myriad of structured data repositories for biological knowledge have been sprouting over the last decades, Information Retrieval (IR) systems are increasingly replacing them. IR systems are easier to use due to their flexibility and ability to interpret user needs in the form of queries, typically formed by a few words. Traditional document retrieval systems return entire documents, which may require a lot of subsequent reading to find the specific information sought, frequently contained in a small passage of only a few sentences. Additionally, IR often fails to find what is wanted because the words used in the query are lexically different from, despite being semantically aligned with, the words used in relevant sources. This thesis focuses on the development of sentence-based information retrieval approaches that, for a given user query, seek relevant sentences from the scientific literature that answer the user's information need. The presented work is two-fold. First, exploratory research experiments were conducted to identify features of informative sentences in biomedical texts. A supervised machine learning method was used for this purpose. Second, an information retrieval system for informative sentences was developed. It supports free-text and concept-based queries, search results are enriched with relevant concept annotations, and sentences can be ranked using multiple configurable strategies.
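A minimal sketch of sentence-level retrieval by term overlap is shown below: a naive splitter and scorer, far simpler than the configurable ranking strategies and concept annotations described in this abstract, but illustrating why returning sentences instead of whole documents reduces reading effort.

```python
import re

def sentences(text):
    """Naive sentence splitter (real systems use trained splitters)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def rank_sentences(query, text, top_k=3):
    """Score each sentence by the fraction of query terms it contains,
    and return the top_k matching sentences, best first."""
    q_terms = set(query.lower().split())
    scored = []
    for sent in sentences(text):
        terms = set(re.findall(r"\w+", sent.lower()))
        overlap = len(q_terms & terms) / len(q_terms)
        if overlap > 0:
            scored.append((overlap, sent))
    scored.sort(key=lambda p: p[0], reverse=True)
    return [s for _, s in scored[:top_k]]
```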
Lee, Lawrence Chet-Lun. "Text mining of point mutation information from biomedical literature." Diss., Search in ProQuest Dissertations & Theses. UC Only, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3339194.
Full text
Leroy, Gondy, Hsinchun Chen, Jesse D. Martinez, Shauna Eggers, Ryan R. Falsey, Kerri L. Kislin, Zan Huang, et al. "Genescene: Biomedical Text And Data Mining." Wiley Periodicals, Inc, 2005. http://hdl.handle.net/10150/105791.
Full text
To access the content of digital texts efficiently, it is necessary to provide more sophisticated access than keyword-based searching. Genescene provides biomedical researchers with research findings and background relations automatically extracted from text and experimental data. These provide a more detailed overview of the information available. The extracted relations were evaluated by qualified researchers and are precise. A qualitative ongoing evaluation of the current online interface indicates that this method of searching the literature is more useful and efficient than keyword-based searching.
Hakenberg, Jörg. "Mining relations from the biomedical literature." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2010. http://dx.doi.org/10.18452/16073.
Full text
Text mining deals with the automated annotation of texts and the extraction of facts from textual data for subsequent analysis. Such texts range from short articles and abstracts to large documents, for instance web pages and scientific articles, but also include textual descriptions in otherwise structured databases. This thesis focuses on two key problems in biomedical text mining: relationship extraction from biomedical abstracts (in particular, protein-protein interactions) and a prerequisite step, named entity recognition (again focusing on proteins). This thesis presents goals, challenges, and typical approaches for each of the main building blocks in biomedical text mining. We present our own approaches for named entity recognition of proteins and relationship extraction of protein-protein interactions. For the former, we describe two methods, one set up as a classification task, the other based on dictionary matching. For relationship extraction, we develop a methodology to automatically annotate large amounts of unlabeled data for relations, and make use of such annotations in a pattern matching strategy. This strategy first extracts similarities between sentences that describe relations, storing them as consensus patterns. We develop a sentence alignment approach that introduces multi-layer alignment, making use of multiple annotations per word. For the task of extracting protein-protein interactions, empirical results show that our methodology performs comparably to existing approaches that require a large amount of human intervention, either for annotation of data or creation of models.
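The dictionary-matching route to protein name recognition can be sketched as a longest-match scan over tokens. The lexicon below is a toy example for illustration, not the thesis's actual resource, and real systems also handle case variants, tokenization quirks, and ambiguous names.

```python
def dictionary_ner(text, lexicon):
    """Longest-match dictionary tagging over whitespace tokens.
    `lexicon` maps surface forms (possibly multi-word) to an entity type."""
    tokens = text.split()
    max_len = max(len(name.split()) for name in lexicon)
    hits, i = [], 0
    while i < len(tokens):
        # Try the longest candidate span first, shrinking until a match is found.
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + span]).strip(".,;")
            if candidate in lexicon:
                hits.append((candidate, lexicon[candidate]))
                i += span
                break
        else:
            i += 1
    return hits
```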
Reeve, Lawrence H. Han Hyoil. "Semantic annotation and summarization of biomedical text /." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/1779.
Full text
Candito, Antonio. "Integrazione informatica dei sistemi di medicina nucleare nel sistema informativo ospedaliero." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/4055/.
Full text
Yoo, Illhoi Hu Xiaohua. "Semantic text mining and its application in biomedical domain /." Philadelphia, Pa. : Drexel University, 2006. http://dspace.library.drexel.edu/handle/1860%20/899.
Full textJimeno, Yepes Antonio José. "Ontology refinement for improved information retrieval in the biomedical domain." Doctoral thesis, Universitat Jaume I, 2009. http://hdl.handle.net/10803/384552.
Full text
Yu, Zhiguo. "Cooperative Semantic Information Processing for Literature-Based Biomedical Knowledge Discovery." UKnowledge, 2013. http://uknowledge.uky.edu/ece_etds/33.
Full text
Grother, Ethan Mark. "Mobile device reference apps to monitor and display biomedical information." Thesis, Kansas State University, 2017. http://hdl.handle.net/2097/35488.
Full textDepartment of Electrical and Computer Engineering
Steven Warren
Smart phones and other mobile technologies can be used to collect and display physiological information from subjects in various environments – clinical or otherwise. This thesis highlights software app reference designs that allow a smart phone to receive, process, and display biomedical data. Two research projects, described below and in the thesis body, guided this development. Android Studio was chosen to develop the phone application, after exploring multiple development options (including a cross-platform development tool), because it reduced the development time and the number of required programming languages. The first project, supported by the Kansas State University Johnson Cancer Research Center (JCRC), required a mobile device software application that could determine the hemoglobin level of a blood sample based on the most prevalent color in an image acquired by a phone camera, where the image is the result of a chemical reaction between the blood sample and a reagent. To calculate the hemoglobin level, a circular region of interest is identified from within the original image using image processing, and color information from that region of interest is input to a model that provides the hemoglobin level. The algorithm to identify the region of interest is promising but needs additional development to work properly at different image resolutions. The associated model also needs additional work, as described in the text. The second project, in collaboration with Heartspring, Wichita, KS, required a mobile application to display information from a sensor bed used to gather nighttime physiological data from severely disabled autistic children. In this case, a local data server broadcasts these data over a wireless network. The phone application gathers information about the bed over this wireless network and displays these data in a user-friendly manner. This approach works well when sending basic information but experiences challenges when sending images.
Future work for both project applications includes error handling and user interface improvements. For the JCRC application, a better way to account for image resolution changes needs to be developed, in addition to a means to determine whether the region of interest is valid. For the Heartspring application, future work should include improving image transmissions.
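The ROI-averaging and calibration idea from the JCRC project can be sketched as follows. The pixel representation, slope, and intercept are hypothetical placeholders for illustration; they are not the project's calibrated model, which the thesis notes still needs additional work.

```python
def mean_roi_color(pixels, center, radius):
    """Average RGB over a circular region of interest.
    `pixels` is a dict mapping (x, y) coordinates to (r, g, b) tuples."""
    cx, cy = center
    inside = [rgb for (x, y), rgb in pixels.items()
              if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
    n = len(inside)
    return tuple(sum(px[c] for px in inside) / n for c in range(3))

def hemoglobin_g_dl(mean_rgb, slope=-0.05, intercept=20.0):
    """Hypothetical linear calibration from reaction color to hemoglobin level:
    the darker (less green/blue) the reaction, the higher the estimate.
    Slope and intercept are placeholder values, not calibrated ones."""
    r, g, b = mean_rgb
    return intercept + slope * ((g + b) / 2.0)
```

Averaging over the ROI rather than using single pixels makes the estimate less sensitive to sensor noise, which is one reason a robust ROI detector matters at varying image resolutions.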
Tan, He. "Aligning and Merging Biomedical Ontologies." Licentiate thesis, Linköping : Univ, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6201.
Full text