
Journal articles on the topic "Annotations (Provenance)"


Listed below are the 50 best journal articles for research on the subject "Annotations (Provenance)". Each entry gives the full citation and, where available in the metadata, the article's abstract.


1

Wang, Liwei, Henning Koehler, Ke Deng, Xiaofang Zhou, and Shazia Sadiq. "Flexible Provenance Tracing". International Journal of Systems and Service-Oriented Engineering 2, no. 2 (April 2011): 1–20. http://dx.doi.org/10.4018/jssoe.2011040101.

Abstract:
The description of the origins of a piece of data and the transformations by which it arrived in a database is termed the data provenance. The importance of data provenance has already been widely recognized in database community. The two major approaches to representing provenance information use annotations and inversion. While annotation is metadata pre-computed to include the derivation history of a data product, the inversion method finds the source data based on the situation that some derivation process can be inverted. Annotations are flexible to represent diverse provenance metadata but the complete provenance data may outsize data itself. Inversion method is concise by using a single inverse query or function but the provenance needs to be computed on-the-fly. This paper proposes a new provenance representation which is a hybrid of annotation and inversion methods in order to achieve combined advantage. This representation is adaptive to the storage constraint and the response time requirement of provenance inversion on-the-fly.
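As a concrete, if toy, illustration of the annotation-versus-inversion trade-off described above, the Python sketch below (not taken from the paper; all identifiers and values are invented) contrasts eagerly stored derivation metadata with provenance recomputed by inverting a trivially invertible transformation.

```python
# Two classic provenance representations in miniature (illustrative only).
source = {"s1": 3, "s2": 5, "s3": 5}

# Annotation: each derived tuple eagerly stores the IDs of its source tuples,
# so lookups are instant but the metadata can outgrow the data itself.
derived = {
    "d1": {"value": 6, "from": ["s1"]},
    "d2": {"value": 10, "from": ["s2"]},
    "d3": {"value": 10, "from": ["s3"]},
}

# Inversion: nothing extra is stored; the (here invertible) doubling step is
# undone on demand, which is compact but must be computed at query time.
def invert_doubling(output_value):
    return [key for key, value in source.items() if value * 2 == output_value]

print(derived["d2"]["from"])   # ['s2']        (precomputed annotation)
print(invert_doubling(10))     # ['s2', 's3']  (candidates recovered by inversion)
```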
2

Bourgaux, Camille, and Ana Ozaki. "Querying Attributed DL-Lite Ontologies Using Provenance Semirings". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2719–26. http://dx.doi.org/10.1609/aaai.v33i01.33012719.

Abstract:
Attributed description logic is a recently proposed formalism, targeted for graph-based representation formats, which enriches description logic concepts and roles with finite sets of attribute-value pairs, called annotations. One of the most important uses of annotations is to record provenance information. In this work, we first investigate the complexity of satisfiability and query answering for attributed DL-LiteR ontologies. We then propose a new semantics, based on provenance semirings, for integrating provenance information with query answering. Finally, we establish complexity results for satisfiability and query answering under this semantics.
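For readers unfamiliar with the semiring framework the paper builds on, here is a minimal, hedged sketch of how-provenance in the standard provenance-semiring style: each source fact carries a variable, joins multiply annotations, and alternative derivations are added. The relations, variables, and query below are invented for illustration and are not from the paper.

```python
# Toy how-provenance over the polynomial semiring (illustrative only).
R = {("alice", "a1"): "r1", ("alice", "a2"): "r2", ("bob", "a2"): "r3"}
S = {("a1", "curated"): "s1", ("a2", "curated"): "s2"}

def who_has_curated_annotation(R, S):
    """SELECT x FROM R(x, y) JOIN S(y, 'curated'): the join multiplies
    annotations, and projecting away y adds up alternative derivations."""
    provenance = {}
    for (x, y1), p in R.items():
        for (y2, label), q in S.items():
            if y1 == y2 and label == "curated":
                monomial = f"{p}*{q}"
                provenance[x] = f"{provenance[x]} + {monomial}" if x in provenance else monomial
    return provenance

print(who_has_curated_annotation(R, S))
# {'alice': 'r1*s1 + r2*s2', 'bob': 'r3*s2'}
```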
3

Zou, Haochen, Dejian Wang, and Yang Xiao. "Annolog: A Query Processing Framework for Modelling and Reasoning with Annotated Data". 電腦學刊 34, no. 2 (April 2023): 081–97. http://dx.doi.org/10.53106/199115992023043402007.

Abstract:
Data annotation is the categorization and labelling of data for applications such as machine learning, artificial intelligence, and data integration. The categorization and labelling are done to achieve a specific use case in relation to solving problems. Existing data annotation systems and modules face imperfections such as knowledge and annotation not being formally integrated, a narrow application range, and difficulty in applying them to existing database management applications. To analyze and process annotated data, obtain the relationship between different annotations, and capture metainformation in data provenance and probabilistic databases, in this paper we design a back-end query processing framework as a supplementary interface for the database management system to extend operation to datasets and boost efficiency. The framework utilizes the Java language and the MVC model for development to achieve a lightweight, cross-platform, and highly adaptable design. The contribution of this paper is mainly reflected in two aspects. The first contribution is to implement query processing, provenance semirings, and semiring homomorphisms over annotated data. The second contribution is to combine query processing and provenance with SQL statements in order to enable the database manager to invoke operations on annotations.
4

Nadendla, Suvarna, Rebecca Jackson, James Munro, Federica Quaglia, Bálint Mészáros, Dustin Olley, Elizabeth T. Hobbs et al. "ECO: the Evidence and Conclusion Ontology, an update for 2022". Nucleic Acids Research 50, no. D1 (November 19, 2021): D1515–D1521. http://dx.doi.org/10.1093/nar/gkab1025.

Abstract:
Abstract The Evidence and Conclusion Ontology (ECO) is a community resource that provides an ontology of terms used to capture the type of evidence that supports biomedical annotations and assertions. Consistent capture of evidence information with ECO allows tracking of annotation provenance, establishment of quality control measures, and evidence-based data mining. ECO is in use by dozens of data repositories and resources with both specific and general areas of focus. ECO is continually being expanded and enhanced in response to user requests as well as our aim to adhere to community best-practices for ontology development. The ECO support team engages in multiple collaborations with other ontologies and annotating groups. Here we report on recent updates to the ECO ontology itself as well as associated resources that are available through this project. ECO project products are freely available for download from the project website (https://evidenceontology.org/) and GitHub (https://github.com/evidenceontology/evidenceontology). ECO is released into the public domain under a CC0 1.0 Universal license.
5

Martin, Chris J., Mohammed H. Haji, Peter K. Jimack, Michael J. Pilling, and Peter M. Dew. "A user-orientated approach to provenance capture and representation for in silico experiments, explored within the atmospheric chemistry community". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367, no. 1898 (July 13, 2009): 2753–70. http://dx.doi.org/10.1098/rsta.2009.0044.

Abstract:
We present a novel user-orientated approach to provenance capture and representation for in silico experiments, contrasted against the more systems-orientated approaches that have been typical within the e-Science domain. In our approach, we seek to capture the scientist's reasoning in the form of annotations as an experiment evolves, while using the scientist's terminology in the representation of process provenance. Our user-orientated approach is applied in a case study within the atmospheric chemistry domain: we consider the design, development and evaluation of an electronic laboratory notebook, a provenance capture and storage tool, for iterative model development.
6

Pei, Jisheng, and Xiaojun Ye. "Information-Balance-Aware Approximated Summarization of Data Provenance". Scientific Programming 2017 (September 12, 2017): 1–11. http://dx.doi.org/10.1155/2017/4504589.

Abstract:
Extracting useful knowledge from data provenance information has been challenging because provenance information is often overwhelmingly enormous for users to understand. Recently, it has been proposed that we may summarize data provenance items by grouping semantically related provenance annotations so as to achieve concise provenance representation. Users may provide their intended use of the provenance data in terms of provisioning, and the quality of provenance summarization could be optimized for smaller size and closer distance between the provisioning results derived from the summarization and those from the original provenance. However, apart from the intended provisioning use, we notice that more dedicated and diverse user requirements can be expressed and considered in the summarization process by assigning importance weights to provenance elements. Moreover, we introduce information balance index (IBI), an entropy based measurement, to dynamically evaluate the amount of information retained by the summary to check how it suits user requirements. An alternative provenance summarization algorithm that supports manipulation of information balance is presented. Case studies and experiments show that, in summarization process, information balance can be effectively steered towards user-defined goals and requirement-driven variants of the provenance summarizations can be achieved to support a series of interesting scenarios.
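The paper defines its own information balance index; purely as intuition for an entropy-style measure over a provenance summary, the hedged sketch below computes the Shannon entropy of the size distribution of summary groups. The grouping and the use of plain Shannon entropy are assumptions for illustration, not the paper's formulation.

```python
import math

def shannon_entropy(group_sizes):
    """Entropy (bits) of the distribution induced by grouping provenance
    annotations into summary groups: higher means information is spread
    more evenly across the summary."""
    total = sum(group_sizes)
    return -sum((n / total) * math.log2(n / total) for n in group_sizes if n)

# Example: 12 provenance annotations collapsed into four summary groups.
balanced = [3, 3, 3, 3]   # information spread evenly
skewed = [9, 1, 1, 1]     # one group dominates the summary
print(round(shannon_entropy(balanced), 3))  # 2.0
print(round(shannon_entropy(skewed), 3))    # 1.207
```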
7

Stork, Lise, Andreas Weber, Eulàlia Miracle, and Katherine Wolstencroft. "A Workflow for the Semantic Annotation of Field Books and Specimen Labels". Biodiversity Information Science and Standards 2 (June 13, 2018): e25839. http://dx.doi.org/10.3897/biss.2.25839.

Abstract:
Geographical and taxonomical referencing of specimens and documented species observations from within and across natural history collections is vital for ongoing species research. However, much of the historical data, such as field books, diaries and specimens, are challenging to work with. They are computationally inaccessible, refer to historical place names and taxonomies, and are written in a variety of languages. In order to address these challenges and elucidate historical species observation data, we developed a workflow to (i) crowd-source semantic annotations from handwritten species observations, (ii) transform them into RDF (Resource Description Framework) and (iii) store and link them in a knowledge base. Instead of full transcription, we directly annotate digital field book scans with key concepts that are based on Darwin Core standards. Our workflow stresses the importance of verbatim annotation. The interpretation of the historical content, such as resolving a historical taxon to a current one, can be done by individual researchers after the content is published as linked open data. Through the storage of annotation provenance, who created the annotation and when, we allow multiple interpretations of the content to exist in parallel, stimulating scientific discourse. The semantic annotation process is supported by a web application, the Semantic Field Book (SFB)-Annotator, driven by an application ontology. The ontology formally describes the content and meta-data required to semantically annotate species observations. It is based on the Darwin Core standard (DwC), Uberon and the Geonames ontology. The provenance of annotations is stored using the Web Annotation Data Model. Adhering to the principles of FAIR (Findable, Accessible, Interoperable & Reusable) and Linked Open Data, the content of the specimen collections can be interpreted homogeneously and aggregated across datasets. This work is part of the Making Sense project: makingsenseproject.org. The project aims to disclose the content of a natural history collection: a 17,000-page account of the exploration of the Indonesian Archipelago between 1820 and 1850 (Natuurkundige Commissie voor Nederlands-Indie). With a knowledge base, researchers are given easy access to the primary sources of natural history collections. For their research, they can aggregate species observations, construct rich queries to browse through the data and add their own interpretations regarding the meaning of the historical content.
8

Bonatti, Piero A., Aidan Hogan, Axel Polleres, and Luigi Sauro. "Robust and scalable Linked Data reasoning incorporating provenance and trust annotations". Journal of Web Semantics 9, no. 2 (July 2011): 165–201. http://dx.doi.org/10.1016/j.websem.2011.06.003.
9

Hernández, Daniel, Luis Galárraga, and Katja Hose. "Computing how-provenance for SPARQL queries via query rewriting". Proceedings of the VLDB Endowment 14, no. 13 (September 2021): 3389–401. http://dx.doi.org/10.14778/3484224.3484235.

Abstract:
Over the past few years, we have witnessed the emergence of large knowledge graphs built by extracting and combining information from multiple sources. This has propelled many advances in query processing over knowledge graphs, however the aspect of providing provenance explanations for query results has so far been mostly neglected. We therefore propose a novel method, SPARQLprov, based on query rewriting, to compute how-provenance polynomials for SPARQL queries over knowledge graphs. Contrary to existing works, SPARQLprov is system-agnostic and can be applied to standard and already deployed SPARQL engines without the need of customized extensions. We rely on spm-semirings to compute polynomial annotations that respect the property of commutation with homomorphisms on monotonic and non-monotonic SPARQL queries without aggregate functions. Our evaluation on real and synthetic data shows that SPARQLprov over standard engines incurs an acceptable runtime overhead w.r.t. the original query, competing with state-of-the-art solutions for how-provenance computation.
10

Moraes, Pedro Luís Rodrigues de. "Miscellaneous notes on specimens collected in Surat, Johanna Island and Cácota de Suratá, with emphasis on the Linnaean Herbarium (LINN)". Phytotaxa 603, no. 1 (July 17, 2023): 43–59. http://dx.doi.org/10.11646/phytotaxa.603.1.3.

Abstract:
The aim of this paper was to investigate whether specimens in the Linnaean Herbarium (LINN), with Linnaean annotations of provenance as from Suratte/Suratt or Johanna Island, would be original material for their respective species names. As Linnaeus did not explicitly cite a specimen in the protolog of those names, neither the collector for most of them, the possible origin of each individual specimen was checked and searched from independent evidence, such as an explicit annotation, or a link to a dated list or letter among Linnaeus’s surviving correspondence, or from Linnaeus’s Paper Slips. Regarding the typification of those names, published choices of type have been found, but on a careful evaluation some were found to be ineffective or supersedable. This was the case for Cenchrus muricatus L. and Utricularia stellaris L.f., whose lectotypifications are proposed here.
11

Zhang, Qian, Yang Cao, Qiwen Wang, Duc Vu, Priyaa Thavasimani, Timothy McPhillips, Paolo Missier et al. "Revealing the Detailed Lineage of Script Outputs Using Hybrid Provenance". International Journal of Digital Curation 12, no. 2 (August 13, 2018): 390–408. http://dx.doi.org/10.2218/ijdc.v12i2.585.

Abstract:
We illustrate how combining retrospective and prospective provenance can yield scientifically meaningful hybrid provenance representations of the computational histories of data produced during a script run. We use scripts from multiple disciplines (astrophysics, climate science, biodiversity data curation, and social network analysis), implemented in Python, R, and MATLAB, to highlight the usefulness of diverse forms of retrospective provenance when coupled with prospective provenance. Users provide prospective provenance, i.e., the conceptual workflows latent in scripts, via simple YesWorkflow annotations, embedded as script comments. Runtime observables can be linked to prospective provenance via relational views and queries. These observables could be found hidden in filenames or folder structures, be recorded in log files, or they can be automatically captured using tools such as noWorkflow or the DataONE RunManagers. The YesWorkflow toolkit, example scripts, and demonstration code are available via an open source repository.
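YesWorkflow declares the prospective workflow through structured comments embedded in an ordinary script. The fragment below is a hedged sketch of what such annotations can look like in Python; the block and port names are invented, and the exact tag vocabulary should be checked against the YesWorkflow documentation.

```python
# @BEGIN clean_station_data
# @PARAM station_id
# @IN raw_csv @URI file:data/{station_id}_raw.csv
# @OUT clean_csv @URI file:data/{station_id}_clean.csv
import csv

def clean_station_data(station_id):
    # @BEGIN drop_missing_rows
    # @IN raw_csv
    # @OUT kept_rows
    with open(f"data/{station_id}_raw.csv", newline="") as src:
        kept_rows = [row for row in csv.DictReader(src)
                     if row.get("temp_c") not in ("", "NA", None)]
    # @END drop_missing_rows

    # @BEGIN write_clean_file
    # @IN kept_rows
    # @OUT clean_csv
    with open(f"data/{station_id}_clean.csv", "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=["date", "temp_c"])
        writer.writeheader()
        writer.writerows({"date": r["date"], "temp_c": r["temp_c"]} for r in kept_rows)
    # @END write_clean_file
# @END clean_station_data
```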
12

Kasif, Simon, and Richard J. Roberts. "We need to keep a reproducible trace of facts, predictions, and hypotheses from gene to function in the era of big data". PLOS Biology 18, no. 11 (November 30, 2020): e3000999. http://dx.doi.org/10.1371/journal.pbio.3000999.

Abstract:
How do we scale biological science to the demand of next generation biology and medicine to keep track of the facts, predictions, and hypotheses? These days, enormous amounts of DNA sequence and other omics data are generated. Since these data contain the blueprint for life, it is imperative that we interpret it accurately. The abundance of DNA is only one part of the challenge. Artificial Intelligence (AI) and network methods routinely build on large screens, single cell technologies, proteomics, and other modalities to infer or predict biological functions and phenotypes associated with proteins, pathways, and organisms. As a first step, how do we systematically trace the provenance of knowledge from experimental ground truth to gene function predictions and annotations? Here, we review the main challenges in tracking the evolution of biological knowledge and propose several specific solutions to provenance and computational tracing of evidence in functional linkage networks.
13

Willoughby, Cerys, and Jeremy G. Frey. "Documentation and Visualisation of Workflows for Effective Communication, Collaboration and Publication @ Source". International Journal of Digital Curation 12, no. 1 (September 16, 2017): 72–87. http://dx.doi.org/10.2218/ijdc.v12i1.532.

Abstract:
Workflows processing data from research activities and driving in silico experiments are becoming an increasingly important method for conducting scientific research. Workflows have the advantage that not only can they be automated and used to process data repeatedly, but they can also be reused – in part or whole – enabling them to be evolved for use in new experiments. A number of studies have investigated strategies for storing and sharing workflows for the benefit of reuse. These have revealed that simply storing workflows in repositories without additional context does not enable workflows to be successfully reused. These studies have investigated what additional resources are needed to facilitate users of workflows and in particular to add provenance traces and to make workflows and their resources machine-readable. These additions also include adding metadata for curation, annotations for comprehension, and including data sets to provide additional context to the workflow. Ultimately though, these mechanisms still rely on researchers having access to the software to view and run the workflows. We argue that there are situations where researchers may want to understand a workflow that goes beyond what provenance traces provide and without having to run the workflow directly; there are many situations in which it can be difficult or impossible to run the original workflow. To that end, we have investigated the creation of an interactive workflow visualization that captures the flow chart element of the workflow with additional context including annotations, descriptions, parameters, metadata and input, intermediate, and results data that can be added to the record of a workflow experiment to enhance both curation and add value to enable reuse. We have created interactive workflow visualisations for the popular workflow creation tool KNIME, which does not provide users with an in-built function to extract provenance information that can otherwise only be viewed through the tool itself. Making use of the strengths of KNIME for adding documentation and user-defined metadata we can extract and create a visualisation and curation package that encourages and enhances curation@source, facilitating effective communication, collaboration, and reuse of workflows.
14

Edgington, John. "Annotations in copies of Thomas Johnson's Mercurius botanicus (1634) and Mercurii botanici, pars altera (1641): authorship and provenance". Archives of Natural History 43, no. 2 (October 2016): 208–20. http://dx.doi.org/10.3366/anh.2016.0379.

Abstract:
By an analysis of extensive and detailed annotations in copies of Thomas Johnson's Mercurius botanicus (1634) and Mercurii botanici, pars altera (1641) held in the library of the Royal Botanic Gardens, Kew, the probable author is identified as William Bincks, an apprentice apothecary of Kingston-upon-Thames, Surrey. Through Elias Ashmole, a friend of Bincks' master Thomas Agar, a link is established with the probable original owner, John Watlington of Reading, botanist and apothecary, and colleague of Thomas Johnson. The route by which the book ended up in the hands of Thomas Wilson, a journeyman copyist of Leeds, is suggested. Plants growing near Kingston-upon-Thames in the late seventeenth century, recorded in manuscript, are noted, many being first records for the county of Surrey.
15

Kalczyńska, Maria, Agnieszka Łakomy-Chłosta, and Milena J. Jędrzejewska. "Materiały źródłowe do katalogu wydawnictw migracyjnych, opracowane na podstawie kolekcji Stowarzyszenia Ochrony Poloników Niemieckich w Opolu. Cz. 2 Polonika emigracyjne". Z Badań nad Książką i Księgozbiorami Historycznymi 11 (December 29, 2017): 429–51. http://dx.doi.org/10.33077/uw.25448730.zbkh.2017.47.

Abstract:
This catalog contains 117 bibliographic entries elaborated personally by the authors. The unit includes: name and surname of the author, title, designation of issue, place of publication, publisher’s name, release date and information about the physical characteristics of publication. The data acquired from outside the book are derived from bibliographic sources and they are placed in square brackets. Descriptions are supplemented by annotations that indicate the number of publications in the collection of the „Stowarzyszenie Ochrony Poloników Niemieckich” (The Association of German Polonica Protection) from Opole, provenance signs occurring on individual copies of selected examples ‒ the content of the payload. The items are mostly arranged alphabetically by surname of the first author, and the anonymous works by title. Description of units are given according to its original spelling.
16

Abelló, Alberto, Jérôme Darmont, Lorena Etcheverry, Matteo Golfarelli, Jose-Norberto Mazón, Felix Naumann, Torben Pedersen et al. "Fusion Cubes". International Journal of Data Warehousing and Mining 9, no. 2 (April 2013): 66–88. http://dx.doi.org/10.4018/jdwm.2013040104.

Abstract:
Self-service business intelligence is about enabling non-expert users to make well-informed decisions by enriching the decision process with situational data, i.e., data that have a narrow focus on a specific business problem and, typically, a short lifespan for a small group of users. Often, these data are not owned and controlled by the decision maker; their search, extraction, integration, and storage for reuse or sharing should be accomplished by decision makers without any intervention by designers or programmers. The goal of this paper is to present the framework we envision to support self-service business intelligence and the related research challenges; the underlying core idea is the notion of fusion cubes, i.e., multidimensional cubes that can be dynamically extended both in their schema and their instances, and in which situational data and metadata are associated with quality and provenance annotations.
17

Scherp, Ansgar, Daniel Eißing, and Carsten Saathoff. "A Method for Integrating Multimedia Metadata Standards and Metadata Formats with the Multimedia Metadata Ontology". International Journal of Semantic Computing 6, no. 1 (March 2012): 25–49. http://dx.doi.org/10.1142/s1793351x12400028.

Abstract:
The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is in particular useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experiences made during the integration process.
18

Skevakis, Giannis, Chrisa Tsinaraki, Ioanna Trochatou, and Stavros Christodoulakis. "A crowdsourcing framework for the management of mobile multimedia nature observations". International Journal of Pervasive Computing and Communications 10, no. 3 (August 26, 2014): 216–38. http://dx.doi.org/10.1108/ijpcc-06-2014-0038.

Abstract:
Purpose – This paper aims to describe MoM-NOCS, a Framework and a System that support communities with common interests in nature to capture and share multimedia observations of nature objects or events using mobile devices. Design/methodology/approach – The observations are automatically associated with contextual metadata that allow them to be visualized on top of 2D or 3D maps. The observations are managed by a multimedia management system, and annotated by the same and/or other users with common interests. Annotations made by the crowd support the knowledge distillation of the data and data provenance processes in the system. Findings – MoM-NOCS is complementary and interoperable with systems that are managed by natural history museums like MMAT (Makris et al., 2013) and biodiversity metadata management systems like BIOCASE (BioCASE) and GBIF (GBIF) so that they can link to interesting observations in the system, and the statistics of the observations that they manage can be visualized by the software. Originality/value – The Framework offers rich functionality for visualizing the observations made by the crowd as function of time.
19

Yue, Zongliang, Christopher D. Willey, Anita B. Hjelmeland, and Jake Y. Chen. "BEERE: a web server for biomedical entity expansion, ranking and explorations". Nucleic Acids Research 47, W1 (May 22, 2019): W578–W586. http://dx.doi.org/10.1093/nar/gkz428.

Abstract:
Abstract BEERE (Biomedical Entity Expansion, Ranking and Explorations) is a new web-based data analysis tool to help biomedical researchers characterize any input list of genes/proteins, biomedical terms or their combinations, i.e. ‘biomedical entities’, in the context of existing literature. Specifically, BEERE first aims to help users examine the credibility of known entity-to-entity associative or semantic relationships supported by database or literature references from the user input of a gene/term list. Then, it will help users uncover the relative importance of each entity—a gene or a term—within the user input by computing the ranking scores of all entities. At last, it will help users hypothesize new gene functions or genotype–phenotype associations by an interactive visual interface of constructed global entity relationship network. The output from BEERE includes: a list of the original entities matched with known relationships in databases; any expanded entities that may be generated from the analysis; the ranks and ranking scores reported with statistical significance for each entity; and an interactive graphical display of the gene or term network within data provenance annotations that link to external data sources. The web server is free and open to all users with no login requirement and can be accessed at http://discovery.informatics.uab.edu/beere/.
20

Gadhave, Kiran, Jochen Görtler, Zach Cutler, Carolina Nobre, Oliver Deussen, Miriah Meyer, Jeff M. Phillips, and Alexander Lex. "Predicting intent behind selections in scatterplot visualizations". Information Visualization 20, no. 4 (August 18, 2021): 207–28. http://dx.doi.org/10.1177/14738716211038604.

Abstract:
Predicting and capturing an analyst’s intent behind a selection in a data visualization is valuable in two scenarios: First, a successful prediction of a pattern an analyst intended to select can be used to auto-complete a partial selection which, in turn, can improve the correctness of the selection. Second, knowing the intent behind a selection can be used to improve recall and reproducibility. In this paper, we introduce methods to infer analyst’s intents behind selections in data visualizations, such as scatterplots. We describe intents based on patterns in the data, and identify algorithms that can capture these patterns. Upon an interactive selection, we compare the selected items with the results of a large set of computed patterns, and use various ranking approaches to identify the best pattern for an analyst’s selection. We store annotations and the metadata to reconstruct a selection, such as the type of algorithm and its parameterization, in a provenance graph. We present a prototype system that implements these methods for tabular data and scatterplots. Analysts can select a prediction to auto-complete partial selections and to seamlessly log their intents. We discuss implications of our approach for reproducibility and reuse of analysis workflows. We evaluate our approach in a crowd-sourced study, where we show that auto-completing selection improves accuracy, and that we can accurately capture pattern-based intent.
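As a rough illustration of ranking precomputed patterns against a partial selection, the sketch below scores candidate point sets with Jaccard similarity and proposes the best match for auto-completion. The pattern names, point IDs, and the choice of Jaccard as the ranking function are assumptions made for this example; the paper evaluates its own set of ranking approaches.

```python
def jaccard(a, b):
    """Similarity between the analyst's selection and a precomputed pattern."""
    return len(a & b) / len(a | b) if a | b else 0.0

selection = {3, 4, 7}                 # points the analyst has brushed so far
patterns = {                          # patterns detected in the scatterplot
    "cluster_low_x": {1, 2, 5},
    "cluster_high_y": {3, 4, 7, 9},
    "x_outliers": {7, 12},
}

ranked = sorted(patterns.items(), key=lambda item: jaccard(selection, item[1]), reverse=True)
best_name, best_points = ranked[0]
print(best_name, best_points - selection)   # cluster_high_y {9} -> candidate auto-completion
```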
21

Hobern, Donald, Andrea Hahn, and Tim Robertson. "Options to streamline and enrich biodiversity data aggregation". Biodiversity Information Science and Standards 2 (May 21, 2018): e26808. http://dx.doi.org/10.3897/biss.2.26808.

Abstract:
The success of Darwin Core and ABCD Schema as flexible standards for sharing specimen data and species occurrence records has enabled GBIF to aggregate around one billion data records. At the same time, other thematic, national or regional aggregators have developed a wide range of other data indexes and portals, many of which enrich the data by interpreting and normalising elements not currently handled by GBIF or by linking other data from geospatial layers, trait databases, etc. Unfortunately, although each of these aggregators has specific strengths and supports particular audiences, this diversification produces many weaknesses and deficiencies for data publishers and for data users, including: incomplete and inconsistent inclusion of relevant datasets; proliferation of record identifiers; inconsistent and bespoke workflows to interpret and standardise data; absence of any shared basis for linked open data and annotations; divergent data formats and APIs; lack of clarity around provenance and impact; etc. The time is ripe for the global community to review these processes. From a technical standpoint, it would be feasible to develop a shared, integrated pipeline which harvested, validated and normalised all relevant biodiversity data records on behalf of all stakeholders. Such a system could build on TDWG expertise to standardise data checks and all stages in data transformation. It could incorporate a modular structure that allowed thematic, national or regional networks to generate additional data elements appropriate to the needs of their users, but for all of these elements to remain part of a single record with a single identifier, facilitating a much more rigorous approach to linked open data. Most of the other issues we currently face around fitness-for-use, predictability and repeatability, transparency and provenance could be supported much more readily under such a model. The key challenges that would need to be overcome would be around social factors, particularly to deliver a flexible and appropriate governance model and to allow research networks, national agencies, etc. to embed modular components within a shared workflow. Given the urgent need to improve data management to support Essential Biodiversity Variables and to deliver an effective global virtual natural history collection, we should review these challenges and seek to establish a data management and aggregation architecture that will support us for the coming decades.
22

Kogay, Roman, Taylor B. Neely, Daniel P. Birnbaum, Camille R. Hankel, Migun Shakya, and Olga Zhaxybayeva. "Machine-Learning Classification Suggests That Many Alphaproteobacterial Prophages May Instead Be Gene Transfer Agents". Genome Biology and Evolution 11, no. 10 (September 27, 2019): 2941–53. http://dx.doi.org/10.1093/gbe/evz206.

Abstract:
Abstract Many of the sequenced bacterial and archaeal genomes encode regions of viral provenance. Yet, not all of these regions encode bona fide viruses. Gene transfer agents (GTAs) are thought to be former viruses that are now maintained in genomes of some bacteria and archaea and are hypothesized to enable exchange of DNA within bacterial populations. In Alphaproteobacteria, genes homologous to the “head–tail” gene cluster that encodes structural components of the Rhodobacter capsulatus GTA (RcGTA) are found in many taxa, even if they are only distantly related to Rhodobacter capsulatus. Yet, in most genomes available in GenBank RcGTA-like genes have annotations of typical viral proteins, and therefore are not easily distinguished from their viral homologs without additional analyses. Here, we report a “support vector machine” classifier that quickly and accurately distinguishes RcGTA-like genes from their viral homologs by capturing the differences in the amino acid composition of the encoded proteins. Our open-source classifier is implemented in Python and can be used to scan homologs of the RcGTA genes in newly sequenced genomes. The classifier can also be trained to identify other types of GTAs, or even to detect other elements of viral ancestry. Using the classifier trained on a manually curated set of homologous viruses and GTAs, we detected RcGTA-like “head–tail” gene clusters in 57.5% of the 1,423 examined alphaproteobacterial genomes. We also demonstrated that more than half of the in silico prophage predictions are instead likely to be GTAs, suggesting that in many alphaproteobacterial genomes the RcGTA-like elements remain unrecognized.
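To make the classification idea concrete, here is a hedged scikit-learn sketch that represents each protein by its 20-dimensional amino-acid composition and trains a support vector machine on toy labels. The sequences and labels are fabricated and the features are simplified; this is not the authors' published classifier.

```python
from collections import Counter
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """Fractional amino-acid composition of a protein sequence (20 features)."""
    counts = Counter(sequence.upper())
    length = max(len(sequence), 1)
    return [counts.get(aa, 0) / length for aa in AMINO_ACIDS]

# Fabricated training examples; a real model would use curated GTA and viral homologs.
train_sequences = ["MKKLLAVAGGK", "MSTAGGKKLLA", "MVVLIPRTDEW", "MQQNNDERWYC"]
train_labels = ["gta", "gta", "virus", "virus"]

classifier = SVC(kernel="rbf", gamma="scale")
classifier.fit([aa_composition(s) for s in train_sequences], train_labels)
print(classifier.predict([aa_composition("MKTLLAGAKKG")]))
```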
23

Márton, Szabolcs. "Tapping Into Unknown Musical Areas Analysis of a Medieval Bohemian Musical Manuscript". Studia Universitatis Babeş-Bolyai Musica 68, Sp.Iss. 2 (August 10, 2023): 233–53. http://dx.doi.org/10.24193/subbmusica.2023.spiss2.15.

Abstract:
"This research presents a medieval musical manuscript that has not yet been analyzed in detail. Catalogued under the name of Graduale Latino-Bohemicum, and currently held in the Batthyaneum Library of Alba Iulia, it has many peculiarities in comparison with other similar codices from the Transylvanian area, hence also compared with other Czech manuscripts. We offer analysis around the date of its creation, then debate different naming options. To create the proper context of understanding for the analysis, we present a brief historical background of the time and place in question, that is the turbulent 15th and 16th century of Europe, with special focus on Transylvania. We continue with the physical aspects of the manuscript that guide us through the colorful world of medieval codices. From a structural standpoint the work has two delimited parts. The bilingual manuscript starts with chants written in Czech and finishes with melodies in Latin. The existence of the Czech language, as well as many other clues govern us to set up hypotheses regarding its provenance. During the content analysis we dedicate a subchapter to the later page inserts that contain additional notes for the chants, wherefrom we can further conclude theories about the usage of the codex, authors of the later annotations, and so forth. We offer a more in-depth analysis of the musical notation where aspects like rhythm, staff, neumes used and special solutions are shown. Finally, we conclude all major, raised questions related to the name, origin, and genre. Keywords: Graduale Latino-Bohemicum, musical manuscript, codex, medieval, paleography, Gradual, Cancional, Antifonal, Hussite, Czech, Latin, Gregorian, unison, polyphony."
24

Groth, D. P., and K. Streefkerk. "Provenance and Annotation for Visual Exploration Systems". IEEE Transactions on Visualization and Computer Graphics 12, no. 6 (November 2006): 1500–1510. http://dx.doi.org/10.1109/tvcg.2006.101.
25

Alper, Pinar, Khalid Belhajjame, Vasa Curcin, and Carole Goble. "LabelFlow Framework for Annotating Workflow Provenance". Informatics 5, no. 1 (February 23, 2018): 11. http://dx.doi.org/10.3390/informatics5010011.
26

Bose, Rajendra, Ian Foster, and Luc Moreau. "Report on the International Provenance and Annotation Workshop". ACM SIGMOD Record 35, no. 3 (September 2006): 51–53. http://dx.doi.org/10.1145/1168092.1168102.
27

Livingston, Kevin M., Michael Bada, Lawrence E. Hunter, and Karin Verspoor. "Representing annotation compositionality and provenance for the Semantic Web". Journal of Biomedical Semantics 4, no. 1 (2013): 38. http://dx.doi.org/10.1186/2041-1480-4-38.
28

Mazur, O. P. "Book treasures: collections of the sector of rare books and manuscripts of the Scientific Library of National Pirogov Memorial Medical University, Vinnytsya". Reports of Vinnytsia National Medical University 26, no. 1 (March 28, 2022): 165–69. http://dx.doi.org/10.31393/reports-vnmedical-2022-26(1)-30.

Abstract:
Annotation. The article describes the book collections of doctors and scientists from the holdings of the Scientific Library of National Pirogov Memorial Medical University, Vinnytsya, and covers their provenance.
29

Spracklan-Holl, Hannah. "Noblewomen’s Devotional Song Practice in ‘Patiençe veinque tout’ (1647–1655), a German Manuscript from the Mid-seventeenth Century". Context, no. 48 (January 31, 2023): 35–51. http://dx.doi.org/10.46580/cx78832.

Abstract:
"Devotional song in the German vernacular was a large repertory in the seventeenth century; as Robert Kendrick points out, the more than two thousand printed collections of these songs produced at this time attest to a relatively musically literate public that engaged with the repertoire. Most published devotional songs appeared in collections, usually consisting of a foreword or dedications, other poetry, and songs. The songs themselves often appeared without printed musical notation, indicating the use of contrafactum. Both men and women contributed to the German devotional song repertoire; however, there is a notable number of original song texts written by women. The 1703 publication Glauben-schallende und Himmel-steigende Herzens-Music, for example, contains 1,052 devotional songs, of which 211 have texts written by women. Women’s performance of devotional texts—whether by singing, recitation, or reading—was a practice that demonstrated their deep internalisation of the text itself whilst also providing a socially acceptable means of self-expression. This article focuses on a mid-seventeenth century manuscript songbook compiled by Duchess Sophie Elisabeth of Braunschweig-Lüneburg (1613–1676). It suggests that at least three songs in the manuscript have poetry and/or music written by noblewomen other than the duchess, including Juliane of Oldenburg-Delmenhorst (1615–1691) and Maria Magdalena of Waldeck-Wildungen (1606–1671). This suggestion is based on two notable features of Sophie Elisabeth’s meticulous recording practices in ‘Patiençe veinque tout’ that are missing from the songs in question. First, the authorship of these three songs is ambiguous, whereas the text and music for all other songs in the source are either made clear by Sophie Elisabeth’s own annotations accompanying each song, or are easily traceable. Second, none of the three songs include the duchess’s monogram, a date of composition, nor any other note stating her authorship (one or more of these notations are included for the songs in ‘Patiençe veinque tout’ that are of her own creation). In investigating the provenance of these songs, this article highlights the fact that women’s original texts, which are often overlooked, form a sizeable and significant body of musical literature from the early modern period. The two complementary practices of writing and performance paint an intimate portrait of women’s confessional and personal identity and the role music played in forming this identity, while also reflecting broader cross-confessional trends towards spiritual interiority and personal piety in the seventeenth century."
30

Pinheiro, Antonio. "JPEG Column: 95th JPEG Meeting". ACM SIGMultimedia Records 14, no. 2 (June 2022): 1. http://dx.doi.org/10.1145/3630653.3630657.

Abstract:
The 95th JPEG meeting was held online from 25 to 29 April 2022. A Call for Proposals (CfP) was issued for JPEG Fake Media, which aims at a standardisation framework for secure annotation of modifications in media assets. With this new initiative, JPEG endeavours to provide standardised means for the identification of the provenance of media assets that include imaging information. Assuring the provenance of the coded information is essential considering the current trends and possibilities in multimedia technology.
31

Zhao, Jing, Viktor K. Prasanna, and Karthik Gomadam. "A Semantic-Based Approach for Handling Incomplete and Inaccurate Provenance in Reservoir Engineering". International Journal of Semantic Computing 5, no. 4 (December 2011): 383–406. http://dx.doi.org/10.1142/s1793351x11001304.

Abstract:
Provenance is becoming an important issue as a reliable estimator of data quality. However, provenance collection mechanisms in the reservoir engineering domain often result in incomplete provenance information. In this paper, we address the problem of predicting missing provenance information in reservoir engineering. Based on the observation that data items with specific semantic “connections” may share the same provenance, our approach annotates data items with domain entities defined in a domain ontology, and represent these “connections” as sequences of relationships (also known as semantic associations) in the ontology graph. By analyzing annotated historical datasets with complete provenance information, we capture semantic associations that may imply identical provenance. A statistical analysis is applied to assign probability values to the discovered associations, which indicate the confidence of each association when it is used for future provenance prediction. We develop a voting algorithm which utilizes the semantic associations and their confidence measures to predict the missing provenance information. Because the existing provenance information can be incorrect due to errors during the manual provenance annotation procedure, as an extension of the voting algorithm, we further design an algorithm for prediction which takes into account both the confidence measures of semantic associations and the accuracy of the existing provenance. A probability value is calculated as the trust of each prediction result. We develop the ProPSA (Provenance Prediction based on Semantic Associations) system which uses our proposed approaches to handle incomplete and inaccurate provenance information in reservoir engineering. Our evaluation shows that the average precision of our approach is above 85% when one-third of the provenance information is missing.
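The voting idea can be sketched as a confidence-weighted tally over the provenance values suggested by different semantic associations; the candidate with the highest total wins, and its normalized score serves as a trust estimate. All names and numbers below are hypothetical, and the scheme is deliberately simplified relative to ProPSA.

```python
from collections import defaultdict

def predict_missing_provenance(votes):
    """votes: iterable of (candidate_provenance, association_confidence) pairs."""
    scores = defaultdict(float)
    for candidate, confidence in votes:
        scores[candidate] += confidence
    best = max(scores, key=scores.get)
    trust = scores[best] / sum(scores.values())   # normalized score as a trust value
    return best, round(trust, 3)

# Hypothetical example: three semantic associations vote on the missing source
# of a reservoir-simulation dataset, each weighted by its learned confidence.
votes = [("well_log_2019", 0.9), ("well_log_2019", 0.7), ("sim_run_42", 0.6)]
print(predict_missing_provenance(votes))   # ('well_log_2019', 0.727)
```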
32

Zhang, Yidan, Barrett Ens, Kadek Ananta Satriadi, Ying Yang, and Sarah Goodwin. "Embodied Provenance for Immersive Sensemaking". Proceedings of the ACM on Human-Computer Interaction 7, ISS (October 31, 2023): 198–216. http://dx.doi.org/10.1145/3626471.

Abstract:
Immersive analytics research has explored how embodied data representations and interactions can be used to engage users in sensemaking. Prior research has broadly overlooked the potential of immersive space for supporting analytic provenance, the understanding of sensemaking processes through users’ interaction histories. We propose the concept of embodied provenance, the use of three-dimensional space and embodied interactions in supporting recalling, reproducing, annotating and sharing analysis history in immersive environments. We design a conceptual framework for embodied provenance by highlighting a set of design criteria for analytic provenance drawn from prior work and identifying essential properties for embodied provenance. We develop a prototype system in virtual reality to demonstrate the concept and support the conceptual framework by providing multiple data views and embodied interaction metaphors in a large virtual space. We present a use case scenario of energy consumption analysis and evaluated the system through a qualitative evaluation with 17 participants, which show the system’s potential for assisting analytic provenance using embodiment. Our exploration of embodied provenance through this prototype provides lessons learnt to guide the design of immersive analytic tools for embodied provenance.
33

McGuffie, Matthew J., and Jeffrey E. Barrick. "pLannotate: engineered plasmid annotation". Nucleic Acids Research 49, W1 (May 21, 2021): W516–W522. http://dx.doi.org/10.1093/nar/gkab374.

Abstract:
Abstract Engineered plasmids are widely used in the biological sciences. Since many plasmids contain DNA sequences that have been reused and remixed by researchers for decades, annotation of their functional elements is often incomplete. Missing information about the presence, location, or precise identity of a plasmid feature can lead to unintended consequences or failed experiments. Many engineered plasmids contain sequences—such as recombinant DNA from all domains of life, wholly synthetic DNA sequences, and engineered gene expression elements—that are not predicted by microbial genome annotation pipelines. Existing plasmid annotation tools have limited feature libraries and do not detect incomplete fragments of features that are present in many plasmids for historical reasons and may impact their newly designed functions. We created the open source pLannotate web server so users can quickly and comprehensively annotate plasmid features. pLannotate is powered by large databases of genetic parts and proteins. It employs a filtering algorithm to display only the most relevant feature matches and also reports feature fragments. Finally, pLannotate displays a graphical map of the annotated plasmid, explains the provenance of each feature prediction, and allows results to be downloaded in a variety of formats. The webserver for pLannotate is accessible at: http://plannotate.barricklab.org/
34

Mondelli, Maria Luiza, Thiago Magalhães, Guilherme Loss, Michael Wilde, Ian Foster, Marta Mattoso, Daniel Katz et al. "BioWorkbench: a high-performance framework for managing and analyzing bioinformatics experiments". PeerJ 6 (August 29, 2018): e5551. http://dx.doi.org/10.7717/peerj.5551.

Abstract:
Advances in sequencing techniques have led to exponential growth in biological data, demanding the development of large-scale bioinformatics experiments. Because these experiments are computation- and data-intensive, they require high-performance computing techniques and can benefit from specialized technologies such as Scientific Workflow Management Systems and databases. In this work, we present BioWorkbench, a framework for managing and analyzing bioinformatics experiments. This framework automatically collects provenance data, including both performance data from workflow execution and data from the scientific domain of the workflow application. Provenance data can be analyzed through a web application that abstracts a set of queries to the provenance database, simplifying access to provenance information. We evaluate BioWorkbench using three case studies: SwiftPhylo, a phylogenetic tree assembly workflow; SwiftGECKO, a comparative genomics workflow; and RASflow, a RASopathy analysis workflow. We analyze each workflow from both computational and scientific domain perspectives, by using queries to a provenance and annotation database. Some of these queries are available as a pre-built feature of the BioWorkbench web application. Through the provenance data, we show that the framework is scalable and achieves high-performance, reducing up to 98% of the case studies execution time. We also show how the application of machine learning techniques can enrich the analysis process.
35

Barhamgi, Mahmoud, and Elisa Bertino. "Editorial: Special Issue on Data Transparency—Data Quality, Annotation, and Provenance". Journal of Data and Information Quality 14, no. 1 (March 31, 2022): 1–3. http://dx.doi.org/10.1145/3494454.
36

Wu, P. H. J., A. K. H. Heok, and I. P. Tamsir. "Annotating Web archives—structure, provenance, and context through archival cataloguing". New Review of Hypermedia and Multimedia 13, no. 1 (July 2007): 55–75. http://dx.doi.org/10.1080/13614560701423620.
37

Li, Jinyang, Yuval Moskovitch, Julia Stoyanovich, and H. V. Jagadish. "Query Refinement for Diversity Constraint Satisfaction". Proceedings of the VLDB Endowment 17, no. 2 (October 2023): 106–18. http://dx.doi.org/10.14778/3626292.3626295.

Abstract:
Diversity, group representation, and similar needs often apply to query results, which in turn require constraints on the sizes of various subgroups in the result set. Traditional relational queries only specify conditions as part of the query predicate(s), and do not support such restrictions on the output. In this paper, we study the problem of modifying queries to have the result satisfy constraints on the sizes of multiple subgroups in it. This problem, in the worst case, cannot be solved in polynomial time. Yet, with the help of provenance annotation, we are able to develop a query refinement method that works quite efficiently, as we demonstrate through extensive experiments.
38

Bell, Michael J., Matthew Collison, and Phillip Lord. "Can Inferred Provenance and Its Visualisation Be Used to Detect Erroneous Annotation? A Case Study Using UniProtKB". PLoS ONE 8, no. 10 (October 15, 2013): e75541. http://dx.doi.org/10.1371/journal.pone.0075541.
39

Li, Xinman, Min Jiang, Junjie Ren, Zhaohua Liu, Wanying Zhang, Guifen Li, Jinmao Wang, and Minsheng Yang. "Transcriptomic Determination of the Core Genes Regulating the Growth and Physiological Traits of Quercus mongolica Fisch. ex Ledeb". Forests 14, no. 7 (June 26, 2023): 1313. http://dx.doi.org/10.3390/f14071313.

Abstract:
Quercus mongolica is a multipurpose forest species of high economic value that also plays an important role in the maintenance and protection of its environment. Consistent with the wide geographical distribution of Q. mongolica, differences in the growth and physiological traits of populations of different provenances have been identified. In this study, the molecular basis for these differences was investigated by examining the growth, physiological traits, and gene expression of Q. mongolica seedlings from six provenances in northern China. The results showed that there were significant differences in growth and physiological traits, except for the ground diameter (p < 0.05), and identified abscisic acid (ABA), indole-3-acetic acid (IAA), and soluble sugar contents as important physiological traits that distinguish Q. mongolica of different provenances. The transcriptome analysis showed that the largest difference in the total number of differentially expressed genes (DEGs) was between trees from Jilin and Shandong (6918), and the smallest difference was between trees from Heilongjiang and Liaoning (1325). The DEGs were concentrated mainly in the Gene Ontology entries of metabolic process, catalytic activity, and cell, and in the Kyoto Encyclopedia of Genes and Genomes metabolic pathways of carbohydrate metabolism, biosynthesis of other secondary metabolites, signal transduction, and environmental adaptation. These assignments indicated that Q. mongolica populations of different provenances adapt to changes in climate and environment by regulating important physiological, biochemical, and metabolic processes. A weighted gene co-expression network analysis revealed highly significant correlations of the darkmagenta, grey60, turquoise, and plum1 modules with ABA content, IAA content, soluble sugar content, and soluble protein content, respectively. The co-expression network also indicated key roles for genes related to the stress response (SDH, WAK5, APA1), metabolic processes (UGT76A2, HTH, At5g42100, PEX11C), signal transduction (INPS1, HSD1), and chloroplast biosynthesis (CAB13, PTAC16, PNSB5). Functional annotation of these core genes implies that Q. mongolica can adapt to different environments by regulating photosynthesis, plant hormone signal transduction, the stress response, and other key physiological and biochemical processes. Our results provide insight into the adaptability of plants to different environments.
40

Spasic, Irena, and Goran Nenadic. "Clinical Text Data in Machine Learning: Systematic Review". JMIR Medical Informatics 8, no. 3 (March 31, 2020): e17984. http://dx.doi.org/10.2196/17984.

Abstract:
Background Clinical narratives represent the main form of communication within health care, providing a personalized account of patient history and assessments, and offering rich information for clinical decision making. Natural language processing (NLP) has repeatedly demonstrated its feasibility to unlock evidence buried in clinical narratives. Machine learning can facilitate rapid development of NLP tools by leveraging large amounts of text data. Objective The main aim of this study was to provide systematic evidence on the properties of text data used to train machine learning approaches to clinical NLP. We also investigated the types of NLP tasks that have been supported by machine learning and how they can be applied in clinical practice. Methods Our methodology was based on the guidelines for performing systematic reviews. In August 2018, we used PubMed, a multifaceted interface, to perform a literature search against MEDLINE. We identified 110 relevant studies and extracted information about text data used to support machine learning, NLP tasks supported, and their clinical applications. The data properties considered included their size, provenance, collection methods, annotation, and any relevant statistics. Results The majority of datasets used to train machine learning models included only hundreds or thousands of documents. Only 10 studies used tens of thousands of documents, with a handful of studies utilizing more. Relatively small datasets were utilized for training even when much larger datasets were available. The main reason for such poor data utilization is the annotation bottleneck faced by supervised machine learning algorithms. Active learning was explored to iteratively sample a subset of data for manual annotation as a strategy for minimizing the annotation effort while maximizing the predictive performance of the model. Supervised learning was successfully used where clinical codes integrated with free-text notes into electronic health records were utilized as class labels. Similarly, distant supervision was used to utilize an existing knowledge base to automatically annotate raw text. Where manual annotation was unavoidable, crowdsourcing was explored, but it remains unsuitable because of the sensitive nature of data considered. Besides the small volume, training data were typically sourced from a small number of institutions, thus offering no hard evidence about the transferability of machine learning models. The majority of studies focused on text classification. Most commonly, the classification results were used to support phenotyping, prognosis, care improvement, resource management, and surveillance. Conclusions We identified the data annotation bottleneck as one of the key obstacles to machine learning approaches in clinical NLP. Active learning and distant supervision were explored as a way of saving the annotation efforts. Future research in this field would benefit from alternatives such as data augmentation and transfer learning, or unsupervised learning, which do not require data annotation.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Angier, Kate. "‘Reading’ Textbooks". Annals of Social Studies Education Research for Teachers 3, n.º 2 (19 de agosto de 2022): 37–47. http://dx.doi.org/10.29173/assert44.

Texto completo da fonte
Resumo:
This paper demonstrates how history textbooks can be used in high school classrooms as ‘primary’ as well as ‘secondary’ sources, to develop learners as critical and curious readers of history. History textbooks, like any other historical account, are a form of discourse that presents a selected and ideologically constructed interpretation of the past; however, school learners tend to view them uncritically as 'the truth'. Simple strategies of ‘annotation and tabulation’ provide scaffolding that enables learners to deconstruct the textbook extracts (literally and figuratively) and identify the similarities and differences between accounts given of the same event. This in turn makes the ideological construction of school textbooks and the authorial positionality of the writers more visible, encouraging learners to ask questions about their provenance and purpose. The classroom activities described in this article encourage learners to consider the effect and affect of telling the stories of the past in different ways, and help them to develop their disciplinary skills of reading and thinking like a historian.
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Dumontier, Michel, Alasdair J. G. Gray, M. Scott Marshall, Vladimir Alexiev, Peter Ansell, Gary Bader, Joachim Baran et al. "The health care and life sciences community profile for dataset descriptions". PeerJ 4 (16 de agosto de 2016): e2331. http://dx.doi.org/10.7717/peerj.2331.

Texto completo da fonte
Resumo:
Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting guideline covers elements of description, identification, attribution, versioning, provenance, and content summarization. This guideline reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets.
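The profile described above builds on existing RDF vocabularies. As a rough illustration of the general approach (not the HCLS profile itself), the sketch below uses rdflib with DCAT, DCTERMS, and PAV terms to emit a small dataset description; the dataset IRI and all values are invented.

# Sketch of a machine-readable dataset description using rdflib.
# Vocabulary choices (DCTERMS, DCAT, PAV) follow the spirit of the HCLS
# profile; the dataset IRI and values are purely illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
PAV = Namespace("http://purl.org/pav/")

g = Graph()
dataset = URIRef("http://example.org/dataset/my-biomedical-dataset/v1.0")

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Example biomedical dataset")))
g.add((dataset, DCTERMS.description, Literal("Illustrative description only.")))
g.add((dataset, DCTERMS.license, URIRef("http://creativecommons.org/licenses/by/4.0/")))
g.add((dataset, PAV.version, Literal("1.0")))
g.add((dataset, PAV.createdBy, URIRef("http://example.org/people/some-curator")))

print(g.serialize(format="turtle"))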
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

McGuffie, Matthew J., e Jeffrey E. Barrick. "Identifying widespread and recurrent variants of genetic parts to improve annotation of engineered DNA sequences". PLOS ONE 19, n.º 5 (28 de maio de 2024): e0304164. http://dx.doi.org/10.1371/journal.pone.0304164.

Texto completo da fonte
Resumo:
Engineered plasmids have been workhorses of recombinant DNA technology for nearly half a century. Plasmids are used to clone DNA sequences encoding new genetic parts and to reprogram cells by combining these parts in new ways. Historically, many genetic parts on plasmids were copied and reused without routinely checking their DNA sequences. With the widespread use of high-throughput DNA sequencing technologies, we now know that plasmids often contain variants of common genetic parts that differ slightly from their canonical sequences. Because the exact provenance of a genetic part on a particular plasmid is usually unknown, it is difficult to determine whether these differences arose due to mutations during plasmid construction and propagation or due to intentional editing by researchers. In either case, it is important to understand how the sequence changes alter the properties of the genetic part. We analyzed the sequences of over 50,000 engineered plasmids using depositor metadata and a metric inspired by the natural language processing field. We detected 217 uncatalogued genetic part variants that were especially widespread or were likely the result of convergent evolution or engineering. Several of these uncatalogued variants are known mutants of plasmid origins of replication or antibiotic resistance genes that are missing from current annotation databases. However, most are uncharacterized, and 3/5 of the plasmids we analyzed contained at least one of the uncatalogued variants. Our results include a list of genetic parts to prioritize for refining engineered plasmid annotation pipelines, highlight widespread variants of parts that warrant further investigation to see whether they have altered characteristics, and suggest cases where unintentional evolution of plasmid parts may be affecting the reliability and reproducibility of science.
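The core idea, counting how often a non-canonical part sequence recurs across plasmids from independent depositors, can be illustrated with a short sketch. This is not the authors' actual metric; the sequences, depositor labels, and threshold below are hypothetical.

# Sketch of the general idea only: flag part sequences that differ from the
# canonical sequence yet recur across plasmids from several independent
# depositors, which makes one-off construction errors less likely.
from collections import defaultdict

canonical = {"pUC_ori": "TTGAGATCCTTTTTTTCTGCGCG"}   # truncated, illustrative

# (plasmid_id, depositor, part_name, observed_sequence)
observations = [
    ("p001", "lab_A", "pUC_ori", "TTGAGATCCTTTTTTTCTGCGCG"),
    ("p002", "lab_B", "pUC_ori", "TTGAGATCCTTTTTTGCTGCGCG"),
    ("p003", "lab_C", "pUC_ori", "TTGAGATCCTTTTTTGCTGCGCG"),
    ("p004", "lab_D", "pUC_ori", "TTGAGATCCTTTTTTGCTGCGCG"),
]

depositors_per_variant = defaultdict(set)
for _plasmid, depositor, part, seq in observations:
    if seq != canonical[part]:
        depositors_per_variant[(part, seq)].add(depositor)

MIN_INDEPENDENT_DEPOSITORS = 3   # hypothetical cut-off
for (part, seq), labs in depositors_per_variant.items():
    if len(labs) >= MIN_INDEPENDENT_DEPOSITORS:
        print(f"Widespread uncatalogued variant of {part}: seen in {len(labs)} labs")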
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Gehlert, Finn O., Katrin Weidenbach, Brian Barüske, Daniela Hallack, Urska Repnik e Ruth A. Schmitz. "Newly Established Genetic System for Functional Analysis of MetSV". International Journal of Molecular Sciences 24, n.º 13 (6 de julho de 2023): 11163. http://dx.doi.org/10.3390/ijms241311163.

Texto completo da fonte
Resumo:
The linear chromosome of the Methanosarcina spherical virus with 10,567 bp exhibits 22 ORFs with mostly unknown functions. Annotation using common tools and databases predicted functions for a few genes like the type B DNA polymerase (MetSVORF07) or the small (MetSVORF15) and major (MetSVORF16) capsid proteins. For verification of assigned functions of additional ORFs, biochemical or genetic approaches were found to be essential. Consequently, we established a genetic system for MetSV by cloning its genome into the E. coli plasmid pCR-XL-2. Comparisons of candidate plasmids with the MetSV reference based on Nanopore sequencing revealed several mutations of yet unknown provenance with an impact on protein-coding sequences. Linear MetSV inserts were generated by BamHI restriction, purified and transformed in Methanosarcina mazei by an optimized liposome-mediated transformation protocol. Analysis of resulting MetSV virions by TEM imaging and infection experiments demonstrated no significant differences between plasmid-born viruses and native MetSV particles regarding their morphology or lytic behavior. The functionality of the genetic system was tested by the generation of a ΔMetSVORF09 mutant that was still infectious. Our genetic system of MetSV, the first functional system for a virus of methanoarchaea, now allows us to obtain deeper insights into MetSV protein functions and virus-host interactions.
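One downstream analysis step mentioned above, deciding whether observed variants affect protein-coding sequences, reduces to an interval check against the ORF annotation. The sketch below illustrates this with invented coordinates and variants, not actual MetSV data.

# Sketch of a single analysis step: checking which sequence variants fall
# inside annotated ORFs of a small viral genome. ORF coordinates and the
# variant list are invented for illustration.
orfs = {                       # name: (start, end), 1-based inclusive
    "MetSVORF07": (3010, 4980),
    "MetSVORF15": (8200, 8700),
    "MetSVORF16": (8750, 9900),
}
variants = [(3100, "A", "G"), (5000, "C", "T"), (8755, "G", "A")]  # (pos, ref, alt)

for pos, ref, alt in variants:
    hits = [name for name, (start, end) in orfs.items() if start <= pos <= end]
    impact = ", ".join(hits) if hits else "intergenic"
    print(f"{ref}{pos}{alt}: {impact}")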
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Bagheri, Hamid, Andrew J. Severin e Hridesh Rajan. "Detecting and correcting misclassified sequences in the large-scale public databases". Bioinformatics 36, n.º 18 (24 de junho de 2020): 4699–705. http://dx.doi.org/10.1093/bioinformatics/btaa586.

Texto completo da fonte
Resumo:
Abstract Motivation As the cost of sequencing decreases, the amount of data being deposited into public repositories is increasing rapidly. Public databases rely on the user to provide metadata for each submission that is prone to user error. Unfortunately, most public databases, such as non-redundant (NR), rely on user input and do not have methods for identifying errors in the provided metadata, leading to the potential for error propagation. Previous research on a small subset of the NR database analyzed misclassification based on sequence similarity. To the best of our knowledge, the amount of misclassification in the entire database has not been quantified. We propose a heuristic method to detect potentially misclassified taxonomic assignments in the NR database. We applied a curation technique and quality control to find the most probable taxonomic assignment. Our method incorporates provenance and frequency of each annotation from manually and computationally created databases and clustering information at 95% similarity. Results We found more than two million potentially taxonomically misclassified proteins in the NR database. Using simulated data, we show a high precision of 97% and a recall of 87% for detecting taxonomically misclassified proteins. The proposed approach and findings could also be applied to other databases. Availability and implementation Source code, dataset, documentation, Jupyter notebooks and Docker container are available at https://github.com/boalang/nr. Supplementary information Supplementary data are available at Bioinformatics online.
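The heuristic described above can be approximated in outline: within a cluster of highly similar sequences, weight each taxonomic annotation by its frequency and provenance, then flag members that disagree with the consensus. The sketch below is only that outline; the provenance weights and example records are hypothetical.

# Sketch of the general idea: within a cluster of highly similar sequences
# (e.g. 95% identity), weight each taxonomic annotation by how often it
# occurs and by its provenance (manually curated sources counted higher),
# then flag members that disagree with the consensus. Weights are hypothetical.
from collections import Counter

PROVENANCE_WEIGHT = {"swissprot": 3, "refseq": 2, "computational": 1}

# (sequence_id, assigned_taxon, source_database) for one 95%-identity cluster
cluster = [
    ("seq1", "Escherichia coli", "swissprot"),
    ("seq2", "Escherichia coli", "refseq"),
    ("seq3", "Escherichia coli", "computational"),
    ("seq4", "Homo sapiens",     "computational"),   # suspicious member
]

votes = Counter()
for _seq_id, taxon, source in cluster:
    votes[taxon] += PROVENANCE_WEIGHT.get(source, 1)

consensus, _score = votes.most_common(1)[0]
for seq_id, taxon, _source in cluster:
    if taxon != consensus:
        print(f"{seq_id}: '{taxon}' disagrees with cluster consensus '{consensus}'")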
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Gonzalez-Gil, Pedro, Juan Antonio Martinez e Antonio F. Skarmeta. "Lightweight Data-Security Ontology for IoT". Sensors 20, n.º 3 (1 de fevereiro de 2020): 801. http://dx.doi.org/10.3390/s20030801.

Texto completo da fonte
Resumo:
Although current estimates depict steady growth in the Internet of Things (IoT), many works portray an as yet immature technology in terms of security. Attacks using low-performance devices, the application of new technologies and data analysis to infer private data, and the lack of development in some aspects of security offer a wide field for improvement. The advent of Semantic Technologies for IoT offers a new set of possibilities and challenges, like data markets, aggregators, processors and search engines, which raise the need for security. New regulations, such as GDPR, also call for novel approaches to data security, covering personal data. In this work, we present DS4IoT, a data-security ontology for IoT, which covers the representation of data-security concepts with the novel approach of doing so from the perspective of data, introducing new concepts such as regulations, certifications and provenance alongside classical concepts such as access control methods and authentication mechanisms. In the process we followed ontological methodologies, as well as semantic web best practices, resulting in an ontology that serves as a common vocabulary for data annotation and that not only distinguishes itself from previous works by its bottom-up approach, but also covers new, current and interesting concepts of data security, favouring implicit over explicit knowledge representation. Finally, this work is validated by a proof of concept that maps the DS4IoT ontology to the NGSI-LD data model, in the frame of the IoTCrawler EU project.
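As a rough illustration of what that validation step implies, the sketch below shows the general shape of an NGSI-LD entity carrying security metadata; the properties under the ds: prefix are hypothetical placeholders, not the actual DS4IoT vocabulary.

# Sketch of an NGSI-LD-style entity annotated with data-security metadata.
# The ds: properties are hypothetical stand-ins for DS4IoT terms.
import json

entity = {
    "@context": [
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
        {"ds": "http://example.org/ds4iot#"},          # placeholder namespace
    ],
    "id": "urn:ngsi-ld:TemperatureSensor:001",
    "type": "TemperatureSensor",
    "temperature": {"type": "Property", "value": 21.5},
    "ds:accessControlMethod": {"type": "Property", "value": "ABAC"},
    "ds:coveredByRegulation": {"type": "Property", "value": "GDPR"},
    "ds:provenance": {
        "type": "Property",
        "value": {"collectedBy": "urn:ngsi-ld:Device:abc",
                  "collectedAt": "2020-01-01T00:00:00Z"},
    },
}
print(json.dumps(entity, indent=2))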
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Gouripeddi, Ram, Katherine Sward, Mollie Cummins, Karen Eilbeck, Bernie LaSalle e Julio C. Facelli. "4549 Reproducible Informatics for Reproducible Translational Research". Journal of Clinical and Translational Science 4, s1 (junho de 2020): 66–67. http://dx.doi.org/10.1017/cts.2020.221.

Texto completo da fonte
Resumo:
OBJECTIVES/GOALS: Characterize formal informatics methods and approaches for enabling reproducible translational research, and educate translational researchers and informaticians in reproducible methods.

METHODS/STUDY POPULATION: We performed a scoping review [1] of selected informatics literature (e.g. [2,3]) from PubMed and Scopus. In addition, we reviewed literature and documentation of translational research informatics projects [4–21] at the University of Utah.

RESULTS/ANTICIPATED RESULTS: The example informatics projects we identified in our literature review covered a broad spectrum of translational research. These include research recruitment, research data requisition, study design and statistical analysis, biomedical vocabularies and metadata for data integration, data provenance and quality, and uncertainty. Elements impacting reproducibility of research include (1) Research Data: its semantics, quality, metadata and provenance; and (2) Research Processes: study conduct including activities and interventions undertaken, collections of biospecimens and data, and data integration. The informatics methods and approaches we identified as enablers of reproducibility include the use of templates, management of workflows and processes, scalable methods for managing data, metadata and semantics, appropriate software architectures and containerization, convergence methods, and uncertainty quantification. In addition, these methods need to be open and shareable, and should be quantifiable to measure their ability to achieve reproducibility.

DISCUSSION/SIGNIFICANCE OF IMPACT: The capacity to collect large volumes of data has ballooned in nearly every area of science, while the ability to capture research processes has not kept pace. The potential for problematic research practices and irreproducible results is a concern. Reproducibility is a core essential of translational research. Translational research informatics provides methods and means for enabling reproducibility and FAIRness [22] in translational research. In addition, translational informatics itself needs to be reproducible so that methods developed for one study or biomedical domain can be applied elsewhere. Such informatics research and development requires a mindset for meta-research [23]. The informatics methods we identified cover the spectrum of reproducibility (computational, empirical, and statistical) and different levels of reproducibility (reviewable, replicable, confirmable, auditable, and open or complete) [24–29]. While there are existing and ongoing efforts in developing informatics methods for translational research reproducibility in Utah and elsewhere, there is a need to further develop formal informatics methods and approaches: the Informatics of Research Reproducibility. In this presentation, we summarize the studies and literature we identified and discuss our key findings and gaps in informatics methods for research reproducibility. We conclude by discussing how we are covering these topics in a translational research informatics course.

References:
1. Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014 Dec;5(4):371–85.
2. McIntosh LD, Juehne A, Vitale CRH, Liu X, Alcoser R, Lukas JC, Evanoff B. Repeat: a framework to assess empirical reproducibility in biomedical research. BMC Med Res Methodol [Internet]. 2017 Sep 18 [cited 2018 Nov 30];17. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5604503/
3. Denaxas S, Direk K, Gonzalez-Izquierdo A, Pikoula M, Cakiroglu A, Moore J, Hemingway H, Smeeth L. Methods for enhancing the reproducibility of biomedical research findings using electronic health records. BioData Min. 2017;10:31.
4. Burnett N, Gouripeddi R, Wen J, Mo P, Madsen R, Butcher R, Sward K, Facelli JC. Harmonization of Sensor Metadata and Measurements to Support Exposomic Research. In: 2016 International Society of Exposure Science [Internet]. Research Triangle Park, NC, USA; 2017 [cited 2017 Jun 17]. Available from: http://www.intlexposurescience.org/ISES2017
5. Butcher R, Gouripeddi RK, Madsen R, Mo P, LaSalle B. CCTS Biomedical Informatics Core Research Data Service. In Salt Lake City; 2016.
6. Cummins M, Gouripeddi R, Facelli J. A low-cost, low-barrier clinical trials registry to support effective recruitment. In Salt Lake City, Utah, USA; 2016 [cited 2018 Nov 30]. Available from: //campusguides.lib.utah.edu/UtahRR16/abstracts
7. Gouripeddi R, Warner P, Madsen R, Mo P, Burnett N, Wen J, Lund A, Butcher R, Cummins MR, Facelli J, Sward K. An Infrastructure for Reproducible Exposomic Research. In: Research Reproducibility 2016 [Internet]. Salt Lake City, Utah, USA; 2016 [cited 2018 Nov 30]. Available from: //campusguides.lib.utah.edu/UtahRR16/abstracts
8. Eilbeck K, Lewis SE, Mungall CJ, Yandell M, Stein L, Durbin R, Ashburner M. The Sequence Ontology: a tool for the unification of genome annotations. Genome Biol. 2005;6:R44.
9. Gouripeddi R, Cummins M, Madsen R, LaSalle B, Redd AM, Presson AP, Ye X, Facelli JC, Green T, Harper S. Streamlining study design and statistical analysis for quality improvement and research reproducibility. J Clin Transl Sci. 2017 Sep;1(S1):18–9.
10. Gouripeddi R, Eilbeck K, Cummins M, Sward K, LaSalle B, Peterson K, Madsen R, Warner P, Dere W, Facelli JC. A Conceptual Architecture for Reproducible On-demand Data Integration for Complex Diseases. In: Research Reproducibility 2016 (UtahRR16) [Internet]. Salt Lake City, Utah, USA; 2016 [cited 2017 Apr 25]. Available from: https://zenodo.org/record/168067
11. Gouripeddi R, Lane E, Madsen R, Butcher R, LaSalle B, Sward K, Fritz J, Facelli JC, Cummins M, Shao J, Singleton R. Towards a scalable informatics platform for enhancing accrual into clinical research studies. J Clin Transl Sci. 2017 Sep;1(S1):20–20.
12. Gouripeddi R, Deka R, Reese T, Butcher R, Martin B, Talbert J, LaSalle B, Facelli J, Brixner D. Reproducibility of Electronic Health Record Research Data Requests. In Washington, DC, USA; 2018 [cited 2018 Apr 21]. Available from: https://zenodo.org/record/1226602#.WtvvyZch270
13. Gouripeddi R, Mo P, Madsen R, Warner P, Butcher R, Wen J, Shao J, Burnett N, Rajan NS, LaSalle B, Facelli JC. A Framework for Metadata Management and Automated Discovery for Heterogeneous Data Integration. In: 2016 BD2K All Hands Meeting [Internet]. Bethesda, MD; November 29-30 [cited 2017 Apr 25]. Available from: https://zenodo.org/record/167885
14. Groat D, Gouripeddi R, Lin YK, Dere W, Murray M, Madsen R, Gestaland P, Facelli J. Identification of High-Level Formalisms that Support Translational Research Reproducibility. In: Research Reproducibility 2018 [Internet]. Salt Lake City, Utah, USA; 2018 [cited 2018 Oct 30]. Available from: //campusguides.lib.utah.edu/UtahRR18/abstracts
15. Huser V, Kahn MG, Brown JS, Gouripeddi R. Methods for examining data quality in healthcare integrated data repositories. Pac Symp Biocomput. 2018;23:628–33.
16. Lund A, Gouripeddi R, Burnett N, Tran L-T, Mo P, Madsen R, Cummins M, Sward K, Facelli J. Enabling Reproducible Computational Modeling: The Utah PRISMS Ecosystem. In Salt Lake City, Utah, USA; 2018 [cited 2018 Oct 30]. Available from: //campusguides.lib.utah.edu/UtahRR18/abstracts
17. Pflieger LT, Mason CC, Facelli JC. Uncertainty quantification in breast cancer risk prediction models using self-reported family health history. J Clin Transl Sci. 2017 Feb;1(1):53–9.
18. Shao J, Gouripeddi R, Facelli J. Improving Clinical Trial Research Reproducibility using Reproducible Informatics Methods. In Salt Lake City, Utah, USA; 2018 [cited 2018 Oct 30]. Available from: //campusguides.lib.utah.edu/UtahRR18/abstracts
19. Shao J, Gouripeddi R, Facelli JC. Semantic characterization of clinical trial descriptions from ClinicalTrials.gov and patient notes from MIMIC-III. J Clin Transl Sci. 2017 Sep;1(S1):12–12.
20. Tiase V, Gouripeddi R, Burnett N, Butcher R, Mo P, Cummins M, Sward K. Advancing Study Metadata Models to Support an Exposomic Informatics Infrastructure. In Ottawa, Canada; 2018 [cited 2018 Oct 30]. Available from: http://www.eiseverywhere.com/ehome/294696/638649/?&t=8c531cecd4bb0a5efc6a0045f5bec0c3
21. Wen J, Gouripeddi R, Facelli JC. Metadata Discovery of Heterogeneous Biomedical Datasets Using Token-Based Features. In: IT Convergence and Security 2017 [Internet]. Springer, Singapore; 2017 [cited 2017 Sep 6]. p. 60–7. (Lecture Notes in Electrical Engineering). Available from: https://link.springer.com/chapter/10.1007/978-981-10-6451-7_8
22. Wilkinson MD, Dumontier M, Aalbersberg IjJ, Appleton G, Axton M, Baak A, Blomberg N, Boiten J-W, da Silva Santos LB, Bourne PE, Bouwman J, Brookes AJ, Clark T, Crosas M, Dillo I, Dumon O, Edmunds S, Evelo CT, Finkers R, Gonzalez-Beltran A, Gray AJG, Groth P, Goble C, Grethe JS, Heringa J, ’t Hoen PAC, Hooft R, Kuhn T, Kok R, Kok J, Lusher SJ, Martone ME, Mons A, Packer AL, Persson B, Rocca-Serra P, Roos M, van Schaik R, Sansone S-A, Schultes E, Sengstag T, Slater T, Strawn G, Swertz MA, Thompson M, van der Lei J, van Mulligen E, Velterop J, Waagmeester A, Wittenburg P, Wolstencroft K, Zhao J, Mons B. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016 Mar 15;3:160018.
23. Ioannidis JPA. Meta-research: Why research on research matters. PLOS Biol. 2018 Mar 13;16(3):e2005468.
24. Stodden V, Borwein J, Bailey DH. Setting the default to reproducible. Comput Sci Res SIAM News. 2013;46(5):4–6.
25. Stodden V, McNutt M, Bailey DH, Deelman E, Gil Y, Hanson B, Heroux MA, Ioannidis JPA, Taufer M. Enhancing reproducibility for computational methods. Science. 2016 Dec 9;354(6317):1240–1.
26. Stodden V, McNutt M, Bailey DH, Deelman E, Gil Y, Hanson B, Heroux MA, Ioannidis JPA, Taufer M. Enhancing reproducibility for computational methods. Science. 2016 Dec 9;354(6317):1240–1.
27. Stodden V. Reproducible Research for Scientific Computing: Tools and Strategies for Changing the Culture. Comput Sci Eng. 2012 Jul 1;14(4):13–7.
28. Baker M. Muddled meanings hamper efforts to fix reproducibility crisis. Nat News. Available from: http://www.nature.com/news/muddled-meanings-hamper-efforts-to-fix-reproducibility-crisis-1.20076
29. Barba LA. Terminologies for Reproducible Research. ArXiv180203311 Cs. 2018 Feb 9. Available from: http://arxiv.org/abs/1802.03311
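Among the enablers listed above, data provenance is the easiest to sketch concretely: record, for each processing step, the inputs, outputs, and command that linked them. The example below is a minimal illustration with hypothetical file names, not any of the cited Utah infrastructure.

# Minimal sketch of capturing provenance for one processing step: hash the
# input and output files and record the command that linked them. File names
# are hypothetical; a real pipeline would emit one record per step.
import datetime
import hashlib
import json

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

record = {
    "activity": "normalize_lab_values",
    "command": "python normalize.py --input raw_labs.csv --output labs_clean.csv",
    "started_at": datetime.datetime.utcnow().isoformat() + "Z",
    "inputs":  [{"path": "raw_labs.csv",   "sha256": sha256("raw_labs.csv")}],
    "outputs": [{"path": "labs_clean.csv", "sha256": sha256("labs_clean.csv")}],
}
with open("provenance.json", "a") as out:
    out.write(json.dumps(record) + "\n")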
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Chung, Arlene, Haiwei Chen, Grace Shin, Ketan Mane e Hye-Chung Kum. "2465". Journal of Clinical and Translational Science 1, S1 (setembro de 2017): 18. http://dx.doi.org/10.1017/cts.2017.77.

Texto completo da fonte
Resumo:
OBJECTIVES/SPECIFIC AIMS: The promise and potential of connected personal health records (PHRs) has not come to fruition. This may be, in part, due to the lack of user-centered design and of a patient-centric approach to curating personal health data for use by patients. Co-design with end-users could help mitigate these issues by ensuring the software meets user’s needs, and also engages patients in informatics research. Our team partnered with patients with multiple chronic conditions to co-design a patient-centric PHR. This abstract will describe our experience with the co-design process, highlight functionalities desired by patients, and showcase the final prototype. METHODS/STUDY POPULATION: We conducted 3 design sessions (90 min per session) with patients as co-designers and employed an iterative process for software development. Patients were recruited from Chapel Hill and surrounding areas. The initial design session laid the foundation for future sessions, and began with brainstorming about what patients thought their ideal version of an engaging connected PHR would look like in terms of features and functionalities. After each software iteration, our entire design team, including our patient co-designers, was shown the prototype during a subsequent design session. Once the final prototype was developed, usability testing was conducted with patient participants. Our team then conducted a final design session to debrief about the final prototype. RESULTS/ANTICIPATED RESULTS: We started with an initial group of 12 patients (6 males) who all had diabetes and an additional comorbidity such as hypertension and hyperlipidemia. Age of participants ranged from 30 to 77 years with an average age of 56. The majority of participants were Caucasian with 1 Asian and 2 African Americans. Hemoglobin A1c values ranged from 6.0% to 9.2% with approximately half having A1c values less than the goal of 7.0%. Half the patients were aware of PHRs, majority had smartphones, and all participants had access to the Internet and used email. Two of the patients were retired engineers who had prior experience with software design. The other sessions had between 7 and 8 participants at each session, and 7 patients completed the 90-minute usability testing session. There was a core group of 7 patients who were engaged in the design and testing sessions throughout the entire 9-month study. Key features of the PHR that emerged from design sessions included the following: (1) allow for annotation of data by patients (particularly important for lab values like glucose or for physical activity); (2) calendars, to do list, and reminder functions should be linked so that an entry in one of these allows for auto-population of this data within the other sections; (3) notifications whenever new data from the electronic health record or other sources are pushed to the PHR account; (4) allow for drag and drop of photos of pills/medications taken via smartphone or from other sources so that medication list has photo of actual pills or pill bottle; (5) allow for patients to customize the order of sections in the PHR dashboard so that the sections most important to the individual patient can be displayed more prominently; (6) allow for notifications from pharmacies to be pushed to the PHR (eg, confirmation of receipt of prescription requests or alert that prescription is ready to pick up); and (7) graphical display of trends over time (patients would like to select the measures and time frames to plot for display). 
Patients cited the importance of data provenance so that patient-entered data versus provider or electronic health record data could be easily differentiated. Patients also highlighted the importance of having this PHR be a “one-stop shop for all their health data” and have meaningful data dashboards for the different types of information needed to comprehensively manage their health. Patients wished for a single PHR that could easily bring together data from multiple patient portal accounts to avoid having to manage multiple accounts and passwords. They felt that heat map displays such as those used on popular fitness tracking websites were not intuitive and that the color-coding made interpretation challenging. Participants noted that engagement in the design process made them feel that they contributed towards developing software that could not only positively impact them individually but others as well. Every patient indicated the desire to participate in future design projects. Of the 19 tasks evaluated during usability testing, only 5 tasks could not be completed (e.g., adding exercise to the calendar, opening the heat map, etc.). Patients felt that the overall PHR design was clean and aesthetically pleasing. Most patients felt that the site was “pretty easy to use” (6 out of 7). The majority of participants would like to use this PHR in the future (5) and would recommend this PHR to their friends/family to use (6). DISCUSSION/SIGNIFICANCE OF IMPACT: Involving patients directly in the design process for creating a patient-centric connected PHR was essential to sustaining engagement throughout the software life cycle and to informing the design of features and functionalities desired by patients with chronic conditions.
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Cowan, Ann E., Pedro Mendes e Michael L. Blinov. "ModelBricks—modules for reproducible modeling improving model annotation and provenance". npj Systems Biology and Applications 5, n.º 1 (8 de outubro de 2019). http://dx.doi.org/10.1038/s41540-019-0114-3.

Texto completo da fonte
Resumo:
Abstract Most computational models in biology are built and intended for “single-use”; the lack of appropriate annotation creates models where the assumptions are unknown, and model elements are not uniquely identified. Simply recreating a simulation result from a publication can be daunting; expanding models to new and more complex situations is a herculean task. As a result, new models are almost always created anew, repeating literature searches for kinetic parameters, initial conditions and modeling specifics. It is akin to building a brick house starting with a pile of clay. Here we discuss a concept for building annotated, reusable models, by starting with small well-annotated modules we call ModelBricks. Curated ModelBricks, accessible through an open database, could be used to construct new models that will inherit ModelBricks annotations and thus be easier to understand and reuse. Key features of ModelBricks include reliance on a commonly used standard language (SBML), rule-based specification describing species as a collection of uniquely identifiable molecules, association with model specific numerical parameters, and more common annotations. Physical bricks can vary substantively; likewise, to be useful the structure of ModelBricks must be highly flexible—it should encapsulate mechanisms from single reactions to multiple reactions in a complex process. Ultimately, a modeler would be able to construct large models by using multiple ModelBricks, preserving annotations and provenance of model elements, resulting in a highly annotated model. We envision the library of ModelBricks to rapidly grow from community contributions. Persistent citable references will incentivize model creators to contribute new ModelBricks.
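The element-level annotation that ModelBricks relies on can be sketched with the python-libsbml package: a single species carrying a MIRIAM-style identifiers.org annotation. This is only an illustration of the annotation mechanism, not an actual ModelBrick from the proposed database.

# Sketch of a tiny annotated SBML fragment using python-libsbml: one species
# with an identifiers.org annotation, illustrating the kind of element-level
# annotation ModelBricks would preserve when modules are combined.
import libsbml

doc = libsbml.SBMLDocument(3, 1)
model = doc.createModel()
model.setId("annotated_module_sketch")

comp = model.createCompartment()
comp.setId("cytosol")
comp.setConstant(True)
comp.setSize(1.0)

ca = model.createSpecies()
ca.setId("Ca")
ca.setCompartment("cytosol")
ca.setInitialConcentration(0.1)
ca.setConstant(False)
ca.setBoundaryCondition(False)
ca.setHasOnlySubstanceUnits(False)

# Element-level annotation: state unambiguously what "Ca" refers to.
ca.setMetaId("meta_Ca")
cv = libsbml.CVTerm(libsbml.BIOLOGICAL_QUALIFIER)
cv.setBiologicalQualifierType(libsbml.BQB_IS)
cv.addResource("http://identifiers.org/chebi/CHEBI:29108")   # calcium(2+)
ca.addCVTerm(cv)

print(libsbml.writeSBMLToString(doc))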
Estilos ABNT, Harvard, Vancouver, APA, etc.
