Journal articles on the topic 'IE. Data and metadata structures'

Consult the top 50 journal articles for your research on the topic 'IE. Data and metadata structures.'

1

Fong, Joseph, Qing Li, and Shi-Ming Huang. "Universal Data Warehousing Based on a Meta-Data Modeling Approach." International Journal of Cooperative Information Systems 12, no. 03 (September 2003): 325–63. http://dx.doi.org/10.1142/s0218843003000772.

Abstract:
A data warehouse contains a vast amount of data to support complex queries from various Decision Support Systems (DSSs). It needs to store materialized views of data, which must be available consistently and instantaneously. Using a frame metadata model, this paper presents an architecture for universal data warehousing with different data models. The frame metadata model represents the metadata of a data warehouse, structures an application domain into classes, and integrates the schemas of heterogeneous databases by capturing their semantics. A star schema is derived from user requirements based on the integrated schema and catalogued in the metadata, which stores the schemas of the relational database (RDB) and the object-oriented database (OODB). Data materialization between the RDB and the OODB is achieved by unloading the source database into a sequential file and reloading it into the target database; through this process an object-relational view can be defined so that users obtain the same warehouse view in different data models simultaneously. We describe our procedures for building the relational view of the star schema by multidimensional SQL queries, and the object-oriented view of the data warehouse by Online Analytical Processing (OLAP) through method calls derived from the integrated schema. To validate our work, an application prototype system has been developed in a product-sales data warehousing domain based on this approach.
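
To make the star-schema idea concrete, here is a minimal sketch in Python/SQLite of a toy star schema and a multidimensional roll-up query; the table and column names are invented for illustration and do not come from the paper's frame-metadata system.

```python
import sqlite3

# Toy star schema: one fact table referencing two dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_sales  (product_id INTEGER REFERENCES dim_product,
                          date_id    INTEGER REFERENCES dim_date,
                          amount     REAL);
""")
con.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
con.executemany("INSERT INTO dim_date VALUES (?,?,?)", [(1, 2003, 9), (2, 2003, 10)])
con.executemany("INSERT INTO fact_sales VALUES (?,?,?)",
                [(1, 1, 10.0), (2, 1, 5.0), (1, 2, 7.5)])

# A typical multidimensional (roll-up) query against the star schema.
for row in con.execute("""
        SELECT p.category, d.year, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_product p USING (product_id)
        JOIN dim_date d USING (date_id)
        GROUP BY p.category, d.year"""):
    print(row)
```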
2

Azram, Nur Adila, et al. "Laboratory Instruments’ Produced Scientific Data Standardization through the Use of Metadata." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 2146–51. http://dx.doi.org/10.17762/turcomat.v12i3.1157.

Abstract:
The volume of scientific data produced by various laboratory instruments is increasing. Because different laboratory instruments hold data in different structures and formats, this heterogeneity becomes a concern in data management and analysis. This paper offers a metadata structure to standardize the scientific data produced by laboratory instruments so that they attain a standard structure and format. The paper explains the methodology and the use of the proposed metadata structure before summarizing the implementation and the related result analysis. The proposed metadata structure extraction shows promising results in the evaluation and validation conducted.
3

Qin, Jian, Jeff Hemsley, and Sarah E. Bratt. "The structural shift and collaboration capacity in GenBank Networks: A longitudinal study." Quantitative Science Studies 3, no. 1 (2022): 174–93. http://dx.doi.org/10.1162/qss_a_00181.

Abstract:
Metadata in scientific data repositories such as GenBank contain links between data submissions and related publications. As a new data source for studying collaboration networks, metadata in data repositories compensate for the limitations of publication-based research on collaboration networks. This paper reports the findings from a GenBank metadata analytics project. We used network science methods to uncover the structures and dynamics of GenBank collaboration networks from 1992–2018. The longitudinality and large scale of this data collection allowed us to unravel the evolution history of collaboration networks and identify the trend of flattening network structures over time and optimal assortative mixing range for enhancing collaboration capacity. By incorporating metadata from the data production stage with the publication stage, we uncovered new characteristics of collaboration networks as well as developed new metrics for assessing the effectiveness of enablers of collaboration—scientific and technical human capital, cyberinfrastructure, and science policy.
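
As an illustration of the kind of analysis described (not the authors' pipeline), the sketch below builds a toy co-submission network from GenBank-style metadata with networkx and computes degree assortativity, one measure of the mixing patterns discussed; the submission records are invented.

```python
import itertools
import networkx as nx

# Invented GenBank-style metadata: each submission lists its contributing authors.
submissions = [
    {"accession": "AB000001", "authors": ["Smith, J.", "Lee, K.", "Garcia, M."]},
    {"accession": "AB000002", "authors": ["Lee, K.", "Chen, W."]},
    {"accession": "AB000003", "authors": ["Smith, J.", "Chen, W.", "Lee, K."]},
]

# Co-submission network: authors are nodes, shared submissions add edge weight.
G = nx.Graph()
for record in submissions:
    for a, b in itertools.combinations(sorted(record["authors"]), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree assortativity quantifies mixing patterns (positive: hubs link to hubs).
print(G.number_of_nodes(), G.number_of_edges())
print(nx.degree_assortativity_coefficient(G))
```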
4

Vanags, Mikus, and Rudite Cevere. "Type Safe Metadata Combining." Computer and Information Science 10, no. 2 (April 30, 2017): 97. http://dx.doi.org/10.5539/cis.v10n2p97.

Abstract:
Type safety is an important property of any type system. Modern programming languages support different mechanisms for working in a type-safe manner, e.g., properties, methods, events, attributes (annotations) and other structures. Some programming languages allow access to metadata: type information, type member information and information about applied attributes. But none of the existing mainstream programming languages that support reflection provides a fully type-safe metadata combining mechanism built into the language. Combining metadata means combining class member metadata with data, type metadata and constraints. Existing solutions provide no, or only limited, type-safe metadata combining; they are complex and processed at runtime, which by definition is not built-in type-safe metadata combining. The problem can be solved by introducing syntax and methods for type-safe metadata combining so that metadata can be processed at compile time in a fully type-safe way. Common metadata combining use cases are data abstraction layer creation and database querying.
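
Python's typing.Annotated gives a rough flavour of attaching metadata to class-member types so that tooling can read type and metadata together; this is only a loose analogy under Python's semantics, not the compile-time language mechanism the authors propose, and the constraint dictionaries are invented.

```python
from dataclasses import dataclass
from typing import Annotated, get_type_hints

@dataclass
class Customer:
    # Metadata (here: invented constraint descriptors) is attached to the member's
    # type, so type and metadata travel together and can be read by tooling.
    customer_id: Annotated[int, {"primary_key": True}]
    name: Annotated[str, {"max_length": 100}]

# The combined type/metadata is available without instantiating anything,
# e.g. for generating a data-abstraction layer or database queries.
for field, hint in get_type_hints(Customer, include_extras=True).items():
    print(field, hint.__metadata__)
```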
5

Foessel, Siegfried, and Heiko Sparenberg. "EN 17650 – The new standard for digital preservation of cinematographic works." Archiving Conference 2021, no. 1 (June 18, 2021): 43–46. http://dx.doi.org/10.2352/issn.2168-3204.2021.1.0.10.

Abstract:
EN 17650 is a proposed new European Standard for the digital preservation of cinematographic works. It allows content to be organized in a systematic way as a so-called Cinema Preservation Package (CPP). The standard defines methods to store content in physical and logical structures and describes relationships and metadata for its components. The CPP uses existing XML schemas, in particular METS, EBUCore and PREMIS, to store structural, descriptive, technical and provenance metadata. METS XML files with their core metadata contain the physical and logical structures of the content, hash values and UUIDs to ensure data integrity, and links to external metadata files to enrich the content with additional information. The content itself is stored based on existing public and industry standards, avoiding unnecessary conversion steps. The paper explains the concepts behind the new standard and specifies the usage and combination of existing schemas with newly introduced metadata parameters.
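
The sketch below shows, with Python's standard XML tooling, the flavour of METS-style structural metadata (file entries with UUIDs and checksums) that the CPP builds on; element and attribute usage is heavily simplified relative to the real METS schema and EN 17650.

```python
import hashlib
import uuid
import xml.etree.ElementTree as ET

METS = "http://www.loc.gov/METS/"
ET.register_namespace("mets", METS)

mets = ET.Element(f"{{{METS}}}mets")
file_sec = ET.SubElement(mets, f"{{{METS}}}fileSec")
file_grp = ET.SubElement(file_sec, f"{{{METS}}}fileGrp", USE="image sequence")

payload = b"...scanned film frame..."                 # stand-in for real content
file_el = ET.SubElement(
    file_grp, f"{{{METS}}}file",
    ID=f"uuid-{uuid.uuid4()}",                        # UUID for stable identification
    CHECKSUM=hashlib.sha256(payload).hexdigest(),     # hash value for data integrity
    CHECKSUMTYPE="SHA-256",
)
ET.SubElement(file_el, f"{{{METS}}}FLocat", href="frames/frame_000001.dpx")

print(ET.tostring(mets, encoding="unicode"))
```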
6

Canning, Erin, Susan Brown, Sarah Roger, and Kimberley Martin. "The Power to Structure." KULA: Knowledge Creation, Dissemination, and Preservation Studies 6, no. 3 (July 27, 2022): 1–15. http://dx.doi.org/10.18357/kula.169.

Abstract:
Information systems are developed by people with intent—they are designed to help creators and users tell specific stories with data. Within information systems, the often invisible structures of metadata profoundly impact the meaning that can be derived from that data. The Linked Infrastructure for Networked Cultural Scholarship project (LINCS) helps humanities researchers tell stories by using linked open data to convert humanities datasets into organized, interconnected, machine-processable resources. LINCS provides context for online cultural materials, interlinks them, and grounds them in sources to improve web resources for research. This article describes how the LINCS team is using the shared standards of linked data and especially ontologies—typically unseen yet powerful—to bring meaning mindfully to metadata through structure. The LINCS metadata—comprised of linked open data about cultural artifacts, people, and processes—and the structures that support it must represent multiple, diverse ways of knowing. They need to enable various means of incorporating contextual data and of telling stories with nuance and context, situated and supported by data structures that reflect and make space for specificities and complexities. As it addresses specificity in each research dataset, LINCS is simultaneously working to balance interoperability, as achieved through a level of generalization, with contextual and domain-specific requirements. The LINCS team’s approach to ontology adoption and use centers on intersectionality, multiplicity, and difference. The question of what meaning the structures being used will bring to the data is as important as what meaning is introduced as a result of linking data together, and the project has built this premise into its decision-making and implementation processes. To convey an understanding of categories and classification as contextually embedded—culturally produced, intersecting, and discursive—the LINCS team frames them not as fixed but as grounds for investigation and starting points for understanding. Metadata structures are as important as vocabularies for producing such meaning.
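
A minimal rdflib sketch of the general idea of expressing a cultural-heritage statement as linked open data follows; the namespace, classes and predicate are invented placeholders, not the ontologies LINCS actually uses.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/lincs-demo/")      # invented namespace

g = Graph()
author = URIRef(EX["person/jane_doe"])
work = URIRef(EX["work/example_novel"])

# The ontology chosen for the predicate, not just the data values,
# carries the meaning: "created" frames the relationship in a particular way.
g.add((author, RDF.type, EX.Person))
g.add((work, RDF.type, EX.LiteraryWork))
g.add((author, EX.created, work))
g.add((work, RDFS.label, Literal("An Example Novel")))

print(g.serialize(format="turtle"))
```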
7

López-Tello, Eva, and Salvador Mandujano. "PAQUETE camtrapR PARA GESTIONAR DATOS DE FOTO-TRAMPEO: APLICACIÓN EN LA RESERVA DE LA BIOSFERA TEHUACÁN-CUICATLÁN." Revista Mexicana de Mastozoología (Nueva Epoca) 1, no. 2 (December 14, 2017): 13. http://dx.doi.org/10.22201/ie.20074484e.2017.1.2.245.

Abstract:
The use of camera traps is a method that has become popular in the last decade because technological development has made this equipment more affordable. One of the advantages of the method is that a great deal of information about different species can be obtained in a short time. However, few programs facilitate the organization and extraction of information from large numbers of images. The R package camtrapR, recently made freely available, extracts the metadata from images, creates tables of independent records, produces presence/absence records for occupancy analysis, and generates spatial graphics. To demonstrate the functionality of the package, this article presents six examples of its main functions. A set of images obtained with 10 camera traps at a locality in the Tehuacán-Cuicatlán Biosphere Reserve was used. camtrapR was applied to the following tasks: organization and management of the photos, classification by species, individual identification, extraction of metadata by species and/or individuals, exploration and visualization of data, and export of data for occupancy analysis. The R code used in this work is freely available online. According to the results, camtrapR is an efficient package that facilitates and reduces the time required to extract image metadata; it also makes it possible to obtain independent records without errors of omission or duplication. In addition, it allows the creation of *.csv files that can then be analyzed with other R packages or programs for other purposes. Key words: capture histories, database, metadata, organization, R.
8

Hardesty, Juliet L. "Transitioning from XML to RDF: Considerations for an effective move towards Linked Data and the Semantic Web." Information Technology and Libraries 35, no. 1 (April 1, 2016): 51. http://dx.doi.org/10.6017/ital.v35i1.9182.

Abstract:
Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML) and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF). Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
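
The shift in perspective can be sketched with a toy record: the XML encodes nesting, while the RDF version asserts relationships between identified resources. The record, the minted IRI and the use of Dublin Core terms are illustrative choices, not a mapping prescribed by the article.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

# An XML record encodes structure (element nesting and order)...
record = ET.fromstring("""
<record id="item42">
  <title>Linked Data and Libraries</title>
  <creator>Doe, Jane</creator>
</record>""")

# ...while RDF asserts relationships about an identified resource.
g = Graph()
item = URIRef("http://example.org/item/42")           # minted IRI (illustrative)
g.add((item, DCTERMS.title, Literal(record.findtext("title"))))
g.add((item, DCTERMS.creator, Literal(record.findtext("creator"))))

print(g.serialize(format="turtle"))
```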
9

Tilton, Lauren, Emeline Alexander, Luke Malcynsky, and Hanglin Zhou. "The Role of Metadata in American Studies." Polish Journal for American Studies, Issue 14 (Autumn 2020) (December 1, 2020): 149–63. http://dx.doi.org/10.7311/pjas.14/2/2020.02.

Abstract:
This article argues that metadata can animate rather than stall American Studies inquiry. Data about data can enable and expand the kinds of context, evidence, and interdisciplinary methodological approaches that American Studies can engage with while taking back data from the very power structures that the field aims to reveal, critique, and abolish. As a result, metadata can be a site where the field realizes its intellectual and political commitments. The article draws on a range of digital humanities projects, with a focus on projects created by the authors, that demonstrate the possibilities (and challenges) of metadata for American Studies.
10

Russell, Pamela H., and Debashis Ghosh. "Radtools: R utilities for smooth navigation of medical image data." F1000Research 7 (December 24, 2018): 1976. http://dx.doi.org/10.12688/f1000research.17139.1.

Abstract:
The radiology community has adopted several widely used standards for medical image files, including the popular DICOM (Digital Imaging and Communication in Medicine) and NIfTI (Neuroimaging Informatics Technology Initiative) standards. These file formats include image intensities as well as potentially extensive metadata. The NIfTI standard specifies a particular set of header fields describing the image and minimal information about the scan. DICOM headers can include any of >4,000 available metadata attributes spanning a variety of topics. NIfTI files contain all slices for an image series, while DICOM files capture single slices and image series are typically organized into a directory. Each DICOM file contains metadata for the image series as well as the individual image slice. The programming environment R is popular for data analysis due to its free and open code, active ecosystem of tools and users, and excellent system of contributed packages. Currently, many published radiological image analyses are performed with proprietary software or custom unpublished scripts. However, R is increasing in popularity in this area due to several packages for processing and analysis of image files. While these R packages handle image import and processing, no existing package makes image metadata conveniently accessible. Extracting image metadata, combining across slices, and converting to useful formats can be prohibitively cumbersome, especially for DICOM files. We present radtools, an R package for smooth navigation of medical image data. Radtools makes the problem of extracting image metadata trivially simple, providing simple functions to explore and return information in familiar R data structures. Radtools also facilitates extraction of image data and viewing of image slices. The package is freely available under the MIT license at https://github.com/pamelarussell/radtools and is easily installable from the Comprehensive R Archive Network (https://cran.r-project.org/package=radtools).
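
radtools itself is an R package; for readers working in Python, the pydicom and nibabel libraries expose the same DICOM and NIfTI metadata. A minimal sketch follows, using a sample file bundled with pydicom and a placeholder NIfTI path.

```python
import pydicom
from pydicom.data import get_testdata_file
import nibabel as nib

# DICOM: every slice file carries both series-level and slice-level metadata.
ds = pydicom.dcmread(get_testdata_file("CT_small.dcm"))   # bundled sample file
print(ds.Modality, ds.SeriesDescription if "SeriesDescription" in ds else "n/a")
for elem in list(ds)[:10]:                                # first few header attributes
    print(elem.keyword, elem.value)

# NIfTI: one file holds the whole series plus a fixed set of header fields.
img = nib.load("scan.nii.gz")                             # placeholder path
print(img.shape)
print(img.header)
```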
11

Fugazza, Cristiano, Monica Pepe, Alessandro Oggioni, Paolo Tagliolato, and Paola Carrara. "Raising Semantics-Awareness in Geospatial Metadata Management." ISPRS International Journal of Geo-Information 7, no. 9 (September 7, 2018): 370. http://dx.doi.org/10.3390/ijgi7090370.

Abstract:
Geospatial metadata are often encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Semantic Web has so far not induced a consistent, interlinked baseline in the geospatial domain. Datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans and duplicated in different systems, seldom consistently. We address these issues by relating metadata items to resources that represent keywords, institutes, researchers, toponyms, and virtually any RDF data structure made available over the Web via SPARQL endpoints. Essentially, our methodology fosters delegated metadata management, as the entities referred to in metadata are independent, decentralized data structures with their own life cycle. Our example implementation of delegated metadata envisages: (i) editing via customizable web-based forms (including injection of semantic information); (ii) encoding of records in any XML metadata schema; and (iii) translation into RDF. Among the semantics-aware features that this practice enables, we present a worked-out example focusing on automatic update of metadata descriptions. Our approach, demonstrated in the context of INSPIRE metadata (the ISO 19115/19119 profile eliciting integration of European geospatial resources), is also applicable to a broad range of metadata standards, including non-geospatial ones.
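
A small sketch of the kind of delegated lookup described, resolving an entity referenced in a metadata record against an external SPARQL endpoint; Wikidata and the example toponym are used purely as illustrations of a public endpoint, not as part of the authors' implementation.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Resolve a toponym referenced in a metadata record against an external
# SPARQL endpoint instead of duplicating the entity locally.
endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label "Lake Maggiore"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
} LIMIT 1
""")
endpoint.setReturnFormat(JSON)
for binding in endpoint.query().convert()["results"]["bindings"]:
    print(binding["item"]["value"], binding["itemLabel"]["value"])
```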
12

Li, Yaping. "Glowworm Swarm Optimization Algorithm- and K-Prototypes Algorithm-Based Metadata Tree Clustering." Mathematical Problems in Engineering 2021 (February 9, 2021): 1–10. http://dx.doi.org/10.1155/2021/8690418.

Abstract:
The main objective of this paper is to present a new clustering algorithm for metadata trees based on K-prototypes algorithm, GSO (glowworm swarm optimization) algorithm, and maximal frequent path (MFP). Metadata tree clustering includes computing the feature vector of the metadata tree and the feature vector clustering. Therefore, traditional data clustering methods are not suitable directly for metadata trees. As the main method to calculate eigenvectors, the MFP method also faces the difficulties of high computational complexity and loss of key information. Generally, the K-prototypes algorithm is suitable for clustering of mixed-attribute data such as feature vectors, but the K-prototypes algorithm is sensitive to the initial clustering center. Compared with other swarm intelligence algorithms, the GSO algorithm has more efficient global search advantages, which are suitable for solving multimodal problems and also useful to optimize the K-prototypes algorithm. To address the clustering of metadata tree structures in terms of clustering accuracy and high data dimension, this paper combines the GSO algorithm, K-prototypes algorithm, and MFP together to study and design a new metadata structure clustering method. Firstly, MFP is used to describe metadata tree features, and the key parameter of categorical data is introduced into the feature vector of MFP to improve the accuracy of the feature vector to describe the metadata tree; secondly, GSO is combined with K-prototypes to design GSOKP for clustering the feature vector that contains numeric data and categorical data so as to improve the clustering accuracy; finally, tests are conducted with a set of metadata trees. The experimental results show that the designed metadata tree clustering method GSOKP-FP has certain advantages in respect to clustering accuracy and time complexity.
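
For readers unfamiliar with K-prototypes, the sketch below clusters toy mixed numeric/categorical feature vectors with the kmodes package; it is only the baseline algorithm on invented data, not the GSO-optimized variant (GSOKP-FP) designed in the paper, and it assumes the kmodes package's KPrototypes API.

```python
import numpy as np
from kmodes.kprototypes import KPrototypes

# Invented feature vectors for metadata trees: two numeric features
# (tree depth, node count) and one categorical feature (root element type).
X = np.array([
    [3, 12, "dataset"],
    [4, 15, "dataset"],
    [7, 40, "collection"],
    [8, 44, "collection"],
], dtype=object)

kproto = KPrototypes(n_clusters=2, init="Cao", n_init=1)
labels = kproto.fit_predict(X, categorical=[2])       # column 2 is categorical
print(labels)
```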
13

Su, Shian, Vincent J. Carey, Lori Shepherd, Matthew Ritchie, Martin T. Morgan, and Sean Davis. "BiocPkgTools: Toolkit for mining the Bioconductor package ecosystem." F1000Research 8 (May 29, 2019): 752. http://dx.doi.org/10.12688/f1000research.19410.1.

Abstract:
Motivation: The Bioconductor project, a large collection of open source software for the comprehension of large-scale biological data, continues to grow with new packages added each week, motivating the development of software tools focused on exposing package metadata to developers and users. The resulting BiocPkgTools package facilitates access to extensive metadata in computable form covering the Bioconductor package ecosystem, facilitating downstream applications such as custom reporting, data and text mining of Bioconductor package text descriptions, graph analytics over package dependencies, and custom search approaches. Results: The BiocPkgTools package has been incorporated into the Bioconductor project, installs using standard procedures, and runs on any system supporting R. It provides functions to load detailed package metadata, longitudinal package download statistics, package dependencies, and Bioconductor build reports, all in "tidy data" form. BiocPkgTools can convert from tidy data structures to graph structures, enabling graph-based analytics and visualization. An end-user-friendly graphical package explorer aids in task-centric package discovery. Full documentation and example use cases are included. Availability: The BiocPkgTools software and complete documentation are available from Bioconductor (https://bioconductor.org/packages/BiocPkgTools).
14

Bogdanović, Miloš, Milena Frtunić Gligorijević, Nataša Veljković, and Leonid Stoimenov. "GENERATING KNOWLEDGE STRUCTURES FROM OPEN DATASETS' TAGS - AN APPROACH BASED ON FORMAL CONCEPT ANALYSIS." Facta Universitatis, Series: Automatic Control and Robotics 20, no. 1 (April 14, 2021): 021. http://dx.doi.org/10.22190/fuacr201225002b.

Abstract:
Under the influence of data transparency initiatives, a variety of institutions have published a significant number of datasets. In most cases, data publishers take advantage of open data portals (ODPs) for making their datasets publicly available. To improve the datasets' discoverability, ODPs group open datasets into categories using various criteria like publishers, institutions, formats, and descriptions. For these purposes, portals take advantage of metadata accompanying datasets. However, a part of the metadata may be missing, or may be incomplete or redundant. Each of these situations makes it difficult for users to find appropriate datasets and obtain the desired information. As the number of available datasets grows, this problem becomes easy to notice. This paper is focused on the first step towards reducing this problem by implementing knowledge structures to be used in situations where a part of a dataset's metadata is missing. In particular, we focus on developing knowledge structures capable of suggesting the best match for the category to which an uncategorized dataset should belong. Our approach relies on dataset descriptions provided by users within dataset tags. We take advantage of formal concept analysis to reveal the shared conceptualization originating from the tags' usage by developing a concept lattice for each category of open datasets. Since tags represent free-text metadata entered by users, in this paper we present a method of optimizing their usage by means of semantic similarity measures based on natural language processing mechanisms. Finally, we demonstrate the advantage of our proposal by comparing concept lattices generated using formal concept analysis before and after the optimization process. The main experimental results show that our approach is capable of reducing the number of nodes within a lattice by more than 40%.
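
A tiny hand-rolled formal concept analysis over an invented dataset-by-tag incidence table illustrates the lattice-building step; the paper's actual pipeline additionally normalizes tags with semantic similarity measures before building the lattices.

```python
from itertools import chain, combinations

# Toy incidence relation: which descriptive tags annotate which open datasets.
incidence = {
    "budget-2019": {"finance", "budget", "csv"},
    "budget-2020": {"finance", "budget"},
    "air-quality": {"environment", "sensors", "csv"},
}

def common_tags(datasets):
    """Tags shared by all given datasets (all tags for the empty set)."""
    if not datasets:
        return set(chain.from_iterable(incidence.values()))
    return set.intersection(*(incidence[d] for d in datasets))

def datasets_with(tags):
    """Datasets annotated with every tag in the given set."""
    return {d for d, t in incidence.items() if tags <= t}

# A formal concept is a pair (extent, intent) closed under the two derivations.
concepts = set()
for r in range(len(incidence) + 1):
    for subset in combinations(incidence, r):
        intent = common_tags(subset)
        extent = datasets_with(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(extent), "<->", sorted(intent))
```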
15

Bashina, O. E., N. A. Komkova, L. V. Matraeva, and V. E. Kosolapova. "The Future of International Statistical Data Sharing and New Issues of Interaction." Voprosy statistiki 26, no. 7 (August 1, 2019): 55–66. http://dx.doi.org/10.34023/2313-6383-2019-26-7-55-66.

Abstract:
The article deals with challenges and prospects of implementation of the Statistical Data and Metadata eXchange (SDMX) standard and its use in the international sharing of statistical data and metadata. The authors identified potential areas where this standard can be used and described a mechanism for data and metadata sharing according to the SDMX standard. Major issues, classified into three groups - general, statistical, and information technology - were outlined by drawing on both domestic and foreign experience of implementing the standard. These issues may arise at the national level (if the standard is implemented domestically), at the international level (when the standard is applied by international organizations), and at the national-international level (if information is exchanged between national statistical data providers and international organizations). General issues arise at the regulatory level and are associated with establishing the boundaries of responsibility of counterpart organizations at all three levels of interaction, as well as with increasing the capacity to apply the SDMX standard. Issues of a statistical nature are most often encountered due to the sharing of large amounts of data and metadata related to various thematic areas of statistics; there should be a unified structure for data and metadata generation and transmission. As information sharing develops, challenges and issues arise that are associated with continuous monitoring and expansion of SDMX code lists. At the same time, there is a lack of a universal data structure at the international level and, as a result, it is difficult to understand and apply at the national level the existing data structures developed by international organizations. Information technology challenges are related to creating an IT infrastructure for data and metadata sharing using the SDMX standard. The IT infrastructure (depending on the participant's status) includes the following elements: tools for the receiving organizations, tools for the sending organizations, and the infrastructure for IT professionals. For each of the outlined issues, the authors formulated practical recommendations based on the complexity principle as applied to the implementation of the international SDMX standard for the exchange of data and metadata.
16

Firdaus Ahmad Fadzil, Ahmad, Zaaba Ahmad, Noor Elaiza Abd Khalid, and Shafaf Ibrahim. "Retinal Fundus Image Blood Vessels Segmentation via Object-Oriented Metadata Structures." International Journal of Engineering & Technology 7, no. 4.33 (December 9, 2018): 110. http://dx.doi.org/10.14419/ijet.v7i4.33.23511.

Abstract:
The retinal fundus image is a crucial tool for ophthalmologists to diagnose eye-related diseases. These images provide visual information on the interior layers of the retina, with structures such as the optic disc, optic cup, blood vessels and macula that can assist an ophthalmologist in determining the health of an eye. Segmentation of blood vessels in fundus images is one of the most fundamental phases in detecting diseases such as diabetic retinopathy. However, the ambiguity of the retinal structures in fundus images makes it challenging for researchers to segment the blood vessels. Extensive pre-processing and training on the images are necessary for precise segmentation, which is intricate and laborious. This paper proposes the implementation of object-oriented metadata (OOM) structures for each pixel in retinal fundus images. These structures comprise additional metadata beyond the conventional red, green, and blue data for each pixel within the images. The segmentation of blood vessels in the retinal fundus images is performed by considering this additional metadata, which captures the location, color spaces, and neighboring pixels of each individual pixel. The results show that accurate segmentation of retinal fundus blood vessels can be achieved by employing a straightforward thresholding method on the OOM structures, without extensive image pre-processing or data training.
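
A schematic sketch of the per-pixel metadata idea: each pixel record carries location, colour and neighbourhood information, and a simple threshold rule operates on those records. The feature set, window size and threshold are invented for illustration and are not the authors' exact OOM design.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PixelMeta:
    row: int
    col: int
    rgb: tuple               # original colour values
    green: float             # green channel, commonly used for vessel contrast
    neighbourhood_mean: float

def build_metadata(img: np.ndarray) -> list:
    """Attach location, colour and 3x3-neighbourhood metadata to every pixel."""
    padded = np.pad(img[:, :, 1].astype(float), 1, mode="edge")
    metas = []
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r + 3, c:c + 3]
            metas.append(PixelMeta(r, c, tuple(img[r, c]),
                                   float(img[r, c, 1]), float(window.mean())))
    return metas

def segment(metas, offset=15.0):
    """Mark a pixel as vessel when it is darker than its neighbourhood (illustrative rule)."""
    return {(m.row, m.col) for m in metas if m.green < m.neighbourhood_mean - offset}

rng = np.random.default_rng(0)
toy = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)   # stand-in image
print(len(segment(build_metadata(toy))))
```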
17

Albalawi, Yahya, Nikola S. Nikolov, and Jim Buckley. "Trustworthy Health-Related Tweets on Social Media in Saudi Arabia: Tweet Metadata Analysis." Journal of Medical Internet Research 21, no. 10 (October 8, 2019): e14731. http://dx.doi.org/10.2196/14731.

Abstract:
Background: Social media platforms play a vital role in the dissemination of health information. However, evidence suggests that a high proportion of Twitter posts (ie, tweets) are not necessarily accurate, and many studies suggest that tweets do not need to be accurate, or at least evidence based, to receive traction. This is a dangerous combination in the sphere of health information. Objective: The first objective of this study is to examine health-related tweets originating from Saudi Arabia in terms of their accuracy. The second objective is to find factors that relate to the accuracy and dissemination of these tweets, thereby enabling the identification of ways to enhance the dissemination of accurate tweets. The initial findings from this study and methodological improvements will then be employed in a larger-scale study that will address these issues in more detail. Methods: A health lexicon was used to extract health-related tweets using the Twitter application programming interface and the results were further filtered manually. A total of 300 tweets were each labeled by two medical doctors; the doctors agreed that 109 tweets were either accurate or inaccurate. Other measures were taken from these tweets’ metadata to see if there was any relationship between the measures and either the accuracy or the dissemination of the tweets. The entire range of this metadata was analyzed using Python, version 3.6.5 (Python Software Foundation), to answer the research questions posed. Results: A total of 34 out of 109 tweets (31.2%) in the dataset used in this study were classified as untrustworthy health information. These came mainly from users with a non-health care background and social media accounts that had no corresponding physical (ie, organization) manifestation. Unsurprisingly, we found that traditionally trusted health sources were more likely to tweet accurate health information than other users. Likewise, these provisional results suggest that tweets posted in the morning are more trustworthy than tweets posted at night, possibly corresponding to official and casual posts, respectively. Our results also suggest that the crowd was quite good at identifying trustworthy information sources, as evidenced by the number of times a tweet’s author was tagged as favorited by the community. Conclusions: The results indicate some initially surprising factors that might correlate with the accuracy of tweets and their dissemination. For example, the time a tweet was posted correlated with its accuracy, which may reflect a difference between professional (ie, morning) and hobbyist (ie, evening) tweets. More surprisingly, tweets containing a kashida—a decorative element in Arabic writing used to justify the text within lines—were more likely to be disseminated through retweets. These findings will be further assessed using data analysis techniques on a much larger dataset in future work.
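
The kind of metadata features analysed (posting hour, presence of a kashida, engagement counts) can be derived from tweet JSON as in the sketch below; the field names follow the Twitter API v1.1 payload, while the example records themselves are invented.

```python
from datetime import datetime

KASHIDA = "\u0640"   # Arabic tatweel, the decorative elongation character

tweets = [  # invented records in the shape of Twitter API v1.1 JSON
    {"created_at": "Mon Oct 07 08:15:00 +0000 2019",
     "full_text": "نصيحة صحية ـــ اشرب الماء", "favorite_count": 12, "retweet_count": 4},
    {"created_at": "Mon Oct 07 23:40:00 +0000 2019",
     "full_text": "علاج سريع للزكام", "favorite_count": 1, "retweet_count": 0},
]

def features(tweet):
    posted = datetime.strptime(tweet["created_at"], "%a %b %d %H:%M:%S %z %Y")
    return {
        "hour": posted.hour,                           # morning vs. night posting
        "has_kashida": KASHIDA in tweet["full_text"],  # decorative elongation present?
        "favorites": tweet["favorite_count"],
        "retweets": tweet["retweet_count"],
    }

for t in tweets:
    print(features(t))
```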
18

Bhat, Talapady. "Rule and Root-based Metadata-Ecosystem for Structural Bioinformatics & Facebook." Acta Crystallographica Section A Foundations and Advances 70, a1 (August 5, 2014): C496. http://dx.doi.org/10.1107/s2053273314095035.

Abstract:
Despite widespread efforts to develop flexible formats such as PDB, mmCIF, and CIF to store and exchange data, the lack of best-practice metadata poses major challenges. Readily adoptable methods with demonstrated usability across multiple solutions to create on-demand metadata are critical for the effective archiving and exchange of data in a user-centric fashion. It is important that there exists a metadata ecosystem where the metadata of all structural and biological research evolve synchronously. Previously we described (Chem-BLAST, http://xpdb.nist.gov/chemblast/pdb.pl) a new 'root'-based concept used in language development (Latin & Sanskrit) to simplify the selection or creation of metadata terms for millions of chemical structures from the PDB and PubChem. Subsequently we extended it to text-based data on cell-image data (BMC, doi:10.1186/1471-2105-12-487). Here we describe a further extension of this concept by creating roots and rules to define an ecosystem for composing new or modifying existing metadata with demonstrated interoperability. A major focus of the rules is to ensure that the metadata terms are self-explaining (intuitive), highly reused to describe many experiments, and usable in a federated environment to construct new use cases. We illustrate the use of this concept to compose semantic terminology for a wide range of disciplines ranging from materials science to biology. Examples of the use of such metadata to create demonstrated solutions to describe cell-image data will also be presented. I will present ideas and examples to foster discussion on a metadata architecture that (a) is independent of formats, (b) is better suited for a federated environment, and (c) could be used readily to build components such as Resource Description Framework (RDF) and Web services for the Semantic Web.
19

Zhdanov, Michael S., Seong Kon Lee, and Ken Yoshioka. "Integral equation method for 3D modeling of electromagnetic fields in complex structures with inhomogeneous background conductivity." GEOPHYSICS 71, no. 6 (November 2006): G333—G345. http://dx.doi.org/10.1190/1.2358403.

Abstract:
We present a new formulation of the integral equation (IE) method for three-dimensional (3D) electromagnetic (EM) modeling in complex structures with inhomogeneous background conductivity (IBC). This method overcomes the standard limitation of the conventional IE method related to the use of a horizontally layered background only. The new 3D IE EM modeling method still employs the Green’s functions for a horizontally layered 1D model. However, the new method allows us to use an inhomogeneous background with the IE method. We also introduce an approach for accuracy control of the IBC IE method. This new approach provides us with the ability to improve the accuracy of computations by applying the IBC technique iteratively. This approach seems to be extremely useful in computing EM data for multiple geologic models with some common geoelectrical features, like terrain, bathymetry, or other known structures. It may find wide application in an inverse problem solution, where we have to keep some known geologic structures unchanged during the iterative inversion. The method was carefully tested for modeling the EM field for complex structures with a known variable background conductivity. The effectiveness of this approach is illustrated by modeling marine controlled-source electromagnetic (MCSEM) data in the area of Gemini Prospect, Gulf of Mexico.
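
For orientation, the conventional integral-equation formulation that the method generalizes expresses the total electric field as the background field plus a scattered field produced by the anomalous conductivity (a sketch in standard notation only; here G_E denotes the electric Green's tensor of the layered reference model, D the anomalous domain, and sigma_b the background conductivity, which the IBC variant allows to be inhomogeneous and iteratively corrected):

```latex
\mathbf{E}(\mathbf{r}) = \mathbf{E}^{b}(\mathbf{r})
  + \int_{D} \hat{\mathbf{G}}_{E}\left(\mathbf{r} \mid \mathbf{r}'\right)
    \, \Delta\sigma(\mathbf{r}') \, \mathbf{E}(\mathbf{r}') \, dv',
\qquad
\Delta\sigma(\mathbf{r}') = \sigma(\mathbf{r}') - \sigma_{b}(\mathbf{r}').
```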
20

Schoenenwald, Alexander, Simon Kern, Josef Viehhauser, and Johannes Schildgen. "Collecting and visualizing data lineage of Spark jobs." Datenbank-Spektrum 21, no. 3 (October 4, 2021): 179–89. http://dx.doi.org/10.1007/s13222-021-00387-7.

Abstract:
Metadata management constitutes a key prerequisite for enterprises as they engage in data analytics and governance. Today, however, the context of data is often only manually documented by subject matter experts, and lacks completeness and reliability due to the complex nature of data pipelines. Thus, collecting data lineage—describing the origin, structure, and dependencies of data—in an automated fashion increases quality of provided metadata and reduces manual effort, making it critical for the development and operation of data pipelines. In our practice report, we propose an end-to-end solution that digests lineage via (Py‑)Spark execution plans. We build upon the open-source component Spline, allowing us to reliably consume lineage metadata and identify interdependencies. We map the digested data into an expandable data model, enabling us to extract graph structures for both coarse- and fine-grained data lineage. Lastly, our solution visualizes the extracted data lineage via a modern web app, and integrates with BMW Group’s soon-to-be open-sourced Cloud Data Hub.
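
The execution plans that lineage collectors such as Spline digest can be inspected directly from PySpark, as in the minimal sketch below (invented toy data); the paper's solution consumes these plans programmatically rather than printing them.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lineage-demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "DE", 20.0), (2, "DE", 35.0), (3, "FR", 12.5)],
    ["order_id", "country", "amount"],
)
revenue = orders.groupBy("country").agg(F.sum("amount").alias("revenue"))

# The (Py)Spark execution plan records the origin and dependencies of the
# output columns - the raw material for coarse- and fine-grained lineage.
revenue.explain(extended=True)
```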
21

Atan, Rodziah, and Nur Adila Azram. "A Framework for Halal Knowledge Metadata Representations." Applied Mechanics and Materials 892 (June 2019): 8–15. http://dx.doi.org/10.4028/www.scientific.net/amm.892.8.

Abstract:
End users and consumers in the halal industry face difficulties in finding verified halal information. This occurs because information is stored in silos at every point of activity in every process chain, employing different structures and models, which creates an issue of information verification. Integration of multiple information systems generally aims at combining selected systems so that information can be easily retrieved and managed by users. A five-component metadata representation development methodology is proposed in this paper so that the systems form a unified new whole and give users the illusion of interacting with one single information system; data can therefore be represented using the same abstraction principles (a unified global data model and unified semantics) without any physical restructuring.
22

Andritsos, Periklis, and Patrick Keilty. "Level-Wise Exploration of Linked and Big Data Guided by Controlled Vocabularies and Folksonomies." Advances in Classification Research Online 24, no. 1 (January 9, 2014): 1. http://dx.doi.org/10.7152/acro.v24i1.14670.

Abstract:
This paper proposes a level-wise exploration of linked and big data guided by controlled vocabularies and folksonomies. We leverage techniques from both Reconstructability Analysis and cataloging and classification research to provide solutions that will structure and store large amounts of metadata, identify links between data, and explore data structures to produce models that will facilitate effective information retrieval.
23

Warin, Thierry. "Global Research on Coronaviruses: Metadata-Based Analysis for Public Health Policies." JMIR Medical Informatics 9, no. 11 (November 30, 2021): e31510. http://dx.doi.org/10.2196/31510.

Abstract:
Background: Within the context of the COVID-19 pandemic, this paper suggests a data science strategy for analyzing global research on coronaviruses. The application of reproducible research principles founded on text-as-data information, open science, the dissemination of scientific data, and easy access to scientific production may aid public health in the fight against the virus. Objective: The primary goal of this paper was to use global research on coronaviruses to identify critical elements that can help inform public health policy decisions. We present a data science framework to assist policy makers in implementing cutting-edge data science techniques for the purpose of developing evidence-based public health policies. Methods: We used the EpiBibR (epidemiology-based bibliography for R) package to gain access to coronavirus research documents worldwide (N=121,231) and their associated metadata. To analyze these data, we first employed a theoretical framework to group the findings into three categories: conceptual, intellectual, and social. Second, we mapped the results of our analysis in these three dimensions using machine learning techniques (ie, natural language processing) and social network analysis. Results: Our findings, firstly, were methodological in nature. They demonstrated the potential for the proposed data science framework to be applied to public health policies. Additionally, our findings indicated that the United States and China were the primary contributors to global coronavirus research during the study period. They also demonstrated that India and Europe were significant contributors, albeit in a secondary position. University collaborations in this domain were strong between the United States, Canada, and the United Kingdom, confirming the country-level findings. Conclusions: Our findings argue for a data-driven approach to public health policy, particularly when efficient and relevant research is required. Text mining techniques can assist policy makers in calculating evidence-based indices and informing their decision-making process regarding specific actions necessary for effective health responses.
24

KELLY, PAUL H. J., and OLAV BECKMANN. "GENERATIVE AND ADAPTIVE METHODS IN PERFORMANCE PROGRAMMING." Parallel Processing Letters 15, no. 03 (September 2005): 239–55. http://dx.doi.org/10.1142/s0129626405002192.

Abstract:
Performance programming is characterized by the need to structure software components to exploit the context of use. Relevant context includes the target processor architecture, the available resources (number of processors, network capacity), prevailing resource contention, the values and shapes of input and intermediate data structures, the schedule and distribution of input data delivery, and the way the results are to be used. This paper concerns adapting to dynamic context: adaptive algorithms, malleable and migrating tasks, and application structures based on dynamic component composition. Adaptive computations use metadata associated with software components — performance models, dependence information, data size and shape. Computation itself is interwoven with planning and optimizing the computation process, using this metadata. This reflective nature motivates metaprogramming techniques. We present a research agenda aimed at developing a modelling framework which allows us to characterize both computation and dynamic adaptation in a way that allows systematic optimization.
25

Nayak, Stuti, Amrapali Zaveri, Pedro Hernandez Serrano, and Michel Dumontier. "Experience: Automated Prediction of Experimental Metadata from Scientific Publications." Journal of Data and Information Quality 13, no. 4 (December 31, 2021): 1–11. http://dx.doi.org/10.1145/3451219.

Abstract:
While there exists an abundance of open biomedical data, the lack of high-quality metadata makes it challenging for others to find relevant datasets and to reuse them for another purpose. In particular, metadata are useful for understanding the nature and provenance of the data. A common approach to improving the quality of metadata relies on expensive human curation, which is time-consuming and prone to error. Towards improving the quality of metadata, we use scientific publications to automatically predict metadata key:value pairs. For prediction, we use a convolutional neural network (CNN) and a bidirectional long short-term memory network (BiLSTM). We focus our attention on the NCBI Disease Corpus, which is used for training the CNN and BiLSTM. We perform two different kinds of experiments with these two architectures: (1) we predict disease names by using their unique IDs in the MeSH ontology and (2) we use the tree structure of the MeSH ontology to move up the hierarchy of these disease terms, which reduces the number of labels. We also apply various multi-label classification techniques in the above-mentioned experiments. We find that in both cases the CNN achieves the best results in predicting the superclasses for diseases, with an accuracy of 83%.
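
A minimal Keras sketch of a BiLSTM text classifier of the kind compared in the paper; vocabulary size, sequence length, label count and layer sizes are invented, and the training data and preprocessing are omitted.

```python
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, N_CLASSES = 20_000, 200, 12      # invented sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),                 # token-id sequences
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),  # one class per MeSH label
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```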
26

Chapman, John. "A conversation about linked data in the library and publishing ecosystem." Information Services & Use 40, no. 3 (November 10, 2020): 177–79. http://dx.doi.org/10.3233/isu-200087.

Abstract:
During the inaugural 2020 NISO+ conference, the “Ask the Experts about… Linked Data” panel included discussion of the transition of library metadata from legacy, record-based models to linked data structures. Panelists John Chapman (OCLC, Inc.) and Philip Schreur (Stanford University) were the speakers; NISO Board of Directors member Mary Sauer-Games (OCLC, Inc.) was the facilitator. The event was an open-ended conversation, with topics driven by questions and comments from the audience.
27

Rasmussen, Karsten Boye. "Metadata is key - the most important data after data." IASSIST Quarterly 42, no. 2 (July 18, 2018): 1. http://dx.doi.org/10.29173/iq922.

Abstract:
Welcome to the second issue of volume 42 of the IASSIST Quarterly (IQ 42:2, 2018). The IASSIST Quarterly has had several papers on many different aspects of the Data Documentation Initiative - for a long time better known by its acronym DDI, without any further explanation. DDI is a brand. The IASSIST Quarterly has also included special issues of collections of papers concerning DDI. Among staff at data archives and data libraries, as well as the users of these facilities, I think we can agree that it is the data that comes first. However, fundamental to all uses of data is the documentation describing the data, without which the data are useless. Therefore, it comes as no surprise that the IASSIST Quarterly is devoted partly to the presentation of papers related to documentation. The question of documentation or data resembles the question of the chicken or the egg. Don't mistake the keys for your car. The metadata and the data belong together and should not be separated. DDI now is a standard, but as with other standards it continues to evolve. The argument about why standards are good comes to mind: 'The nice thing about standards is that you have so many to choose from!'. DDI is the de facto standard for most social science data at data archives and university data libraries. The first paper demonstrates a way to tackle the heterogeneous character of the usage of the DDI. The approach is able to support collaborative questionnaire development as well as export in several formats including the metadata as DDI. The second paper shows how an institutionalized and more general metadata standard - in this case the Belgian Encoded Archival Description (EAD) - is supported by a developed crosswalk from DDI to EAD. However, IQ 42:2 is not a DDI special issue, and the third paper presents an open-source research data management platform called Dendro and a laboratory notebook called LabTablet without mentioning DDI. However, the paper certainly does mention metadata - it is the key to all data. The winner of the paper competition of the IASSIST 2017 conference is presented in this issue. 'Flexible DDI Storage' is authored by Oliver Hopt, Claus-Peter Klas, Alexander Mühlbauer, all affiliated with GESIS - the Leibniz-Institute for the Social Sciences in Germany. The authors argue that the current usage of DDI is heterogeneous and that this results in complex database models for each developed application. The paper shows a new binding of DDI to applications that works independently of most version changes and interpretative differences, thus avoiding continuous reimplementation. The work is based upon their developed DDI-FlatDB approach, which they showed at the European DDI conferences in 2015 and 2016, and which is also described in the paper. Furthermore, a web-based questionnaire editor and application supports large DDI structures and collaborative questionnaire development as well as production of structured metadata for survey institutes and data archives. The paper describes the questionnaire workflow from the start to the export of questionnaire, DDI XML, and SPSS. The development is continuing and it will be published as open source. The second paper is also focused on DDI, now in relation to a new data archive. 'Elaborating a Crosswalk Between Data Documentation Initiative (DDI) and Encoded Archival Description (EAD) for an Emerging Data Archive Service Provider' is by Benjamin Peuch who is a researcher at the State Archives of Belgium. 
It is expected that the future Belgian data archive will be part of the State Archives, and because DDI is the most widespread metadata standard in the social sciences, the State Archives have developed a DDI-to-EAD crosswalk in order to re-use their EAD infrastructure. The paper shows the conceptual differences between DDI and EAD - both XML based - and how these can be reconciled or avoided for the purpose of a data archive for the social sciences. The author also foresees a fruitful collaboration between traditional archivists and social scientists. The third paper is by a group of scholars connected to the Informatics Engineering Department of University of Porto and the INESC TEC in Portugal. Cristina Ribeiro, João Rocha da Silva, João Aguiar Castro, Ricardo Carvalho Amorim, João Correia Lopes, and Gabriel David are the authors of 'Research Data Management Tools and Workflows: Experimental Work at the University of Porto'. The authors start with the statement that 'Research datasets include all kinds of objects, from web pages to sensor data, and originate in every domain'. The task is to make these data visible, described, preserved, and searchable. The focus is on data preparation, dataset organization and metadata creation. Some groups were proposed a developed open-source research data management platform called Dendro and a laboratory notebook called LabTablet, while other groups that demanded a domain-specific approach had special developed models and applications. All development and metadata modelling have in sight the metadata dissemination. Submissions of papers for the IASSIST Quarterly are always very welcome. We welcome input from IASSIST conferences or other conferences and workshops, from local presentations or papers especially written for the IQ. When you are preparing such a presentation, give a thought to turning your one-time presentation into a lasting contribution. Doing that after the event also gives you the opportunity of improving your work after feedback. We encourage you to login or create an author login to https://www.iassistquarterly.com (our Open Journal System application). We permit authors 'deep links' into the IQ as well as deposition of the paper in your local repository. Chairing a conference session with the purpose of aggregating and integrating papers for a special issue IQ is also much appreciated as the information reaches many more people than the limited number of session participants and will be readily available on the IASSIST Quarterly website at https://www.iassistquarterly.com. Authors are very welcome to take a look at the instructions and layout: https://www.iassistquarterly.com/index.php/iassist/about/submissions Authors can also contact me directly via e-mail: kbr@sam.sdu.dk. Should you be interested in compiling a special issue for the IQ as guest editor(s) I will also be delighted to hear from you. Karsten Boye Rasmussen - June, 2018
28

Russell, Pamela H., and Debashis Ghosh. "Radtools: R utilities for convenient extraction of medical image metadata." F1000Research 7 (January 25, 2019): 1976. http://dx.doi.org/10.12688/f1000research.17139.2.

Abstract:
The radiology community has adopted several widely used standards for medical image files, including the popular DICOM (Digital Imaging and Communication in Medicine) and NIfTI (Neuroimaging Informatics Technology Initiative) standards. These file formats include image intensities as well as potentially extensive metadata. The NIfTI standard specifies a particular set of header fields describing the image and minimal information about the scan. DICOM headers can include any of >4,000 available metadata attributes spanning a variety of topics. NIfTI files contain all slices for an image series, while DICOM files capture single slices and image series are typically organized into a directory. Each DICOM file contains metadata for the image series as well as the individual image slice. The programming environment R is popular for data analysis due to its free and open code, active ecosystem of tools and users, and excellent system of contributed packages. Currently, many published radiological image analyses are performed with proprietary software or custom unpublished scripts. However, R is increasing in popularity in this area due to several packages for processing and analysis of image files. While these R packages handle image import and processing, no existing package makes image metadata conveniently accessible. Extracting image metadata, combining across slices, and converting to useful formats can be prohibitively cumbersome, especially for DICOM files. We present radtools, an R package for convenient extraction of medical image metadata. Radtools provides simple functions to explore and return metadata in familiar R data structures. For convenience, radtools also includes wrappers of existing tools for extraction of pixel data and viewing of image slices. The package is freely available under the MIT license at https://github.com/pamelarussell/radtools and is easily installable from the Comprehensive R Archive Network (https://cran.r-project.org/package=radtools).
APA, Harvard, Vancouver, ISO, and other styles
30

Russell, Pamela H., and Debashis Ghosh. "Radtools: R utilities for convenient extraction of medical image metadata." F1000Research 7 (March 25, 2019): 1976. http://dx.doi.org/10.12688/f1000research.17139.3.

Full text
Abstract:
The radiology community has adopted several widely used standards for medical image files, including the popular DICOM (Digital Imaging and Communication in Medicine) and NIfTI (Neuroimaging Informatics Technology Initiative) standards. These file formats include image intensities as well as potentially extensive metadata. The NIfTI standard specifies a particular set of header fields describing the image and minimal information about the scan. DICOM headers can include any of >4,000 available metadata attributes spanning a variety of topics. NIfTI files contain all slices for an image series, while DICOM files capture single slices and image series are typically organized into a directory. Each DICOM file contains metadata for the image series as well as the individual image slice. The programming environment R is popular for data analysis due to its free and open code, active ecosystem of tools and users, and excellent system of contributed packages. Currently, many published radiological image analyses are performed with proprietary software or custom unpublished scripts. However, R is increasing in popularity in this area due to several packages for processing and analysis of image files. While these R packages handle image import and processing, no existing package makes image metadata conveniently accessible. Extracting image metadata, combining across slices, and converting to useful formats can be prohibitively cumbersome, especially for DICOM files. We present radtools, an R package for convenient extraction of medical image metadata. Radtools provides simple functions to explore and return metadata in familiar R data structures. For convenience, radtools also includes wrappers of existing tools for extraction of pixel data and viewing of image slices. The package is freely available under the MIT license at GitHub and is easily installable from the Comprehensive R Archive Network.
APA, Harvard, Vancouver, ISO, and other styles
31

Alter, George. "Reflections on the Intermediate Data Structure (IDS)." Historical Life Course Studies 10 (March 31, 2021): 71–75. http://dx.doi.org/10.51964/hlcs9570.

Full text
Abstract:
The Intermediate Data Structure (IDS) encourages sharing historical life course data by storing data in a common format. To encompass the complexity of life histories, IDS relies on data structures that are unfamiliar to most social scientists. This article examines four features of IDS that make it flexible and expandable: the Entity-Attribute-Value model, the relational database model, embedded metadata, and the Chronicle file. I also consider IDS from the perspective of current discussions about sharing data across scientific domains. We can find parallels to IDS in other fields that may lead to future innovations.
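The Entity-Attribute-Value (EAV) model mentioned above stores each observation as its own row rather than as a column in a wide table, which is what makes the structure expandable without schema changes. A minimal, hypothetical sketch using Python's built-in sqlite3 (illustrative only, not the IDS table definitions):

    # Minimal Entity-Attribute-Value sketch (illustrative only, not the IDS schema).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""
        CREATE TABLE individual_attribute (
            individual_id INTEGER,   -- the entity
            attribute     TEXT,      -- e.g. 'BIRTH_DATE', 'OCCUPATION'
            value         TEXT,      -- stored as text; typed in real systems
            date_start    TEXT       -- life-course events carry their own dates
        )
    """)
    rows = [
        (1, "BIRTH_DATE", "1852-03-04", "1852-03-04"),
        (1, "OCCUPATION", "weaver", "1870-01-01"),
        (1, "OCCUPATION", "farmer", "1878-06-15"),
    ]
    con.executemany("INSERT INTO individual_attribute VALUES (?, ?, ?, ?)", rows)
    # A new attribute type needs no schema change: it is just another row.
    for row in con.execute("SELECT * FROM individual_attribute WHERE attribute = 'OCCUPATION'"):
        print(row)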
APA, Harvard, Vancouver, ISO, and other styles
32

Raybould, Matthew I. J., Claire Marks, Alan P. Lewis, Jiye Shi, Alexander Bujotzek, Bruck Taddese, and Charlotte M. Deane. "Thera-SAbDab: the Therapeutic Structural Antibody Database." Nucleic Acids Research 48, no. D1 (September 26, 2019): D383—D388. http://dx.doi.org/10.1093/nar/gkz827.

Full text
Abstract:
The Therapeutic Structural Antibody Database (Thera-SAbDab; http://opig.stats.ox.ac.uk/webapps/therasabdab) tracks all antibody- and nanobody-related therapeutics recognized by the World Health Organisation (WHO), and identifies any corresponding structures in the Structural Antibody Database (SAbDab) with near-exact or exact variable domain sequence matches. Thera-SAbDab is synchronized with SAbDab to update weekly, reflecting new Protein Data Bank entries and the availability of new sequence data published by the WHO. Each therapeutic summary page lists structural coverage (with links to the appropriate SAbDab entries), alignments showing where any near-matches deviate in sequence, and accompanying metadata, such as intended target and investigated conditions. Thera-SAbDab can be queried by therapeutic name, by a combination of metadata, or by variable domain sequence - returning all therapeutics that are within a specified sequence identity over a specified region of the query. The sequences of all therapeutics listed in Thera-SAbDab (461 unique molecules, as of 5 August 2019) are downloadable as a single file with accompanying metadata.
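As a hedged illustration of the sequence-based querying described above (not the Thera-SAbDab implementation, and with invented sequences and cutoff), a percent-identity filter over pre-aligned variable-domain sequences could look like this:

    # Rough percent-identity filter over pre-aligned sequences (illustrative only).
    def percent_identity(seq_a: str, seq_b: str) -> float:
        """Identity over the aligned region; assumes equal-length, pre-aligned input."""
        matches = sum(a == b for a, b in zip(seq_a, seq_b))
        return 100.0 * matches / min(len(seq_a), len(seq_b))

    database = {"therapeutic_X": "EVQLVESGGGLVQPGG"}   # invented entry
    query = "EVQLVQSGGGLVQPGG"                          # invented query sequence

    hits = {}
    for name, seq in database.items():
        pid = percent_identity(query, seq)
        if pid >= 90.0:                                 # user-specified identity cutoff
            hits[name] = pid
    print(hits)

The real resource aligns and compares over user-specified regions of the variable domain; the sketch only shows the shape of the identity filter.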
APA, Harvard, Vancouver, ISO, and other styles
33

Prieto, Mario, Helena Deus, Anita de Waard, Erik Schultes, Beatriz García-Jiménez, and Mark D. Wilkinson. "Data-driven classification of the certainty of scholarly assertions." PeerJ 8 (April 21, 2020): e8871. http://dx.doi.org/10.7717/peerj.8871.

Full text
Abstract:
The grammatical structures scholars use to express their assertions are intended to convey various degrees of certainty or speculation. Prior studies have suggested a variety of categorization systems for scholarly certainty; however, these have not been objectively tested for their validity, particularly with respect to representing the interpretation by the reader, rather than the intention of the author. In this study, we use a series of questionnaires to determine how researchers classify various scholarly assertions, using three distinct certainty classification systems. We find that there are three distinct categories of certainty along a spectrum from high to low. We show that these categories can be detected in an automated manner, using a machine learning model, with a cross-validation accuracy of 89.2% relative to an author-annotated corpus, and 82.2% accuracy against a publicly-annotated corpus. This finding provides an opportunity for contextual metadata related to certainty to be captured as a part of text-mining pipelines, which currently miss these subtle linguistic cues. We provide an exemplar machine-accessible representation—a Nanopublication—where certainty category is embedded as metadata in a formal, ontology-based manner within text-mined scholarly assertions.
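The classifier reported above is the authors' own model trained on their annotated corpora; as a hedged sketch of the general technique only (text classification into certainty categories), with invented sentences and labels, a scikit-learn pipeline might look like this:

    # Generic three-way certainty classifier sketch (not the authors' model or data).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    sentences = [
        "X binds Y.",                               # high certainty (invented)
        "X appears to modulate Y.",                 # medium certainty (invented)
        "X might be involved in Y regulation.",     # low certainty (invented)
        "These data demonstrate that X causes Y.",  # high
        "Our results suggest X influences Y.",      # medium
        "It is possible that X affects Y.",         # low
    ]
    labels = ["high", "medium", "low", "high", "medium", "low"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(sentences, labels)
    print(clf.predict(["X is likely to interact with Y."]))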
APA, Harvard, Vancouver, ISO, and other styles
34

Ribeiro, Cristina, João Rocha da Silva, João Aguiar Castro, Ricardo Carvalho Amorim, João Correia Lopes, and Gabriel David. "Research Data Management Tools and Workflows: Experimental Work at the University of Porto." IASSIST Quarterly 42, no. 2 (July 18, 2018): 1–16. http://dx.doi.org/10.29173/iq925.

Full text
Abstract:
Research datasets include all kinds of objects, from web pages to sensor data, and originate in every domain. Concerns with data generated in large projects and well-funded research areas are centered on their exploration and analysis. For data in the long tail, the main issues are still how to get data visible, satisfactorily described, preserved, and searchable. Our work aims to promote data publication in research institutions, considering that researchers are the core stakeholders and need straightforward workflows, and that multi-disciplinary tools can be designed and adapted to specific areas with a reasonable effort. For small groups with interesting datasets but not much time or funding for data curation, we have to focus on engaging researchers in the process of preparing data for publication, while providing them with measurable outputs. In larger groups, solutions have to be customized to satisfy the requirements of more specific research contexts. We describe our experience at the University of Porto in two lines of enquiry. For the work with long-tail groups we propose general-purpose tools for data description and the interface to multi-disciplinary data repositories. For areas with larger projects and more specific requirements, namely wind infrastructure, sensor data from concrete structures and marine data, we define specialized workflows. In both cases, we present a preliminary evaluation of results and an estimate of the kind of effort required to keep the proposed infrastructures running. The tools available to researchers can be decisive for their commitment. We focus on data preparation, namely on dataset organization and metadata creation. For groups in the long tail, we propose Dendro, an open-source research data management platform, and explore automatic metadata creation with LabTablet, an electronic laboratory notebook. For groups demanding a domain-specific approach, our analysis has resulted in the development of models and applications to organize the data and support some of their use cases. Overall, we have adopted ontologies for metadata modeling, keeping in sight metadata dissemination as Linked Open Data.
APA, Harvard, Vancouver, ISO, and other styles
35

Gouripeddi, Ram, Andrew Miller, Karen Eilbeck, Katherine Sward, and Julio C. Facelli. "3399 Systematically Integrating Microbiomes and Exposomes for Translational Research." Journal of Clinical and Translational Science 3, s1 (March 2019): 29–30. http://dx.doi.org/10.1017/cts.2019.71.

Full text
Abstract:
OBJECTIVES/SPECIFIC AIMS: Characterize microbiome metadata describing specimens collected, genomic pipelines and microbiome results, and incorporate them into a data integration platform for enabling harmonization, integration and assimilation of microbial genomics with exposures as spatiotemporal events. METHODS/STUDY POPULATION: We followed methods similar to those utilized in previous efforts to characterize and develop metadata models for describing microbiome metadata. Due to the heterogeneity in microbiome and exposome data, we aligned them along a conceptual representation of different data used in translational research; microbiomes being biospecimen-derived, and exposomes being a combination of sensor measurements, surveys and computationally modelled data. We performed a review of literature describing microbiome data, metadata, and semantics [4–15], along with existing datasets [16] and developed an initial metadata model. We reviewed the model with microbiome domain experts for its accuracy and completeness, and with translational researchers for its utility in different studies, and iteratively refined it. We then incorporated the logical model into OpenFurther’s metadata repository MDR [17,18] for harmonization of different microbiome datasets, as well as integration and assimilation of microbiome-exposome events utilizing the UPIE. RESULTS/ANTICIPATED RESULTS: Our model for describing the microbiome currently includes three domains: (1) the specimen collected for analysis, (2) the microbial genomics pipelines, and (3) details of the microbiome genomics. For (1), we utilized a biospecimen data model that harmonizes the data structures of caTissue, OpenSpecimen and other commonly available specimen management platforms. (3) includes details about the organisms, isolate, host specifics, sequencing methodology, genomic sequences and annotations, microbiome phenotype, genomic data and storage, genomic copies and associated time stamps. We then incorporated this logical model into the MDR as assets and associations that UPIE utilizes to harmonize different microbiome datasets, followed by integration and assimilation of microbiome-exposome events. Details of (2) are ongoing. DISCUSSION/SIGNIFICANCE OF IMPACT: The role of the microbiome and co-influences from environmental exposures in the etio-pathology of various pulmonary conditions is not well understood [19–24]. This metadata model for the microbiome provides a systematic approach for integrating microbial genomics with sensor-based environmental and physiological data, and clinical data that are present in varying spatial and temporal granularities and require complex methods for integration, assimilation and analysis. Incorporation of this microbiome model will advance the performance of sensor-based exposure studies of the UPIE to support novel research paradigms that will improve our understanding of the role of the microbiome in promoting and preventing airway inflammation by performing a range of hypothesis-driven microbiome-exposome pediatric asthma studies across the translational spectrum.
APA, Harvard, Vancouver, ISO, and other styles
36

Voronin, A. V., G. N. Maltsev, and M. Yu Sokhen. "Data visualization quality in a geographic information system using golden ratio properties." Information and Control Systems, no. 6 (December 18, 2018): 46–57. http://dx.doi.org/10.31799/1684-8853-2018-6-46-57.

Full text
Abstract:
Introduction: Data visualization quality is important for the work of a geographic information system operator, determining the conditions under which he or she makes decisions concerning the displayed data. Visual perception patterns associated with the golden ratio properties allow us to formulate a criterion for data visualization quality which would characterize the possibilities of the operator’s complex perception of the video data displayed on a control device screen in the form of an electronic map. Purpose: Substantiation of a data visualization quality criterion for geoinformation systems using the golden ratio properties, and the study of the conditions for providing good visualization quality for geodata and metadata on a video control device screen in accordance with the proposed criterion. Methods: A formal definition of the data visualization quality criterion in geoinformation systems using the coefficient of the screen area information coverage as an index whose optimal value corresponds to the mathematical definition of the golden ratio; and the study of the properties of this criterion. Results: Based on the conducted analysis of visual perception of video data and golden ratio properties during data visualization, a criterion is proposed for data visualization quality, which uses the golden ratio properties and characterizes the possibilities of complex perception of video data in an electronic map form by a geographic information system operator. Iteration algorithms for choosing the video data display scale are developed, based on the visualization quality criterion and related to the golden ratio properties. These are the basic algorithm used for each geodata layer represented on the electronic map, and an algorithm of successive analysis of various layers of the displayed geodata. The choice of a video data display scale in accordance with the developed algorithms can be preliminarily carried out by the system operator using the parameters of standard electronic maps and geodata/metadata sets typical for the current applied problem. We have studied how the scale of the geodata and metadata displayed on an electronic map affects their visualization quality on screens of various sizes. For the considered standard volumes of displayed geodata and metadata, the best visualization quality was achieved when they were displayed on a standard computer monitor, as opposed to a portable notebook or visualization screen. Practical relevance: The proposed criterion and the recommendations for choosing a screen size for the video monitoring device or the structures of the displayed geo-objects and metadata can be used in the design of geoinformation systems, or for preliminary choice of the displayed data structure by a geoinformation system operator.
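The criterion treats the coefficient of screen-area information coverage as best when it matches the golden-ratio value. The following sketch of the scale-selection step rests on our own simplifying assumption about how coverage depends on scale and uses invented numbers, so it illustrates the shape of the iteration rather than the paper's algorithm:

    # Hedged sketch: pick the display scale whose information coverage of the screen
    # is closest to the golden-ratio value (assumptions ours, not the paper's method).
    GOLDEN = (5 ** 0.5 - 1) / 2          # ~0.618

    def coverage(scale: float, object_area_at_unit_scale: float, screen_area: float) -> float:
        """Assumed model: displayed object area grows with the square of the scale."""
        return min(1.0, object_area_at_unit_scale * scale ** 2 / screen_area)

    def choose_scale(scales, object_area_at_unit_scale, screen_area):
        return min(scales,
                   key=lambda s: abs(coverage(s, object_area_at_unit_scale, screen_area) - GOLDEN))

    candidate_scales = [0.5, 1.0, 2.0, 4.0, 8.0]          # invented candidate map scales
    print(choose_scale(candidate_scales, 2.0e4, 2.0e6))   # invented areas in pixels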
APA, Harvard, Vancouver, ISO, and other styles
37

Paolo, Tagliolato, Fugazza Cristiano, Oggioni Alessandro, and Carrara Paola. "Semantic Profiles for Easing SensorML Description: Review and Proposal." ISPRS International Journal of Geo-Information 8, no. 8 (July 31, 2019): 340. http://dx.doi.org/10.3390/ijgi8080340.

Full text
Abstract:
The adoption of Sensor Web Enablement (SWE) practices by sensor maintainers is hampered by the inherent complexity of the Sensor Model Language (SensorML), its high expressiveness, and the scarce availability of editing tools. To overcome these issues, the Earth Observation (EO) community often resorts to SensorML profiles narrowing the range of admitted metadata structures and value ranges. Unfortunately, profiles frequently fall short of providing usable editing tools and comprehensive validation criteria, particularly given the difficulty of checking value ranges in the multi-tenanted domain of the Web of Data. In this paper, we provide an updated review of current practices, techniques, and tools for editing SensorML from the perspective of profile support and introduce our solution for effective profile definition. Besides allowing for the formalization of a broad range of constraints that jointly define a metadata profile, our proposal closes the gap between profile definition and actual editing of the corresponding metadata by allowing for ex-ante validation of the metadata that is produced. On this basis, we suggest the notion of Semantic Web SensorML profiles, characterized by a new family of constraints involving Semantic Web sources. We also discuss the implementation of SensorML profiles with our tool and pinpoint the benefits with respect to the existing ex-post validation facilities provided by schema definition languages.
APA, Harvard, Vancouver, ISO, and other styles
38

Horský, Vladimír, Veronika Bendová, Dominik Toušek, Jaroslav Koča, and Radka Svobodová. "ValTrendsDB: bringing Protein Data Bank validation information closer to the user." Bioinformatics 35, no. 24 (July 2, 2019): 5389–90. http://dx.doi.org/10.1093/bioinformatics/btz532.

Full text
Abstract:
Summary: Structures in PDB tend to contain errors. This is a very serious issue for authors that rely on such potentially problematic data. The community of structural biologists develops validation methods as countermeasures, which are also included in the PDB deposition system. But how are these validation efforts influencing the structure quality of subsequently published data? Which quality aspects are improving, and which remain problematic? We developed ValTrendsDB, a database that provides the results of an extensive exploratory analysis of relationships between quality criteria, size and metadata of biomacromolecules. Key input data are sourced from PDB. The discovered trends are presented via precomputed information-rich plots. ValTrendsDB also supports the visualization of a set of user-defined structures on top of general quality trends. Therefore, ValTrendsDB enables users to see the quality of structures published by selected author, laboratory or journal, discover quality outliers, etc. ValTrendsDB is updated weekly. Availability and implementation: Freely accessible at http://ncbr.muni.cz/ValTrendsDB. The web interface was implemented in JavaScript. The database was implemented in C++. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
39

ZHANG, RUICHONG. "RE-EXAMINATION OF THICKNESS-RESONANCE- FREQUENCY FORMULA FOR STRUCTURAL INTEGRITY APPRAISAL AND DAMAGE DIAGNOSIS." Advances in Adaptive Data Analysis 04, no. 01n02 (April 2012): 1250011. http://dx.doi.org/10.1142/s1793536912500112.

Full text
Abstract:
This study examines the rationale of the correction factor β in the thickness resonance frequency formula, fundamental to thickness estimation in the impact-echo (IE) approach in nondestructive testing (NDT) for integrity appraisal and damage diagnosis of infrastructure systems. It shows the role of the factor in the formula from the perspective of testing equipment setup, wave propagation, and resonant frequency identification, much broader than what was first introduced empirically for shape correction of a structure under test. Emphasis is laid on the wave-based interpretation of the resonant frequency, typically obtained from traditional fast Fourier transform (FFT) data analysis of IE recordings. Since the FFT data analysis provides an average, not the true, characteristic of the resonant frequency shown in the nonstationary IE recordings, it typically distorts the thickness estimation from the formula if the correction factor is not used. An adaptive time-frequency data analysis termed Hilbert–Huang transform (HHT) is then introduced to overcome the shortcoming of FFT analysis in identifying the resonant frequency from noise-added IE recordings. With FFT and HHT analyses of five data sets of sample IE recordings from sound and damaged concrete structures and comparison with referenced ones, this study reveals that the proposed IE approach with HHT data analysis not only eliminates the subjective use of the correction factor in the formula, but also greatly improves the accuracy of the thickness estimation.
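For readers unfamiliar with the formula under re-examination: in impact-echo testing the member thickness is commonly estimated as T = β·Cp/(2f), where Cp is the P-wave velocity, f the thickness resonance frequency, and β the empirical correction factor (about 0.96 for plate-like members). A small worked example with invented measurement values:

    # Thickness from the impact-echo resonance formula T = beta * Cp / (2 * f).
    # The input values below are invented for illustration.
    def ie_thickness(cp_m_per_s: float, f_hz: float, beta: float = 0.96) -> float:
        return beta * cp_m_per_s / (2.0 * f_hz)

    cp = 4000.0      # P-wave velocity in concrete, m/s (assumed)
    f = 10_400.0     # thickness resonance frequency, Hz (assumed, e.g. from FFT or HHT)
    print(f"estimated thickness: {ie_thickness(cp, f):.3f} m")        # ~0.185 m
    # Omitting beta (i.e. beta = 1.0) overestimates the thickness by roughly 4%.
    print(f"without correction:  {ie_thickness(cp, f, beta=1.0):.3f} m")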
APA, Harvard, Vancouver, ISO, and other styles
40

Hong, Seong Uk, Yong Taeg Lee, Seung Hun Kim, and J. H. Na. "Estimation of Thickness of Concrete Slab Members Using Impact Echo Method." Key Engineering Materials 605 (April 2014): 139–42. http://dx.doi.org/10.4028/www.scientific.net/kem.605.139.

Full text
Abstract:
Recently, interest in the maintenance and repair of existing concrete structures has increased, and it is typical to use non-destructive testing methods such as the rebound hardness test or the ultrasonic pulse velocity method to carry out maintenance and repair of structures efficiently. Many non-destructive testing methods are being used in practice, for example at construction sites, but verification for site applications is quite inadequate. Thus, this study intends to evaluate the applicability of the Impact Echo Method, one of the non-destructive testing methods based on stress waves. A total of four specimens were planned and produced. The thickness of the concrete slab members was estimated using IE (OLSENs Freedom Data PC with Win.TFS Software Version 2.5.2). The thicknesses of the concrete members estimated by IE were found to be 178 mm for specimen IE-1, 197 mm for IE-2, 191 mm for IE-3, and 263 mm for IE-4, and the error rate was found to be 4.22%~18.67% (average 9.6%), showing that the estimates are in relatively good agreement. In this study, the experiments were executed with the objective of estimating the thickness of concrete slab members using the Impact Echo Method. Through this study, the applicability of thickness estimation for concrete slab members using the impact echo method could be confirmed.
APA, Harvard, Vancouver, ISO, and other styles
41

Otter, Martin. "Signal Tables: An Extensible Exchange Format for Simulation Data." Electronics 11, no. 18 (September 6, 2022): 2811. http://dx.doi.org/10.3390/electronics11182811.

Full text
Abstract:
This article introduces Signal Tables as a format to exchange data associated with simulations based on dictionaries and multi-dimensional arrays. Typically, simulation results, as well as model parameters, reference signals, table-based input signals, measurement data, look-up tables, etc., can be represented by a Signal Table. Applications can extend the format to add additional data and metadata/attributes, for example, as needed for a credible simulation process. The format follows a logical view based on a few data structures that can be directly mapped to data structures available in programming languages such as Julia, Python, and Matlab. These data structures can be conveniently used for pre- and post-processing in these languages. A Signal Table can be stored on file by mapping the logical view to available textual or binary persistent file formats, for example, JSON, HDF5, BSON, and MessagePack. A subset of a Signal Table can be imported in traditional tables, for example, in Excel, CSV, pandas, or DataFrames.jl, by flattening multi-dimensional arrays and not storing parameters. The format has been developed and evaluated with the Open Source Julia packages SignalTables.jl and Modia.jl.
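The logical view described above (dictionaries mapping signal names to multi-dimensional arrays plus attributes) maps directly onto common in-memory data structures. A hedged Python sketch of that view and its serialization to a textual format (our wording and field names, not the SignalTables.jl API):

    # Hedged Python sketch of a "signal table" as dict + arrays (not SignalTables.jl).
    import json
    import numpy as np

    signal_table = {
        "_attributes": {"experiment": "demo", "date": "2022-09-06"},   # invented metadata
        "time":  {"values": np.linspace(0.0, 1.0, 5), "unit": "s"},
        "motor": {"values": np.stack([np.sin(np.linspace(0.0, 1.0, 5)),
                                      np.cos(np.linspace(0.0, 1.0, 5))], axis=1),
                  "unit": "rad", "info": "two-column signal (illustrative)"},
    }

    def to_json(table: dict) -> str:
        """Map the logical view to a persistent textual format (JSON)."""
        def encode(obj):
            return obj.tolist() if isinstance(obj, np.ndarray) else obj
        return json.dumps(table, default=encode, indent=2)

    print(to_json(signal_table))

The same logical view could equally be mapped to HDF5, BSON, or MessagePack, which is the point the abstract makes about separating the logical format from its persistence.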
APA, Harvard, Vancouver, ISO, and other styles
42

Fischer, Colin, Monika Sester, and Steffen Schön. "Spatio-Temporal Research Data Infrastructure in the Context of Autonomous Driving." ISPRS International Journal of Geo-Information 9, no. 11 (October 25, 2020): 626. http://dx.doi.org/10.3390/ijgi9110626.

Full text
Abstract:
In this paper, we present an implementation of a research data management system that features structured data storage for spatio-temporal experimental data (environmental perception and navigation in the framework of autonomous driving), including metadata management and interfaces for visualization and parallel processing. The demands of the research environment, the design of the system, the organization of the data storage, and computational hardware as well as structures and processes related to data collection, preparation, annotation, and storage are described in detail. We provide examples for the handling of datasets, explaining the required data preparation steps for data storage as well as benefits when using the data in the context of scientific tasks.
APA, Harvard, Vancouver, ISO, and other styles
43

Hong, Seong-Yong, and Sung-Joon Lee. "An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment." Modelling and Simulation in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/651428.

Full text
Abstract:
Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, internet sites such as e-commerce sites and shopping mall sites deal with a large amount of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology for a social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML-Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into the forms suitable for the various user environments, such as a web browser or a mobile device, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to the improvement of the retrieval correctness and the user’s satisfaction with semantic-based web digital image information retrieval.
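As a hedged, generic illustration of the XSLT step mentioned above (adapting XML metadata to a particular client environment), the following Python/lxml sketch applies a tiny invented stylesheet to a tiny invented catalog-image record; it is not the authors' schema or platform:

    # Generic XSLT transformation sketch with lxml (invented metadata and stylesheet).
    from lxml import etree

    metadata = etree.XML("""
    <image id="p001">
      <title>Sample product</title>
      <object label="shoe"/>
    </image>
    """)

    stylesheet = etree.XML("""
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/image">
        <html><body>
          <h1><xsl:value-of select="title"/></h1>
          <p>Objects: <xsl:value-of select="count(object)"/></p>
        </body></html>
      </xsl:template>
    </xsl:stylesheet>
    """)

    transform = etree.XSLT(stylesheet)
    print(str(transform(metadata)))   # the same record could target mobile markup instead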
APA, Harvard, Vancouver, ISO, and other styles
44

Grabowski, Marek, Karol M. Langner, Marcin Cymborowski, Przemyslaw J. Porebski, Piotr Sroka, Heping Zheng, David R. Cooper, et al. "A public database of macromolecular diffraction experiments." Acta Crystallographica Section D Structural Biology 72, no. 11 (October 28, 2016): 1181–93. http://dx.doi.org/10.1107/s2059798316014716.

Full text
Abstract:
The low reproducibility of published experimental results in many scientific disciplines has recently garnered negative attention in scientific journals and the general media. Public transparency, including the availability of 'raw' experimental data, will help to address growing concerns regarding scientific integrity. Macromolecular X-ray crystallography has led the way in requiring the public dissemination of atomic coordinates and a wealth of experimental data, making the field one of the most reproducible in the biological sciences. However, there remains no mandate for public disclosure of the original diffraction data. The Integrated Resource for Reproducibility in Macromolecular Crystallography (IRRMC) has been developed to archive raw data from diffraction experiments and, equally importantly, to provide related metadata. Currently, the database of our resource contains data from 2920 macromolecular diffraction experiments (5767 data sets), accounting for around 3% of all depositions in the Protein Data Bank (PDB), with their corresponding partially curated metadata. IRRMC utilizes distributed storage implemented using a federated architecture of many independent storage servers, which provides both scalability and sustainability. The resource, which is accessible via the web portal at http://www.proteindiffraction.org, can be searched using various criteria. All data are available for unrestricted access and download. The resource serves as a proof of concept and demonstrates the feasibility of archiving raw diffraction data and associated metadata from X-ray crystallographic studies of biological macromolecules. The goal is to expand this resource and include data sets that failed to yield X-ray structures in order to facilitate collaborative efforts that will improve protein structure-determination methods and to ensure the availability of 'orphan' data left behind for various reasons by individual investigators and/or extinct structural genomics projects.
APA, Harvard, Vancouver, ISO, and other styles
45

Bernstein, Herbert J., Andreas Förster, Asmit Bhowmick, Aaron S. Brewster, Sandor Brockhauser, Luca Gelisio, David R. Hall, et al. "Gold Standard for macromolecular crystallography diffraction data." IUCrJ 7, no. 5 (July 10, 2020): 784–92. http://dx.doi.org/10.1107/s2052252520008672.

Full text
Abstract:
Macromolecular crystallography (MX) is the dominant means of determining the three-dimensional structures of biological macromolecules. Over the last few decades, most MX data have been collected at synchrotron beamlines using a large number of different detectors produced by various manufacturers and taking advantage of various protocols and goniometries. These data came in their own formats: sometimes proprietary, sometimes open. The associated metadata rarely reached the degree of completeness required for data management according to Findability, Accessibility, Interoperability and Reusability (FAIR) principles. Efforts to reuse old data by other investigators or even by the original investigators some time later were often frustrated. In the culmination of an effort dating back more than two decades, a large portion of the research community concerned with high data-rate macromolecular crystallography (HDRMX) has now agreed to an updated specification of data and metadata for diffraction images produced at synchrotron light sources and X-ray free-electron lasers (XFELs). This 'Gold Standard' will facilitate the processing of data sets independent of the facility at which they were collected and enable data archiving according to FAIR principles, with a particular focus on interoperability and reusability. This agreed standard builds on the NeXus/HDF5 NXmx application definition and the International Union of Crystallography (IUCr) imgCIF/CBF dictionary, and it is compatible with major data-processing programs and pipelines. Just as with the IUCr CBF/imgCIF standard from which it arose and to which it is tied, the NeXus/HDF5 NXmx Gold Standard application definition is intended to be applicable to all detectors used for crystallography, and all hardware and software developers in the field are encouraged to adopt and contribute to the standard.
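For orientation only, a hedged Python/h5py sketch of walking a NeXus/HDF5 file; the commented paths follow the general NXmx layout, but the exact group and dataset names in any given file are assumptions here, not guarantees of the standard:

    # Hedged sketch: walk a NeXus/HDF5 file and print datasets plus NX_class attributes.
    # Paths such as /entry/instrument are typical of NXmx but are assumptions here.
    import h5py

    def walk(name, obj):
        nx_class = obj.attrs.get("NX_class", b"")
        if isinstance(obj, h5py.Dataset):
            print(f"dataset {name}  shape={obj.shape}  dtype={obj.dtype}")
        else:
            print(f"group   {name}  NX_class={nx_class}")

    with h5py.File("example_nxmx.h5", "r") as f:   # hypothetical file name
        f.visititems(walk)
        # Typical (assumed) locations of image data and beam metadata:
        # f["/entry/data/data"], f["/entry/instrument/beam/incident_wavelength"]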
APA, Harvard, Vancouver, ISO, and other styles
46

Taufer, Michela, Trilce Estrada, and Travis Johnston. "A survey of algorithms for transforming molecular dynamics data into metadata for in situ analytics based on machine learning methods." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2166 (January 20, 2020): 20190063. http://dx.doi.org/10.1098/rsta.2019.0063.

Full text
Abstract:
This paper presents a survey of three algorithms to transform atomic-level molecular snapshots from molecular dynamics (MD) simulations into metadata representations that are suitable for in situ analytics based on machine learning methods. MD simulations studying the classical time evolution of a molecular system at atomic resolution are widely recognized in the fields of chemistry, material sciences, molecular biology and drug design; these simulations are one of the most common simulations on supercomputers. Next-generation supercomputers will have a dramatically higher performance than current systems, generating more data that needs to be analysed (e.g. in terms of number and length of MD trajectories). In the future, the coordination of data generation and analysis will no longer be able to rely on manual, centralized analysis traditionally performed after the simulation is completed or on current data representations that have been defined for traditional visualization tools. Powerful data preparation phases (i.e. phases in which the original raw data are transformed into concise and still meaningful representations) will need to precede data analysis phases. Here, we discuss three algorithms for transforming traditionally used molecular representations into concise and meaningful metadata representations. The transformations can be performed locally. The new metadata can be fed into machine learning methods for runtime in situ analysis of larger MD trajectories supported by high-performance computing. In this paper, we provide an overview of the three algorithms and their use for three different applications: protein–ligand docking in drug design; protein folding simulations; and protein engineering based on analytics of protein functions depending on proteins' three-dimensional structures. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
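As a hedged, generic example of the kind of data preparation surveyed here (not one of the three specific algorithms), an atomic snapshot given as coordinates can be reduced to a compact, alignment-free representation such as a flattened pairwise-distance matrix that a downstream machine learning method can consume:

    # Generic sketch: reduce an atomic snapshot to a compact distance-based feature vector.
    # This illustrates the idea of metadata extraction, not one of the surveyed algorithms.
    import numpy as np

    rng = np.random.default_rng(0)
    coords = rng.random((10, 3))                       # 10 atoms, xyz (invented snapshot)

    diff = coords[:, None, :] - coords[None, :, :]     # pairwise displacement vectors
    dist = np.linalg.norm(diff, axis=-1)               # 10 x 10 distance matrix
    iu = np.triu_indices(len(coords), k=1)             # upper triangle, no diagonal
    features = dist[iu]                                # 45-element metadata vector
    print(features.shape)                              # (45,)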
APA, Harvard, Vancouver, ISO, and other styles
47

Tureček, František, Libor Brabec, Tomáš Vondrák, Vladimír Hanuš, Josef Hájíček, and Zdeněk Havlas. "Sulfenic acids in the gas phase. Preparation, ionization energies and heats of formation of methane-, ethene-, and benzenesulfenic acid." Collection of Czechoslovak Chemical Communications 53, no. 9 (1988): 2140–58. http://dx.doi.org/10.1135/cccc19882140.

Full text
Abstract:
Methane-, ethene-, and ethynesulfenic acids were generated in the gas phase by flash-vacuum pyrolysis of the corresponding tert-butyl sulfoxides at 400 °C and 10^-4 Pa. Benzenesulfenic acid was prepared from phenyl 3-buten-1-yl sulfoxide at 350 °C and 10^-4 Pa. The sulfenic acids were characterized by mass spectrometry. Threshold ionization energies (IE) were measured as IE(CH3SOH) = 9·07 ± 0·03 eV, IE(CH2=CHSOH) = 8·70 ± 0·03 eV, IE(HCCSOH) = 8·86 ± 0·04 eV, and IE(C6H5SOH) = 8·45 ± 0·03 eV. Radical cations [CH3SOH].+, [CH2=CHSOH].+, and [HCCSOH].+ were generated by electron-impact-induced loss of propene from the corresponding propyl sulfoxides and their heats of formation were assessed by appearance energy measurements as 685, 824, and 927 kJ mol^-1, respectively. Heats of formation of the neutral sulfenic acids and the S-(O) (C), S-(O) (Cd), S-(O) (Ct) and S-(O) (CB) group equivalents were determined. The experimental data, supported by MNDO calculations, point to sulfenate-like structures (R-S-OH) for the sulfenic acids under study.
APA, Harvard, Vancouver, ISO, and other styles
48

Christie, Michael. "Aboriginal Knowledge Traditions in Digital Environments." Australian Journal of Indigenous Education 34 (2005): 61–66. http://dx.doi.org/10.1017/s1326011100003975.

Full text
Abstract:
AbstractAccording to Manovich (2001), the database and the narrative are natural enemies, each competing for the same territory of human culture. Aboriginal knowledge traditions depend upon narrative through storytelling and other shared performances. The database objectifies and commodifies distillations of such performances and absorbs them into data structures according to a priori assumptions of metadata; that is the data which describes the data to aid a search. In a conventional library for example, the metadata which helps you find a book may be title, author or topic. It is misleading and dangerous to say that these databases contain knowledge, because we lose sight of the embedded, situated, collaborative and performative nature of knowledge. For the assemblages of digital artefacts we find in an archive or database to be useful in the intergenerational transmission of living knowledge traditions, we need to rethink knowledge as performance and data as artefacts of prior knowledge production episodes. Through the metaphors of environment and journey we can explore ways to refigure the archive as a digital environment available as a resource to support the work of active, creative and collaborative knowledge production.
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Junwon, Jonghyun Park, Yeonjae Jung, and Jongsik Chun. "AMDB: a database of animal gut microbial communities with manually curated metadata." Nucleic Acids Research 50, no. D1 (November 8, 2021): D729—D735. http://dx.doi.org/10.1093/nar/gkab1009.

Full text
Abstract:
Variations in gut microbiota can be explained by animal host characteristics, including host phylogeny and diet. However, there are currently no databases that allow for easy exploration of the relationship between gut microbiota and diverse animal hosts. The Animal Microbiome Database (AMDB) is the first database to provide taxonomic profiles of the gut microbiota in various animal species. AMDB contains 2530 amplicon data from 34 projects with manually curated metadata. The total data represent 467 animal species and contain 10 478 bacterial taxa. This novel database provides information regarding gut microbiota structures and the distribution of gut bacteria in animals, with an easy-to-use interface. Interactive visualizations are also available, enabling effective investigation of the relationship between the gut microbiota and animal hosts. AMDB will contribute to a better understanding of the gut microbiota of animals. AMDB is publicly available without login requirements at http://leb.snu.ac.kr/amdb.
APA, Harvard, Vancouver, ISO, and other styles
50

Seylabi, Elnaz Esmaeilzadeh, Eva Agapaki, Dimitris Pitilakis, Scott Brandenberg, Jonathan P. Stewart, and Ertugrul Taciroglu. "Centrifuge Testing of Circular and Rectangular Embedded Structures with Base Excitations." Earthquake Spectra 35, no. 3 (August 2019): 1485–505. http://dx.doi.org/10.1193/110717eqs232dp.

Full text
Abstract:
We present data and metadata from a centrifuge testing program that was designed to investigate the seismic responses of buried circular and rectangular culverts. The specimen configurations were based on Caltrans Standard Plans, and the scope of research was to compare the experimental findings with the design method described in the NCHRP Report 611 as well as to formulate preliminary recommendations for Caltrans practice. A relatively flexible pipe and a stiff box-shaped specimen embedded in dense sand were tested in the centrifuge at the Center for Geotechnical Modeling at University of California, Davis and were subjected to a set of broadband and harmonic input motions. Responses were recorded in the soil and in the embedded structures using a dense array of instruments. Measured quantities included specimen accelerations, bending strains, and hoop strains; soil accelerations, shear-wave velocities, settlements, and lateral displacements; and accelerations of the centrifuge's shaking table. This data paper describes the tests and summarizes the generated data, which are archived at DesignSafe.ci.org (DOI: 10.17603/DS2XW9R) and are accessible through an interactive Jupyter notebook.
APA, Harvard, Vancouver, ISO, and other styles
