Journal articles on the topic 'Ecological Metadata Language EML'

Consult the top 32 journal articles for your research on the topic 'Ecological Metadata Language EML.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Fegraus, Eric H., Sandy Andelman, Matthew B. Jones, and Mark Schildhauer. "Maximizing the Value of Ecological Data with Structured Metadata: An Introduction to Ecological Metadata Language (EML) and Principles for Metadata Creation." Bulletin of the Ecological Society of America 86, no. 3 (July 2005): 158–68. http://dx.doi.org/10.1890/0012-9623(2005)86[158:mtvoed]2.0.co;2.

2

Gil, Inigo San, Wade Sheldon, Tom Schmidt, Mark Servilla, Raul Aguilar, Corinna Gries, Tanya Gray, et al. "Defining Linkages between the GSC and NSF's LTER Program: How the Ecological Metadata Language (EML) Relates to GCDML and Other Outcomes." OMICS: A Journal of Integrative Biology 12, no. 2 (June 2008): 151–56. http://dx.doi.org/10.1089/omi.2008.0015.

3

Sanchez, Fernanda Alves, Fernando Luiz Vechiato, and Silvana Aparecida Borsetti Gregorio Vidotti. "Encontrabilidade da Informação em Repositórios de Dados: uma análise do DataONE." Informação & Informação 24, no. 1 (March 6, 2019): 51. http://dx.doi.org/10.5433/1981-8920.2019v24n1p51.

Abstract:
Introduction: The importance of disseminating research data is increasingly debated by the scientific community, chiefly as a way to maximize the use and reuse of data produced by scientific research. Data repositories aim at the storage, organization, dissemination, preservation and findability of data, strengthening scientific communication and collaboration. Studies within Information Science, such as those on Information Findability, offer contributions to the design and implementation of digital information environments such as repositories. Objectives: Starting from this premise, the objective was to analyze the DataONE data repository through the lens of Information Findability. Methodology: The observation technique was used, supported by an evaluation instrument (a checklist) that allows information environments to be analyzed against findability attributes. Results: Positive points include the Metadata attribute, with a standard specific to the earth-science community, the Ecological Metadata Language (EML), supported by the Morpho software, and the Responsiveness attribute. A negative point is the absence of Accessibility resources. Conclusion: Overall, DataONE is an environment well suited to research, and it employs findability attributes that enhance the finding of information.
4

Gerstner, Eva-Maria, Yvonne Bachmann, Karen Hahn, Anne Mette Lykke, and Marco Schmidt. "The West African Data and Metadata Repository - a long-term data archive for ecological datasets from West Africa." Flora et Vegetatio Sudano-Sambesica 18 (December 16, 2016): 3–10. http://dx.doi.org/10.21248/fvss.18.28.

Abstract:
Although there is an increasing need for data in ecological studies, many datasets are still lost or insufficiently visible due to a lack of appropriate data archives. With the West African Data and Metadata Repository, we present a secure long-term archive for a data-poor region, allowing detailed documentation by metadata following the EML standard and giving data holders the opportunity to define levels of data access and conditions of use. This article gives an overview of its structure, functions and content. The repository is online at http://westafricandata.senckenberg.de.
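EML itself is an XML schema; purely as an illustration of what a minimal dataset record of the kind the repository stores might look like, here is a sketch using Python's standard library (element names follow the EML 2.x layout, but namespaces, the packageId and all example values are omitted or invented):

```python
# A minimal EML-style dataset record, for illustration only; real EML
# additionally requires namespaces, a packageId and schema validation.
import xml.etree.ElementTree as ET

eml = ET.Element("eml")
dataset = ET.SubElement(eml, "dataset")
ET.SubElement(dataset, "title").text = "Vegetation survey plots"  # invented
name = ET.SubElement(ET.SubElement(dataset, "creator"), "individualName")
ET.SubElement(name, "surName").text = "Schmidt"
# Conditions of use, as set by the data holder, go under intellectualRights.
rights = ET.SubElement(dataset, "intellectualRights")
ET.SubElement(rights, "para").text = "Registered users only; cite the data holder."
print(ET.tostring(eml, encoding="unicode"))
```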
5

Mena-Garcés, Elena, Elena García-Barriocanal, Miguel-Angel Sicilia, and Salvador Sánchez-Alonso. "Moving from dataset metadata to semantics in ecological research: a case in translating EML to OWL." Procedia Computer Science 4 (2011): 1622–30. http://dx.doi.org/10.1016/j.procs.2011.04.175.

6

Gil, Inigo San, Kristin Vanderbilt, and Steve A. Harrington. "Examples of ecological data synthesis driven by rich metadata, and practical guidelines to use the Ecological Metadata Language specification to this end." International Journal of Metadata, Semantics and Ontologies 6, no. 1 (2011): 46. http://dx.doi.org/10.1504/ijmso.2011.042489.

7

Yarmey, Lynn, and Karen S. Baker. "Towards Standardization: A Participatory Framework for Scientific Standard-Making." International Journal of Digital Curation 8, no. 1 (June 14, 2013): 157–72. http://dx.doi.org/10.2218/ijdc.v8i1.252.

Abstract:
In contemporary scientific research, standard-making and standardization are key processes for the sharing and reuse of data. The goals of this paper are twofold: 1) to stress that collaboration is crucial to standard-making, and 2) to urge recognition of metadata standardization as part of the scientific process. To achieve these goals, a participatory framework for developing and implementing scientific metadata standards is presented. We highlight the need for ongoing, open dialogue within and among research communities at multiple levels. Using the Long Term Ecological Research network adoption of the Ecological Metadata Language as a case example in the natural sciences, we illustrate how a participatory framework addresses the need for active coordination of the evolution of scientific metadata standards. The participatory framework is contrasted with a hierarchical framework to underscore how the development of scientific standards is a dynamic and continuing process. The roles played by ‘best practices’ and ‘working standards’ are identified in relation to the process of standardization.
8

Senier, Siobhan. "Dawnland Voices 2.0: Sovereignty and Sustainability Online." Publications of the Modern Language Association of America 131, no. 2 (March 2016): 392–400. http://dx.doi.org/10.1632/pmla.2016.131.2.392.

Abstract:
Indigenous communities are marrying ecological humanities and digital humanities in ways that productively expand the definition of both terms. On the ecological side, indigenous activism argues for the sustainability and interdependence of the natural and the human. In this, it challenges many of the same things that ecocriticism challenges—the supremacy or distinctiveness of the human, anthropocentric notions of time—though such activism predates ecocriticism quite a bit. Many traditional indigenous narratives assert close affinity, even identity, between a people and their river, for instance, or a people and their animals, or people and trees; they were figuring nonhuman agency long before Bruno Latour. On the DH side, indigenous people are engaging electronic media outside major DH structures and funding. These insurgent engagements challenge the very definition of DH as a field (with its predilection for large-scale archives, metadata, and open access) while also raising questions about the sustainability of the digital itself. Despite the implicit teleologies still assumed by many people—from oral to written to digital—indigenous ecological digital humanities (EcoDH) never present themselves as the end point or answer. Rather, they are part of a vast and diverse communicative ecosystem that includes petroglyphs, living oral traditions, newsletters, wampum, sci-fi novels, baskets, and language apps.
9

Pando, Francisco. "Comparison of species information TDWG standards from the point of view of the Plinian Core specification." Biodiversity Information Science and Standards 2 (May 17, 2018): e25869. http://dx.doi.org/10.3897/biss.2.25869.

Abstract:
Species-level information, as an important component of the biodiversity information landscape, is an area where some TDWG standards and activities coincide. Plinian Core (Plinian Core Task Group 2018) is a general specification that covers aspects such as species descriptions and nomenclature, as well as many others (legal, conservation, management, etc.). While the Plinian Core non-biological terms have no counterpart in the TDWG developments, some of its biological ones do, and those are the focus of this work. First, it must be noted that Plinian Core relies on some TDWG standards for specific facets of species information:
Darwin Core (Darwin Core maintenance group, Biodiversity Information Standards (TDWG) 2014): taxonConceptID, Hierarchy, MeasurementOrFact, ResourceRelationship.
Ecological Metadata Language (EML project members 2011): associatedParty, keywordSet, coverage, dataset.
Encyclopedia of Life Schema (EOL Team 2012): AncillaryData: DataObjectBase.
Global Invasive Species Network (GISIN 2008): origin, presence, persistence, distribution, harmful, modified, startValidDate, endValidDate, countryCode, stateProvince, county, localityName, language, citation, abundance...
Taxon Concept Schema, TCS (Taxonomic Names and Concepts interest group 2006): scientificName.
Given the direct dependency of Plinian Core on these terms, they do not pose any compatibility or interoperability problem. However, biological descriptions, especially structured ones, are the object of DELTA (Dallwitz 2006) and the Structured Descriptive Data (SDD) standard (Hagedorn et al. 2005), and are also covered by Plinian Core. This convergence presents overlaps, mismatches and nuances, the discussion of which is the core of this work. Using some species descriptions as a test case and transforming them between these standards (Plinian Core, DELTA, and SDD), the strengths and compatibility issues of these specifications are evaluated and discussed. Some operational aspects of Plinian Core in relation to GBIF's IPT (GBIF Secretariat 2016) and the INSPIRE directive (European Commission 2007) are also reviewed.
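To make the term borrowing concrete, here is a toy crosswalk in Python; the terms are taken from the lists above, but the mapping table and record are illustrative, not the Plinian Core specification:

```python
# Illustrative crosswalk: Plinian Core terms and the standard each is
# borrowed from, applied to a toy species record.
plinian_to_source = {
    "taxonConceptID": ("Darwin Core", "taxonConceptID"),
    "associatedParty": ("EML", "associatedParty"),
    "coverage": ("EML", "coverage"),
    "origin": ("GISIN", "origin"),
    "scientificName": ("TCS", "scientificName"),
}

record = {"scientificName": "Quercus ilex", "origin": "native"}
for term, value in record.items():
    standard, source_term = plinian_to_source.get(term, ("Plinian Core", term))
    print(f"{term} = {value!r} (borrowed from {standard}:{source_term})")
```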
10

Molik, David C., DeAndre Tomlinson, Shane Davitt, Eric L. Morgan, Matthew Sisk, Benjamin Roche, Natalie Meyers, and Michael E. Pfrender. "Combining natural language processing and metabarcoding to reveal pathogen-environment associations." PLOS Neglected Tropical Diseases 15, no. 4 (April 7, 2021): e0008755. http://dx.doi.org/10.1371/journal.pntd.0008755.

Abstract:
Cryptococcus neoformans is responsible for life-threatening infections that primarily affect immunocompromised individuals and has an estimated worldwide burden of 220,000 new cases each year—with 180,000 resulting deaths—mostly in sub-Saharan Africa. Surprisingly, little is known about the ecological niches occupied by C. neoformans in nature. To expand our understanding of the distribution and ecological associations of this pathogen we implement a Natural Language Processing approach to better describe the niche of C. neoformans. We use a Latent Dirichlet Allocation model to de novo topic model sets of metagenetic research articles written about varied subjects which either explicitly mention, inadvertently find, or fail to find C. neoformans. These articles are all linked to NCBI Sequence Read Archive datasets of 18S ribosomal RNA and/or Internal Transcribed Spacer gene-regions. The number of topics was determined based on the model coherence score, and articles were assigned to the created topics via a Machine Learning approach with a Random Forest algorithm. Our analysis provides support for a previously suggested linkage between C. neoformans and soils associated with decomposing wood. Our approach, using a search of single-locus metagenetic data, gathering papers connected to the datasets, de novo determination of topics, the number of topics, and assignment of articles to the topics, illustrates how such an analysis pipeline can harness large-scale datasets that are published/available but not necessarily fully analyzed, or whose metadata is not harmonized with other studies. Our approach can be applied to a variety of systems to assert potential evidence of environmental associations.
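The core steps of the pipeline (de novo LDA topic modeling with a coherence-guided topic count, then article-to-topic assignment with a Random Forest) can be sketched as follows; this is a toy reconstruction with placeholder documents and labels, not the authors' code:

```python
# Toy sketch: topic-model tokenized article texts, score coherence, and
# use the topic-probability vectors as features for a Random Forest.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel
from sklearn.ensemble import RandomForestClassifier

docs = [["soil", "decomposing", "wood", "fungi"],
        ["marine", "sediment", "plankton", "survey"],
        ["soil", "wood", "cryptococcus", "isolation"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                           coherence="c_v").get_coherence()  # guides topic count

X = [[p for _, p in lda.get_document_topics(bow, minimum_probability=0.0)]
     for bow in corpus]
y = [0, 1, 0]  # placeholder topic assignments for training
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(coherence, clf.predict(X))
```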
11

Moura, Ana Maria de Carvalho, Fabio Porto, Vania Vidal, Regis Pires Magalhães, Macedo Maia, Maira Poltosi, and Daniele Palazzi. "A semantic integration approach to publish and retrieve ecological data." International Journal of Web Information Systems 11, no. 1 (April 20, 2015): 87–119. http://dx.doi.org/10.1108/ijwis-08-2014-0028.

Abstract:
Purpose – The purpose of this paper is to present a four-level architecture that aims at integrating, publishing and retrieving ecological data making use of linked data (LD). It allows scientists to explore taxonomical, spatial and temporal ecological information, access trophic chain relations between species and complement this information with other data sets published on the Web of data. The development of ecological information repositories is a crucial step to organize and catalog natural reserves. However, these repositories present some challenges regarding their effectiveness in providing a shared and global view of biodiversity data, such as data heterogeneity, lack of metadata standardization and data interoperability. LD rose as an interesting technology to solve some of these challenges. Design/methodology/approach – Ecological data, which is produced and collected from different media resources, is stored in distinct relational databases and published as RDF triples, using a relational-to-RDF (Resource Description Framework) mapping language. An application ontology reflects a global view of these datasets and shares the same vocabulary with them. Scientists specify their data views by selecting their objects of interest in a friendly way. A data view is internally represented as an algebraic scientific workflow that applies data transformation operations to integrate data sources. Findings – Despite years of investment, data integration continues to offer scientists challenges in obtaining consolidated data views of a large number of heterogeneous scientific data sources. The semantic integration approach presented in this paper simplifies this process both in terms of mappings and query answering through data views. Social implications – This work provides knowledge about the Guanabara Bay ecosystem and serves as a source of answers on the anthropic and climatic impacts on the bay ecosystem. Additionally, this work will enable evaluating the adequacy of actions that are being taken to clean up Guanabara Bay, with regard to the marine ecology. Originality/value – Mapping complexity is traded for the process of generating the exported ontology. The approach reduces the problem of integration to that of mappings between homogeneous ontologies. As a byproduct, data views are easily rewritten into queries over data sources. The architecture is general and, although applied to the ecological context, it can be extended to other domains.
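As a flavor of the relational-to-RDF publication step, here is a hedged sketch with rdflib; the vocabulary, the table rows and the trophic feedsOn property are invented for illustration, not the paper's mapping language:

```python
# Publish toy relational species rows as RDF triples, including a
# trophic-chain relation between species.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ecology/")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

rows = [{"id": "sp1", "name": "Mugil liza", "feedsOn": "sp2"},
        {"id": "sp2", "name": "Ulva lactuca", "feedsOn": None}]

for row in rows:
    s = EX[row["id"]]
    g.add((s, RDF.type, EX.Species))
    g.add((s, EX.scientificName, Literal(row["name"])))
    if row["feedsOn"]:
        g.add((s, EX.feedsOn, EX[row["feedsOn"]]))

print(g.serialize(format="turtle"))
```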
12

Poyatos, Rafael, Víctor Granda, Víctor Flo, Mark A. Adams, Balázs Adorján, David Aguadé, Marcos P. M. Aidar, et al. "Global transpiration data from sap flow measurements: the SAPFLUXNET database." Earth System Science Data 13, no. 6 (June 14, 2021): 2607–49. http://dx.doi.org/10.5194/essd-13-2607-2021.

Abstract:
Plant transpiration links physiological responses of vegetation to water supply and demand with hydrological, energy, and carbon budgets at the land–atmosphere interface. However, despite being the main land evaporative flux at the global scale, transpiration and its response to environmental drivers are currently not well constrained by observations. Here we introduce the first global compilation of whole-plant transpiration data from sap flow measurements (SAPFLUXNET, https://sapfluxnet.creaf.cat/, last access: 8 June 2021). We harmonized and quality-controlled individual datasets supplied by contributors worldwide in a semi-automatic data workflow implemented in the R programming language. Datasets include sub-daily time series of sap flow and hydrometeorological drivers for one or more growing seasons, as well as metadata on the stand characteristics, plant attributes, and technical details of the measurements. SAPFLUXNET contains 202 globally distributed datasets with sap flow time series for 2714 plants, mostly trees, of 174 species. SAPFLUXNET has a broad bioclimatic coverage, with woodland/shrubland and temperate forest biomes especially well represented (80 % of the datasets). The measurements cover a wide variety of stand structural characteristics and plant sizes. The datasets encompass the period between 1995 and 2018, with 50 % of the datasets being at least 3 years long. Accompanying radiation and vapour pressure deficit data are available for most of the datasets, while on-site soil water content is available for 56 % of the datasets. Many datasets contain data for species that make up 90 % or more of the total stand basal area, allowing the estimation of stand transpiration in diverse ecological settings. SAPFLUXNET adds to existing plant trait datasets, ecosystem flux networks, and remote sensing products to help increase our understanding of plant water use, plant responses to drought, and ecohydrological processes. SAPFLUXNET version 0.1.5 is freely available from the Zenodo repository (https://doi.org/10.5281/zenodo.3971689; Poyatos et al., 2020a). The “sapfluxnetr” R package – designed to access, visualize, and process SAPFLUXNET data – is available from CRAN.
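The actual harmonization and quality control live in the R-based SAPFLUXNET tool chain; purely to illustrate the kind of semi-automatic checks such a workflow applies, here is a toy range check in Python with invented thresholds:

```python
# Flag physically implausible values in a sub-daily sap flow series.
import pandas as pd

ts = pd.DataFrame({
    "timestamp": pd.date_range("2010-07-01", periods=4, freq="30min"),
    "sap_flow_cm3_h": [120.0, 135.5, -4.0, 5000.0],
})
ts["flag"] = "OK"
ts.loc[ts["sap_flow_cm3_h"] < 0, "flag"] = "RANGE_LOW"      # negative flow
ts.loc[ts["sap_flow_cm3_h"] > 2000, "flag"] = "RANGE_HIGH"  # above plausibility
print(ts)
```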
13

Batechko, N. G., O. V. Shelimanova, and S. V. Shostak. "Mathematical support of energy efficiency and comfortable conditions in higher education institutions of Ukraine." Energy and automation, no. 3(49) (June 11, 2020): 26–33. http://dx.doi.org/10.31548/energiya2020.03.026.

Abstract:
The relevance of increasing energy efficiency in the buildings of domestic higher educational institutions is determined not only by the need to save energy resources, but also by the fact that such “green campuses” can become the basis for the formation of an ecological and energy-efficient lifestyle for today's youth. An integrated approach to the selection of energy-saving measures in a building requires models of thermal comfort which take into account the intensity of human activity, the type of clothing, the speed of air movement in the room, relative humidity and the like. The purpose of this study is to improve the efficiency of the energy system of campus buildings by taking into account the interaction of energy sources, the heating system, the thermal properties of the enclosing structures and the standardized parameters of the indoor microclimate. Along with an integrated approach to the problem under study, taking into account the necessary comprehensive analysis of energy-saving measures in the system "heat source - enclosing structures - external parameters", attention should be paid to the indoor climate and the problem of meeting human needs for thermal comfort. With the help of a miniature temperature datalogger RC-1B, round-the-clock temperature monitoring was carried out in some rooms of educational building No. 8 of the National University of Life and Environmental Sciences of Ukraine during the heating season. The analysis of the experimental data shows that, despite the improvement of the thermal storage properties of the building's outer envelope after thermal modernization work, room temperatures do not always correspond to the norm. Thus, when implementing energy-saving measures, it is impossible to violate the conditions of comfort in rooms, in which thermal equilibrium is maintained in the human body and there is no tension in its thermoregulation system.
14

Sananikone, Julien, Elie Arnaud, Olivier Norvez, Sophie Pamerlon, Anne-Sophie Archambeau, and Yvan Le Bras. "From Raw Data to Data Standards through Quality Assessment and Semantic Annotation." Biodiversity Information Science and Standards 6 (August 3, 2022). http://dx.doi.org/10.3897/biss.6.91205.

Abstract:
Data quality and documentation are at the core of the FAIR (Findable, Accessible, Interoperable, Reusable) principles (Wilkinson et al. 2016). In the biodiversity and, more broadly, ecology domains, solutions complementary to the well-known data standards (notably Darwin Core (Wieczorek et al. 2012)) are emerging from the intensive use of the EML (Ecological Metadata Language (Michener et al. 1997)) metadata standard. These notably capitalize on semantic annotation from EML metadata documents that describe data attributes, and on FAIR quality assessment as proposed by the DataONE (Data Observation Network for Earth) network. Here we propose to present this point of view by orchestrating the production of rich EML metadata (with attribute descriptions and links to terminological resource terms) from raw data files and, from there, the generation of FAIR metrics for direct assessment of FAIRness and the creation of data standards like Darwin Core. Using EML, we can describe each data attribute (e.g., name, type, unit) and associate each attribute with one or several terms coming from terminological resources. Using the Darwin Core vocabulary as a terminological resource, we can thus associate, in the metadata file, original attribute terms with the corresponding Darwin Core ones. Then, the data and their metadata files can be processed to automatically create the necessary files for a Darwin Core Archive. By acting at the metadata level, associated with accessible raw data files, we can map raw attribute names to standardized ones and thus, potentially, create data standards.
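The attribute-level annotation idea can be sketched as follows: a raw-attribute-to-Darwin-Core mapping, of the kind recorded in the EML attribute descriptions, drives an automatic renaming of columns into a standard occurrence table. The mapping, data and file names are illustrative:

```python
# Rename raw columns to their annotated Darwin Core terms to produce the
# occurrence table of a Darwin Core Archive.
import csv, io

annotations = {               # raw attribute -> Darwin Core term
    "species": "scientificName",
    "lat": "decimalLatitude",
    "lon": "decimalLongitude",
}

raw = "species,lat,lon\nMugil liza,47.5,-2.7\n"
reader = csv.DictReader(io.StringIO(raw))
with open("occurrence.txt", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=list(annotations.values()))
    writer.writeheader()
    for row in reader:
        writer.writerow({annotations[k]: v for k, v in row.items()})
```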
15

Gries, Corinna, Stace Beaulieu, Renée Brown, Gastil Gastil-Buhl, Sarah Elmendorf, Hsun-Yi Hsieh, Li Kui, Greg Maurer, and John Porter. "Change in Pictures: Creating best practices in archiving ecological imagery for reuse." Biodiversity Information Science and Standards 4 (September 30, 2020). http://dx.doi.org/10.3897/biss.4.59082.

Abstract:
The research data repository of the Environmental Data Initiative (EDI) is building on over 30 years of data curation research and experience in the National Science Foundation-funded US Long-Term Ecological Research (LTER) Network. It provides mature functionalities, well established workflows, and now publishes all ‘long-tail’ environmental data. High quality scientific metadata are enforced through automatic checks against community developed rules and the Ecological Metadata Language (EML) standard. Although the EDI repository is far along in making its data findable, accessible, interoperable, and reusable (FAIR), representatives from EDI and the LTER are developing best practices for the edge cases in environmental data publishing. One of these is the vast amount of imagery taken in the context of ecological research, ranging from wildlife camera traps to plankton imaging systems to aerial photography. Many images are used in biodiversity research for community analyses (e.g., individual counts, species cover, biovolume, productivity), while others are taken to study animal behavior and landscape-level change. Some examples from the LTER Network include: using photos of a heron colony to measure provisioning rates for chicks (Clarkson and Erwin 2018) or identifying changes in plant cover and functional type through time (Peters et al. 2020). Multi-spectral images are employed to identify prairie species. Underwater photo quads are used to monitor changes in benthic biodiversity (Edmunds 2015). Sosik et al. (2020) used a continuous Imaging FlowCytobot to identify and measure phyto- and microzooplankton. Cameras at the McMurdo Dry Valleys assess snow and ice cover on Antarctic lakes, allowing estimation of primary production (Myers 2019). It has been standard practice to publish numerical data extracted from images in EDI; however, the supporting imagery generally has not been made publicly available. Our goal in developing best practices for documenting and archiving these images is for them to be discovered and re-used. Our examples demonstrate several issues. The research questions, and hence the image subjects, are variable. Images frequently come in logical sets of time series. The size of such sets can be large, and only some images may be contributed to a dedicated specialized repository. Finally, these images are taken in a larger monitoring context where many other environmental data are collected at the same time and location. Currently, a typical approach to publishing image data in EDI is a package containing compressed (ZIP or tar) files with the images, a directory manifest with additional image-specific metadata, and a package-level EML metadata file. Images in the compressed archive may be organized within directories, with filenames corresponding to treatments, locations, time periods, individuals, or other grouping attributes. Additionally, the directory manifest table has columns for each attribute. Package-level metadata include standard coverage elements (e.g., date, time, location) and sampling methods. This approach of archiving logical ‘sets’ of images reduces the effort of providing metadata for each image when most information would be repeated, but at the expense of not making every image individually searchable. The latter may be overcome if the provided manifest contains standard metadata that would allow searching and automatic integration with other images.
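A minimal sketch of that packaging pattern, assuming a hypothetical folder of camera-trap images whose filenames encode a site and a timestamp; the manifest columns stand in for the image-specific metadata attributes:

```python
# Build a manifest table for a logical set of images, then compress the
# set and manifest together for archiving alongside package-level EML.
import csv, zipfile
from pathlib import Path

images = sorted(Path("camera_trap").glob("*.jpg"))  # hypothetical folder
with open("manifest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "site", "timestamp"])
    for img in images:
        site, timestamp = img.stem.split("_", 1)  # e.g. plotA_20200101T1200.jpg
        writer.writerow([img.name, site, timestamp])

with zipfile.ZipFile("images.zip", "w") as z:
    z.write("manifest.csv")
    for img in images:
        z.write(img)
```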
16

Pando, Francisco, and Francisco Bonet. "Making LTER Data FAIR: A workbench using DEIMS datasets and GBIF Tools." Biodiversity Information Science and Standards 3 (June 19, 2019). http://dx.doi.org/10.3897/biss.3.37257.

Abstract:
DEIMS-SDR (Dynamic Ecological Information Management System - Site and dataset registry; Wohner et al. 2019) is one of the largest repositories of long-term ecological research (LTER) datasets. It provides sophisticated search tools over metadata elements and identifiers for all 930 contained datasets, most of them from European sites. Whereas the datasets' metadata are highly structured and searchable, the datasets themselves have little standardization in terms of content, identifiers or license, making data integration difficult or cumbersome. Adopting the FAIR data guiding principles (Wilkinson et al. 2016) for LTER data would result in better data integration and reuse to support knowledge discovery and innovation in ecological research. The Global Biodiversity Information Facility (GBIF 2019a) is the largest repository of species distribution data in the world, providing access to more than a billion records from over 43,000 datasets. GBIF is a good example of the implementation of the FAIR principles: GBIF data is highly standardized, using Darwin Core (Wieczorek et al. 2012) for data and the Ecological Metadata Language (EML; Fegraus et al. 2005) for metadata, allowing record-level search, and has implemented globally unique and persistent identifiers for datasets and downloads. Relevant in this context is that GBIF has recently introduced a new data format intended for monitoring projects and sampling event protocols (GBIF 2019b). In this presentation, we explore the suitability of GBIF data formats and workflows to serve LTER datasets, and the work it may take to transform typical LTER datasets into these formats. For this exercise, we take some datasets available via the DEIMS platform, corresponding to the same territory (Sierra Nevada, Spain; e.g., Bonet 2016, Bonet 2018), transform them into GBIF's sample-based Event core format, publish them in the GBIF data network, and then perform an analysis to assess how the standardized datasets work in practice, both among themselves and with typical “occurrence-based” GBIF datasets. Finally, we discuss our findings and make recommendations for the GBIF and LTER communities.
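The transformation at the heart of the exercise can be illustrated with a toy recasting of monitoring records into the sampling-event format: one Event row per sampling act, with linked Occurrence rows. Tables, identifiers and values are invented:

```python
# Split flat monitoring records into linked event and occurrence tables,
# as in GBIF's sample-based Event core.
import csv

samples = [
    {"plot": "SN-01", "date": "2015-06-10", "species": "Pinus sylvestris", "count": 12},
    {"plot": "SN-01", "date": "2015-06-10", "species": "Juniperus communis", "count": 3},
]

with open("event.txt", "w", newline="") as ev, open("occurrence.txt", "w", newline="") as oc:
    ew, ow = csv.writer(ev, delimiter="\t"), csv.writer(oc, delimiter="\t")
    ew.writerow(["eventID", "locationID", "eventDate"])
    ow.writerow(["eventID", "scientificName", "individualCount"])
    seen = set()
    for s in samples:
        event_id = f'{s["plot"]}:{s["date"]}'
        if event_id not in seen:        # one Event row per sampling act
            ew.writerow([event_id, s["plot"], s["date"]])
            seen.add(event_id)
        ow.writerow([event_id, s["species"], s["count"]])
```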
17

Penev, Lyubomir, Teodor Georgiev, Viktor Senderov, Mariya Dimitrova, and Pavel Stoev. "The Pensoft Data Publishing Workflow: The FAIRway from articles to Linked Open Data." Biodiversity Information Science and Standards 3 (June 13, 2019). http://dx.doi.org/10.3897/biss.3.35902.

Abstract:
As one of the first advocates of open access and open data in the field of biodiversity publishing, Pensoft has adopted a multiple data publishing model, resulting in the ARPHA-BioDiv toolbox (Penev et al. 2017). ARPHA-BioDiv consists of several data publishing workflows and tools described in the Strategies and Guidelines for Publishing of Biodiversity Data and elsewhere:
1. Data underlying research results are deposited in an external repository and/or published as supplementary file(s) to the article and then linked/cited in the article text; supplementary files are published under their own DOIs and bear their own citation details.
2. Data are deposited in trusted repositories and/or supplementary files and described in data papers; data papers may be submitted in text format or converted into manuscripts from Ecological Metadata Language (EML) metadata.
3. Integrated narrative and data publishing is realised by the Biodiversity Data Journal, where structured data are imported into the article text from tables or via web services and downloaded/distributed from the published article.
4. Data are published in structured, semantically enriched, full-text XMLs, so that several data elements can thereafter easily be harvested by machines.
5. Linked Open Data (LOD) are extracted from literature, converted into interoperable RDF triples in accordance with the OpenBiodiv-O ontology (Senderov et al. 2018) and stored in the OpenBiodiv Biodiversity Knowledge Graph.
The above-mentioned approaches are supported by a whole ecosystem of additional workflows and tools, for example: (1) pre-publication data auditing, involving both human and machine data quality checks (workflow 2); (2) web-service integration with data repositories and data centres, such as the Global Biodiversity Information Facility (GBIF), Barcode of Life Data Systems (BOLD), Integrated Digitized Biocollections (iDigBio), Data Observation Network for Earth (DataONE), Long Term Ecological Research (LTER), PlutoF, Dryad, and others (workflows 1, 2); (3) semantic markup of the article texts in the TaxPub format, facilitating further extraction, distribution and re-use of sub-article elements and data (workflows 3, 4); (4) server-to-server import of specimen data from GBIF, BOLD, iDigBio and PlutoF into manuscript text (workflow 3); (5) automated conversion of EML metadata into data paper manuscripts (workflow 2); (6) export of Darwin Core Archives and automated deposition in GBIF (workflow 3); (7) submission of individual images and supplementary data under their own DOIs to the Biodiversity Literature Repository, BLR (workflows 1-3); (8) conversion of key data elements from TaxPub articles and taxonomic treatments extracted by Plazi into RDF handled by OpenBiodiv (workflow 5). These approaches represent different aspects of the prospective scholarly publishing of biodiversity data which, in combination with text and data mining (TDM) technologies for legacy literature (PDF) developed by Plazi, lay the groundwork for an entire data publishing ecosystem for biodiversity, supplying FAIR (Findable, Accessible, Interoperable and Reusable) data to several interoperable overarching infrastructures, such as GBIF, BLR, Plazi TreatmentBank and OpenBiodiv, and to various end users.
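The automated conversion of EML metadata into data paper manuscripts, item (5) in the ecosystem list above, essentially lifts standard metadata elements into manuscript sections; here is a toy sketch with the standard library, with namespaces omitted for brevity and all values invented:

```python
# Pull title, creators and abstract from an EML document and print a
# data-paper skeleton; element paths follow the EML 2.x layout.
import xml.etree.ElementTree as ET

eml = """<eml><dataset>
  <title>Example sap flow dataset</title>
  <creator><individualName><surName>Doe</surName></individualName></creator>
  <abstract><para>Sub-daily sap flow time series.</para></abstract>
</dataset></eml>"""

ds = ET.fromstring(eml).find("dataset")
print("Title:", ds.findtext("title"))
print("Authors:", ", ".join(n.text for n in ds.iter("surName")))
print("Abstract:", ds.findtext("abstract/para"))
```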
18

Soares, Filipi, Benildes Maculan, and Debora Drucker. "Darwin Core for Agricultural Biodiversity: A metadata extension proposal." Biodiversity Information Science and Standards 3 (June 13, 2019). http://dx.doi.org/10.3897/biss.3.37053.

Abstract:
Agricultural Biodiversity has been defined by the Convention on Biological Diversity as the set of elements of biodiversity that are relevant to agriculture and food production. These elements are arranged into an agro-ecosystem that encompasses "the variability among living organisms from all sources including terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems" (UNEP 1992). As with any other field in Biology, work on Agricultural Biodiversity produces data. In order to publish data in a way that it can be efficiently retrieved on the web, one must describe it with proper metadata. A metadata element set is a group of statements made about something. These statements have three parts, named subject (the thing represented), predicate (the slot to be filled with data) and object (the data itself); this representation is called a triple. For example, the title is a metadata element: a book is the subject, title is the predicate, and The Chronicles of Narnia is the object. Some metadata standards have been developed to describe biodiversity data, such as the ABCD Data Schema, Darwin Core (DwC) and the Ecological Metadata Language (EML). DwC is said to be the most used metadata standard for publishing data about species occurrence worldwide (Global Biodiversity Information Facility 2019). "Darwin Core is a standard maintained by the Darwin Core maintenance group. It includes a glossary of terms (in other contexts these might be called properties, elements, fields, columns, attributes, or concepts) intended to facilitate the sharing of information about biological diversity by providing identifiers, labels, and definitions. Darwin Core is primarily based on taxa, their occurrence in nature as documented by observations, specimens, samples, and related information" (Biodiversity Information Standards (TDWG) 2014). Within this thematic context, a master's research project is in progress at the Federal University of Minas Gerais in partnership with the Brazilian Agricultural Research Corporation (EMBRAPA). It aims to apply DwC to Brazil's Agricultural Biodiversity data. A pragmatic analysis of DwC and the DwC Extensions demonstrated that important concepts and relations from Agricultural Biodiversity are not represented among the DwC elements. For example, DwC does not have adequate metadata to describe biological interactions, that is, to convey important information about relations between organisms from an ecological perspective. Pollination is one of the biological interactions relevant to Agricultural Biodiversity for which we need enhanced metadata. Given these gaps, the principles of metadata construction of DwC will be followed in order to develop a metadata extension able to represent data about Agricultural Biodiversity. These principles come from the Dublin Core Abstract Model, which presents propositions for creating the triples (subject-predicate-object). The standard format of the DwC Extensions (see the Darwin Core Archive Validator) will be followed to shape the metadata extension. At the end of the research, we expect to present a model DwC metadata record for publishing data about Agricultural Biodiversity in Brazil, including metadata already existing in Simple DwC and the new metadata of Brazil's Agricultural Biodiversity Metadata Extension. The resulting extension will be useful for representing Agricultural Biodiversity worldwide.
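The abstract's book/title example, written out as an actual triple with rdflib, together with a hypothetical pollinates term of the kind the proposed extension would add for biological interactions:

```python
# One Dublin Core triple (the book/title example) and one invented
# agrobiodiversity-extension triple for a pollination interaction.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC

g = Graph()
book = URIRef("http://example.org/books/narnia")              # subject
g.add((book, DC.title, Literal("The Chronicles of Narnia")))  # predicate, object

AGRO = Namespace("http://example.org/agrobiodiv/")            # hypothetical
g.add((URIRef("http://example.org/taxa/apis-mellifera"),
       AGRO.pollinates,
       URIRef("http://example.org/taxa/coffea-arabica")))
print(g.serialize(format="turtle"))
```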
19

Royaux, Coline, Elie Arnaud, Julien Sananikone, Marie Jossé, Mélanie Madelin, Dominique Pelletier, Olivier Norvez, and Yvan Le Bras. "Open Science for Better FAIRness: A biodiversity virtual research environment point of view." Biodiversity Information Science and Standards 6 (September 20, 2022). http://dx.doi.org/10.3897/biss.6.95110.

Abstract:
"FAIR (Findable, Accessible, Interoperable, Reusable) principles" (Wilkinson et al. 2016) and "open science" are two complementary movements in biodiversity science. Although we need to transition to making scientific data and associated material more FAIR, this does not necessarily imply open data or open source algorithms. Here, based on the experience of the French Biodiversity Data Hub ("Pôle national de données de Biodiversité" - PNDB), which is an e-infrastructure for and by researchers, we want to showcase how focusing on openness can be a strategy to efficiently reach greater FAIRness. Following DataOne guidance, we can build a complete data/metadata ecosystem allowing us to structure heterogeneous environmental information systems. Using the Galaxy analysis platform and its related initiatives (Galaxy training network, European Erasmus+ Gallantries project, bioconda, bioContainer), we can thus illustrate how we can create transparent, peer-reviewed and accessible tools and workflows and collaborative training material. The Galaxy platform also facilitates use of high performance computing infrastructure through notably the European Open Science Cloud marketplace. Finally, through our experiences contributing to open source projects like EML (Ecological Metadata Language (Michener et al. 1997)) Assembly Line, EDI (Environmental Data Initiative, or PAMPA (Indicators of Marine Protected Areas performance for managing coastal ecosystems, resources and their uses), a French platform to help protected areas managers to standardize and analyse their data, we also show how building open source "doors" through the R Shiny programming language to these environments can be beneficial for all.
20

Langer, Christian, Néstor Fernández, Luise Quoß, Jose Valdez, Miguel Fernandez, and Henrique Pereira. "Cataloging Essential Biodiversity Variables with the EBV Data Portal." Biodiversity Information Science and Standards 6 (August 23, 2022). http://dx.doi.org/10.3897/biss.6.93593.

Abstract:
Essential Biodiversity Variables (EBVs) are used to monitor the status and trends of biodiversity at multiple spatiotemporal scales. They provide an abstraction level between raw biodiversity observations and indicators, enabling better access to policy-relevant biodiversity information. Furthermore, the EBV vision aims to support detection of critical change, among other things, with easy-to-use tools and dashboards accessible to a variety of users and stakeholders. We present the EBV Data Portal, a platform for distributing and visualizing EBV datasets. It contains a geographic cataloging system that supports a large number of spatiotemporal and EBV-specific attributes and enables their discoverability. To facilitate user interaction, it offers a web-based interface where users can upload, discover and share essential biodiversity spatiotemporal data through intuitive cataloging and visualization tools. Using the EBV Catalog, the user can explore the characteristics of the data based on the definition of the EBV Cube standard. The Catalog also allows browsing the metadata descriptions in the specifications of the Attribute Convention for Data Discovery (ACDD) and in the Ecological Metadata Language (EML) vocabulary. This enables easy interoperability with other metadata catalogs. An example application is the calculation of summary statistics for selected countries. Using the EBV Data Portal, users can select EBV datasets, calculate basic biodiversity change metrics from spatiotemporal subsets and conveniently visualize complex, multidimensional biodiversity datasets. These visualization and analysis tools of the EBV Data Portal are a first step towards an EBV-based dashboard for biodiversity analyses.
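As a rough illustration of such a data cube with discovery metadata, here is a sketch using xarray; the variable, coordinates and attribute values are invented and do not follow the EBV Cube standard itself:

```python
# A toy gridded biodiversity cube with ACDD-style global attributes,
# written to NetCDF (requires the netCDF4 or h5netcdf backend).
import numpy as np
import xarray as xr

cube = xr.Dataset(
    {"species_richness": (("time", "lat", "lon"), np.random.rand(2, 3, 3))},
    coords={"time": [2000, 2010], "lat": [0.0, 0.5, 1.0], "lon": [0.0, 0.5, 1.0]},
)
cube.attrs.update({  # ACDD discovery metadata live in global attributes
    "title": "Example biodiversity data cube",
    "summary": "Toy gridded richness values for illustration.",
    "keywords": "EBV, biodiversity, species richness",
})
cube.to_netcdf("ebv_example.nc")
```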
21

Gries, Corinna, Mark Servilla, Margaret O'Brien, Kristin Vanderbilt, Colin Smith, Duane Costa, and Susanne Grossman-Clarke. "Achieving FAIR Data Principles at the Environmental Data Initiative, the US-LTER Data Repository." Biodiversity Information Science and Standards 3 (June 18, 2019). http://dx.doi.org/10.3897/biss.3.37047.

Abstract:
The Environmental Data Initiative (EDI) is a continuation and expansion of the original United States Long-Term Ecological Research Program (US-LTER) data repository, which went into production in 2013. Building on decades of data management experience in LTER, EDI is addressing the challenge of publishing a diverse corpus of research data (Servilla et al. 2016). EDI's accomplishments span all aspects of the data curation and publication lifecycle, including repository cyberinfrastructure, outreach and training, and enhancements to the data documentation methodologies used by the environmental and ecological research communities. EDI manages almost 43,000 unique data packages and their revisions from a community of nearly 2,300 individual data authors; most are contributed by LTER sites, and all are openly accessible and documented with rich science metadata in the Ecological Metadata Language (EML) standard. Here we will present how EDI achieves the FAIR data principles (Wilkinson et al. 2016, Stall et al. 2017), and report data use metrics as a measure of success. The FAIR principles serve as benchmarks for EDI's operation and management: the data we curate are Findable because they reside in an open repository, with unique and persistent digital object identifiers (DOIs) and standard metadata indexed as a searchable resource; they are Accessible through industry-standard protocols and are, in most cases, under an open-access license (access control is available if required); Interoperability is achieved by archiving data in commonly used file formats, with both metadata and data machine-readable and accessible; and rich, high-quality science metadata, with automated congruence and completeness checking, render data fit for Reuse in multiple contexts and environments, along with easily generated data provenance to document their lineage. The success of this approach is proven by the number and the spatial and temporal extent of recent re-analyses and synthesis efforts using these data. Although formal data citations are not yet common practice, a Google Scholar search reveals over 400 journal articles crediting data re-use through an EDI DOI. However, despite improved data availability, researchers still report that the largest time investment in synthesis projects is discovering, cleaning and combining primary datasets until all data are completely understood and converted to a similar format. Starting with long-term biodiversity observation data, EDI is addressing this issue by implementing a pre-harmonization of thematically similar datasets. Positioned between the data author's specific data format and larger biodiversity data stores or synthesis projects, this approach allows uniform access without the loss of ancillary information. This pre-harmonization step may be accomplished by data managers because the dataset still contains all original information, without any aggregation or science-question-specific decisions for data omission or cleaning. The data are still distributed as distinct datasets, allowing for asynchronous updating of long-term observations. The addition of specific and standardized metadata makes them easily discoverable.
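The automated congruence and completeness checking mentioned above boils down to comparing what the metadata declare against what the data actually contain; here is a toy version of one such check, with invented rule messages rather than EDI's actual checker:

```python
# Compare columns declared in an EML attributeList against the header of
# the submitted data table and report mismatches.
import csv, io

declared = ["site", "date", "biomass_g"]          # from the metadata
data = "site,date,biomass\nA,2020-01-01,12.3\n"   # the submitted table

header = next(csv.reader(io.StringIO(data)))
for c in [c for c in declared if c not in header]:
    print(f"ERROR: attribute '{c}' declared in metadata but absent from data")
for c in [c for c in header if c not in declared]:
    print(f"ERROR: column '{c}' present in data but not documented")
```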
22

Kõljalg, Urmas, Kessy Abarenkov, Allan Zirk, Veljo Runnel, Timo Piirmann, Raivo Pöhönen, and Filipp Ivanov. "PlutoF: Biodiversity data management platform for the complete data lifecycle." Biodiversity Information Science and Standards 3 (June 26, 2019). http://dx.doi.org/10.3897/biss.3.37398.

Abstract:
The PlutoF online platform (https://plutof.ut.ee) is built for the management of biodiversity data. The concept is to provide a common workbench where the full data lifecycle can be managed, and to support seamless data sharing between single users, workgroups and institutions. Today, large and sophisticated biodiversity datasets are increasingly developed and managed by international workgroups. PlutoF's ambition is to serve such collaborative projects as well as to provide data management services to single users, museum or private collections and research institutions. Data management in PlutoF follows the logical order of the data lifecycle (Fig. 1). First, project metadata are uploaded, including the project description, data management plan, participants, sampling areas, etc. Data upload and management activities then follow, often linked to internal data sharing. Some data analyses can be performed directly in the workbench, or data can be exported in standard formats. PlutoF also includes a data publishing module. Users can publish their data, generating a citable DOI, without the datasets leaving the PlutoF workbench. PlutoF is part of the DataCite collaboration (https://datacite.org) and has so far released more than 600,000 DOIs. Another option is to publish observation or collection datasets via the GBIF (Global Biodiversity Information Facility) portal. A new feature implemented in 2019 allows users to publish High Throughput Sequencing data as taxon occurrences in GBIF. There is an additional option to send specific datasets directly to the Pensoft online journals. Ultimately, PlutoF works as a data archive, which completes the data lifecycle. In PlutoF, users can manage different data types. The most common types include specimen and living specimen data, nucleotide sequences, human observations, material samples, taxonomic backbones and ecological data. Another important feature is that these data types can be managed as single datasets or as projects. PlutoF follows several biodiversity standards, including Darwin Core, GGBN (Global Genome Biodiversity Network), EML (Ecological Metadata Language), MCL (Microbiological Common Language), and MIxS (Minimum Information about any (x) Sequence).
23

Le Bras, Yvan, Aurélie Delavaud, Dominique Pelletier, and Jean-Baptiste Mihoub. "From Raw Biodiversity Data to Indicators, Boosting Products Creation, Integration and Dissemination: French BON FAIR initiatives and related informatics solutions." Biodiversity Information Science and Standards 3 (August 20, 2019). http://dx.doi.org/10.3897/biss.3.39215.

Abstract:
Most biodiversity research aims at understanding the states and dynamics of biodiversity and ecosystems. To do so, biodiversity research increasingly relies on the use of digital products and services such as raw data archiving systems (e.g. structured databases or data repositories), ready-to-use datasets (e.g. cleaned and harmonized files with normalized measurements or computed trends) as well as associated analytical tools (e.g. model scripts in GitHub). Several world-wide initiatives facilitate open access to biodiversity data, such as the Global Biodiversity Information Facility (GBIF), GenBank, PREDICTS, etc. Although these pave the way towards major advances in biodiversity research, they also typically deliver data products that are sometimes poorly informative, as they fail to capture the genuine ecological information they intend to grasp. In other words, access to ready-to-use aggregated data products may sacrifice ecological relevance for data harmonization, resulting in over-simplified, ill-advised standard formats. This is particularly true when the main challenge is to match complementary data (a large diversity of measured variables, integration of different levels of life organization, etc.) collected under different requirements and scattered across multiple databases. Improving access to raw data, to meaningful detailed metadata and to analytical tools associated with standardized workflows is critical to maintain and maximize the generic relevance of ecological data. Consequently, advancing the design of digital products and services is essential for interoperability while also enhancing reproducibility and transparency in biodiversity research. To go further, a minimal common framework organizing biodiversity observation and data organization is needed. In this regard, the Essential Biodiversity Variable (EBV) concept might be a powerful way to boost progress toward this goal as well as to connect research communities worldwide. As a national Biodiversity Observation Network (BON) node, the French BON is currently embodied by a national research e-infrastructure called "Pôle national de données de biodiversité" (PNDB, formerly ECOSCOPE), aimed at simultaneously empowering the quality of scientific activities and promoting networking within the scientific community at a national level. Through the PNDB, the French BON is working on developing biodiversity data workflows oriented toward end services and products, both from and for a research perspective. More precisely, the two pillars of the PNDB are a metadata portal and a workflow-oriented web platform dedicated to the access of biodiversity data and associated analytical tools (Galaxy-E). After four years of experience, we are now going deeper into metadata specification, dataset description and data structuring through the extensive use of the Ecological Metadata Language (EML) as a pivot format. Moreover, we are evaluating the relevance of existing tools such as Metacat/Morpho and DEIMS-SDR (Dynamic Ecological Information Management System - Site and dataset registry) in order to ensure a link with other initiatives like the Environmental Data Initiative, DataONE and Long-Term Ecological Research related observation networks. Regarding data analysis, an open-source Galaxy-E platform was launched in 2017 as part of a project targeting the design of a citizen science observation system in France (“65 Millions d'observateurs”).
Here, we propose to showcase ongoing French activities towards global challenges related to biodiversity information and knowledge dissemination. We particularly emphasize our focus on embracing the FAIR (findable, accessible, interoperable and reusable) data principles (Wilkinson et al. 2016) across the development of the French BON e-infrastructure and the promising links we anticipate for operationalizing EBVs. Using accessible and transparent analytical tools, we present the first online platform allowing advanced yet user-friendly analyses of biodiversity data to be performed in a reproducible and shareable way, using data from various sources, such as GBIF, the Atlas of Living Australia (ALA), eBird and iNaturalist, together with environmental data such as climate data.
24

Penev, Lyubomir. "Data ownership and data publishing." ARPHA Conference Abstracts 2 (August 20, 2019). http://dx.doi.org/10.3897/aca.2.e39250.

Abstract:
"Data ownership" is actually an oxymoron, because there could not be a copyright (ownership) on facts or ideas, hence no data onwership rights and law exist. The term refers to various kinds of data protection instruments: Intellectual Property Rights (IPR) (mostly copyright) asserted to indicate some kind of data ownership, confidentiality clauses/rules, database right protection (in the European Union only), or personal data protection (GDPR) (Scassa 2018). Data protection is often realised via different mechanisms of "data hoarding", that is witholding access to data for various reasons (Sieber 1989). Data hoarding, however, does not put the data into someone's ownership. Nonetheless, the access to and the re-use of data, and biodiversuty data in particular, is hampered by technical, economic, sociological, legal and other factors, although there should be no formal legal provisions related to copyright that may prevent anyone who needs to use them (Egloff et al. 2014, Egloff et al. 2017, see also the Bouchout Declaration). One of the best ways to provide access to data is to publish these so that the data creators and holders are credited for their efforts. As one of the pioneers in biodiversity data publishing, Pensoft has adopted a multiple-approach data publishing model, resulting in the ARPHA-BioDiv toolbox and in extensive Strategies and Guidelines for Publishing of Biodiversity Data (Penev et al. 2017a, Penev et al. 2017b). ARPHA-BioDiv consists of several data publishing workflows: Deposition of underlying data in an external repository and/or its publication as supplementary file(s) to the related article which are then linked and/or cited in-tex. Supplementary files are published under their own DOIs to increase citability). Description of data in data papers after they have been deposited in trusted repositories and/or as supplementary files; the systme allows for data papers to be submitted both as plain text or converted into manuscripts from Ecological Metadata Language (EML) metadata. Import of structured data into the article text from tables or via web services and their susequent download/distribution from the published article as part of the integrated narrative and data publishing workflow realised by the Biodiversity Data Journal. Publication of data in structured, semanticaly enriched, full-text XMLs where data elements are machine-readable and easy-to-harvest. Extraction of Linked Open Data (LOD) from literature, which is then converted into interoperable RDF triples (in accordance with the OpenBiodiv-O ontology) (Senderov et al. 2018) and stored in the OpenBiodiv Biodiversity Knowledge Graph Deposition of underlying data in an external repository and/or its publication as supplementary file(s) to the related article which are then linked and/or cited in-tex. Supplementary files are published under their own DOIs to increase citability). Description of data in data papers after they have been deposited in trusted repositories and/or as supplementary files; the systme allows for data papers to be submitted both as plain text or converted into manuscripts from Ecological Metadata Language (EML) metadata. Import of structured data into the article text from tables or via web services and their susequent download/distribution from the published article as part of the integrated narrative and data publishing workflow realised by the Biodiversity Data Journal. 
Publication of data in structured, semanticaly enriched, full-text XMLs where data elements are machine-readable and easy-to-harvest. Extraction of Linked Open Data (LOD) from literature, which is then converted into interoperable RDF triples (in accordance with the OpenBiodiv-O ontology) (Senderov et al. 2018) and stored in the OpenBiodiv Biodiversity Knowledge Graph In combination with text and data mining (TDM) technologies for legacy literature (PDF) developed by Plazi, these approaches show different angles to the future of biodiversity data publishing and, lay the foundations of an entire data publishing ecosystem in the field, while also supplying FAIR (Findable, Accessible, Interoperable and Reusable) data to several interoperable overarching infrastructures, such as Global Biodiversity Information Facility (GBIF), Biodiversity Literature Repository (BLR), Plazi TreatmentBank, OpenBiodiv, as well as to various end users.
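The EML-to-manuscript conversion mentioned in the second workflow starts from a structured metadata document. As a hedged illustration (a minimal sketch, not Pensoft's actual tooling), the following Python snippet assembles a bare-bones EML file of the kind such a conversion might consume; the namespace follows EML 2.2 conventions and the identifiers are hypothetical.

```python
# Minimal EML sketch (assumption: EML 2.2 namespace; identifiers hypothetical).
import xml.etree.ElementTree as ET

EML_NS = "https://eml.ecoinformatics.org/eml-2.2.0"
ET.register_namespace("eml", EML_NS)

# The root element carries a package identifier; child elements are
# unqualified (no namespace prefix) in EML documents.
eml = ET.Element(f"{{{EML_NS}}}eml",
                 {"packageId": "example.dataset.1", "system": "example"})
dataset = ET.SubElement(eml, "dataset")
ET.SubElement(dataset, "title").text = "Example occurrence dataset"
creator = ET.SubElement(ET.SubElement(dataset, "creator"), "individualName")
ET.SubElement(creator, "surName").text = "Doe"
abstract = ET.SubElement(ET.SubElement(dataset, "abstract"), "para")
abstract.text = "Structured metadata from which a data paper can be drafted."

ET.ElementTree(eml).write("eml.xml", xml_declaration=True, encoding="UTF-8")
```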
APA, Harvard, Vancouver, ISO, and other styles
25

Pereira, Henrique, Néstor Fernández, and Miguel Fernandez. "Essential Biodiversity for Understanding Change." Biodiversity Information Science and Standards 6 (August 1, 2022). http://dx.doi.org/10.3897/biss.6.91108.

Full text
Abstract:
The biodiversity crisis we are experiencing requires more than ever the establishment of an observation and monitoring system to help us understand where we have the greatest problems, to inform actions to halt and reverse biodiversity loss in those places, and to anticipate the impact of our future actions. The Essential Biodiversity Variables framework (EBV; Pereira et al. 2013) was conceived to provide an analytical framework for biodiversity monitoring that includes functional, structural and compositional aspects at different levels of organization, from genes to ecosystems, and that, as opposed to reductionist approaches, treats biodiversity from a holistic and systemic viewpoint. After some years of refining this concept (e.g., Proença et al. 2017, Kissling et al. 2017, Jetz et al. 2019), in collaboration with the Group on Earth Observations Biodiversity Observation Network (GEO BON) user community, we have taken the next step: grounding the concept in a proposed structure, standard and tools that facilitate interoperability and the process of sharing and accessing this information. Our vision for boosting the adoption and use of the EBV framework, named for the different dimensions of the planetary life system it is intended to capture, is to facilitate access to multiple EBV data products organized under a consistent data structure and with standardized annotation across EBV classes. We propose to leverage the Network Common Data Form (NetCDF) data structure in combination with the Attribute Convention for Data Discovery (ACDD) terms and the Ecological Metadata Language (EML) syntax, adapting them to describe hierarchically organized, spatially explicit, gridded data on biodiversity observations, model predictions and scenarios, using a combination of a web portal and an R package for publication and exploration. We believe that this data structure, metadata and supporting tools, initially tested on diverse biodiversity datasets covering the full scope of the Essential Biodiversity Variables framework, will allow us to better serve the community of users interested in different aspects of global change: to ask questions framed in specific times and spaces, to obtain, analyze and visualize the time series registered somewhere on the planet, and to gain knowledge and insights that will inform decision-making and help direct development trajectories towards a more sustainable future.
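As a rough illustration of the proposed pairing of NetCDF with ACDD discovery metadata, the Python sketch below writes a toy gridded cube of an EBV-like variable. The global attributes are genuine ACDD terms, but the flat variable layout is a simplification: the EBV data products described above use a richer, hierarchically grouped structure.

```python
# Toy ACDD-annotated NetCDF cube (simplified; real EBV cubes use nested groups).
import numpy as np
from netCDF4 import Dataset

ds = Dataset("ebv_example.nc", "w")
ds.setncatts({                      # ACDD discovery metadata
    "title": "Example species-richness cube",
    "summary": "Illustrative gridded biodiversity product.",
    "keywords": "EBV, species richness, biodiversity",
    "creator_name": "Jane Doe",
    "date_created": "2022-08-01",
})
for dim, size in (("time", 3), ("lat", 180), ("lon", 360)):
    ds.createDimension(dim, size)
    ds.createVariable(dim, "f8", (dim,))
richness = ds.createVariable("species_richness", "f4", ("time", "lat", "lon"))
richness.units = "1"                # dimensionless count per cell
richness[:] = np.random.default_rng(0).poisson(20, size=(3, 180, 360))
ds.close()
```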
APA, Harvard, Vancouver, ISO, and other styles
26

Hyam, Roger. "International Image Interoperability Framework: A unified approach to sharing images of natural history specimens?" Biodiversity Information Science and Standards 4 (October 5, 2020). http://dx.doi.org/10.3897/biss.4.59056.

Full text
Abstract:
Researchers have become accustomed to online access to data about specimens held in natural history collections. Over several decades, metadata standards have been developed to facilitate the sharing and aggregation of these data, notably Darwin Core and ABCD (Access to Biological Collections Data), developed under the auspices of TDWG; standards developed in other communities have also proved useful, notably EML (Ecological Metadata Language) and GML (Geography Markup Language). Data aggregators have arisen to both drive standards development and take advantage of the vast number of records made available through this community effort. Examples include the Atlas of Living Australia and its spin-off Atlas projects, EoL (Encyclopedia of Life), iDigBio, the Global Biodiversity Information Facility (GBIF) and WFO (World Flora Online). There are still many "dark specimens" that are not visible on the web, and efforts continue to digitise metadata on these objects and make them available. The vast majority of the data liberated so far has therefore been text based, and the standards reflect this, although many institutions and projects are also producing large numbers of images and other media. There have been media extensions to some standards to accommodate the sharing of images and other multimedia formats. However, these are restricted to metadata about media objects rather than the exchange of the media objects themselves. For example, two extensions to Darwin Core are Audubon Core, which is designed to "determine whether a particular resource or collection will be fit for some particular biodiversity science application before acquiring the media", and the Simple Multimedia extension, which is a "simple extension for exchanging metadata about multimedia resources". Image exchange, in particular, has therefore not used open standards. Projects have relied on transferring high-resolution versions of images (e.g., submission of type specimen images to JSTOR) or cut-down compressed versions (e.g., many herbarium specimens submitted to GBIF or Europeana). The network has not allowed access to the high-resolution versions of images curated by the host institutions themselves, beyond basic links to web pages. Where high-resolution images have been published in online catalogues, they have been made available using a hotchpotch of different technologies, including the now defunct Java Applets and Adobe Flash Player. The network has not supported different views of the same specimen, annotations of those views, or integration of audio and moving images. In an ideal world, a researcher should be able to view and annotate images of specimens held across multiple collections in a unified way, and the host institutions should have access to those annotations and to statistics on how their specimens are being used. How can we achieve this? The sharing of multimedia representations of objects online is not a problem unique to the biodiversity community. Scholars in museums and archives of all kinds face the same issues. In 2011 the British Library, Stanford University, the Bodleian Libraries (Oxford University), the Bibliothèque nationale de France, Nasjonalbiblioteket (National Library of Norway), Los Alamos National Laboratory Research Library, and Cornell University came together to develop an exchange standard called IIIF (International Image Interoperability Framework).
This framework now consists of six APIs (Application Programming Interfaces), four stable and two in beta, to publish and integrate image and other multimedia resources in a uniform manner, and has been adopted by many institutions and commercial partners in the digital humanities. Applications based on IIIF enable many of the features desired by biodiversity researchers. The notion of sharing and annotating specimen images is not new to the natural history community: MorphBank, founded in 1998, has grown to allow much of this desirable functionality, but at the cost and fragility of being a centralised database. The question we should perhaps be asking is: how can we make the biodiversity data sharing network as a whole more like MorphBank? From 2019 to 2021, part of the EU-funded Synthesys+ programme will support the adoption of IIIF as a unified way to publish images of natural history specimens. We aim to have a set of exemplar institutions publishing IIIF manifests for some millions of specimens by the end of the project, and one or more demonstration applications in place. We hope this will act as a catalyst for wider adoption in the natural history community. A key goal is to integrate image data served using IIIF with metadata available via CETAF (Consortium of European Taxonomic Facilities) specimen identifiers. If IIIF were ubiquitous in the natural history community, building tools that implemented this functionality would be feasible. A brief demonstration of a herbarium specimen browser, Herbaria Mundi, will be given, illustrating how specimens hosted in different institutions can be manipulated in a single interface. The architecture that supports this behaviour will be explained, and its challenges for implementing institutions discussed.
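The uniform access that IIIF provides rests on a fixed URL grammar for image requests, which is what lets a single client manipulate images hosted by many institutions. The Python sketch below illustrates that pattern; the service base URL and image identifier are hypothetical placeholders rather than a real institutional endpoint.

```python
# IIIF Image API URL pattern:
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# BASE and IDENT below are hypothetical placeholders.
import requests

BASE = "https://iiif.example.org/image/v2"
IDENT = "herbarium-sheet-0001"

def iiif_url(region="full", size="max", rotation=0, quality="default", fmt="jpg"):
    """Compose one IIIF Image API request for a view of the specimen."""
    return f"{BASE}/{IDENT}/{region}/{size}/{rotation}/{quality}.{fmt}"

full_view = iiif_url()                          # entire sheet
detail = iiif_url(region="1024,2048,512,512")   # 512x512 px crop of one organ
info = requests.get(f"{BASE}/{IDENT}/info.json").json()  # service capabilities
print(full_view, detail, info.get("width"), info.get("height"))
```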
APA, Harvard, Vancouver, ISO, and other styles
27

Harjes, Janno, Dagmar Triebel, Anton Link, Tanja Weibulat, Frank Oliver Glöckner, and Gerhard Rambold. "FAIR data in meta-omics research: Using the MOD-CO schema to describe structural and operational elements of workflows from field to publication." Biodiversity Information Science and Standards 3 (July 2, 2019). http://dx.doi.org/10.3897/biss.3.37596.

Full text
Abstract:
Nucleic acid and protein sequencing-based analyses are increasingly applied to determine the origin, identity and traits of environmental (biological) objects and organisms. In this context, the need for corresponding data structures has become evident. As existing schemas and community standards in the domains of biodiversity and molecular biological research are comparatively limited with regard to the number of generic and specific elements, previous schemas for describing the physical and digital objects need to be replaced or expanded with new elements to cover the requirements of meta-omics techniques and operational details. On the one hand, schemas and standards have hitherto mostly focussed on elements, descriptors, or concepts that are relevant for data exchange and publication; on the other hand, detailed operational aspects regarding origin context and laboratory processing, as well as data management details, like the documentation of physical and digital object identifiers, are rather neglected. The conceptual schema for Meta-omics Data and Collection Objects (MOD-CO; https://www.mod-co.net/) has been set up recently (Rambold et al. 2019). It includes design elements (descriptors or concepts) describing structural and operational details along the work- and dataflow, from gathering environmental samples through the various transformation, transaction, and measurement steps in the laboratory up to sample and data publication and archiving. The concepts are named according to a multipartite naming structure describing internal hierarchies, and are arranged in concept (sub-)collections. By supporting various kinds of data record relationships, the schema allows for the concatenation of individual records of the operational segments along a workflow (Fig. 1). Thus, it may serve as a logical and structural backbone for laboratory information management systems. The concept structure in version 1.0 comprises 653 descriptors (concepts) and 1,810 predefined descriptor states, organised in 37 concept (sub-)collections. The published version 1.0 is available as various schema representations of identical content (https://www.mod-co.net/wiki/Schema_Representations). A normative XSD (= XML Schema Definition) for schema version 1.0 is available under http://schema.mod-o.net/MOD-CO_1.0.xsd. The MOD-CO concepts might be integrated as descriptor/element structures in the relational database DiversityDescriptions (DWB-DD), an open-source and freely available software of the Diversity Workbench (DWB; https://diversityworkbench.net/Portal/DiversityDescriptions; https://diversityworkbench.net). Currently, DWB-DD is installed at the Data Center of the Bavarian Natural History Collections (SNSB) to build an instance for storing and maintaining MOD-CO-structured meta-omics research data packages and enriching them with 'metadata' elements from the community standards Ecological Metadata Language (EML), Minimum Information about any (x) Sequence (MIxS), Darwin Core (DwC) and Access to Biological Collection Data (ABCD). These activities take place in the context of ongoing FAIR ('Findable, Accessible, Interoperable and Reusable') biodiversity research data publishing via the German Federation for Biological Data (GFBio) network (https://www.gfbio.org/). Version 1.1 of the schema, with extended collections of structural and operational design concepts, is scheduled for 2020.
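Because a normative XSD is published, conformance of a MOD-CO-structured record can be checked mechanically. The following sketch shows one way such a validation might look in Python with lxml; the instance file name is hypothetical, and the XSD may import further schemas that must be resolvable for compilation to succeed.

```python
# Hedged sketch: validate a (hypothetical) MOD-CO instance document against
# the normative XSD cited above.
import requests
from lxml import etree

XSD_URL = "http://schema.mod-o.net/MOD-CO_1.0.xsd"  # URL as given in the abstract

schema_doc = etree.fromstring(requests.get(XSD_URL, timeout=30).content)
schema = etree.XMLSchema(schema_doc)

record = etree.parse("modco_record.xml")  # hypothetical MOD-CO record
if schema.validate(record):
    print("Record conforms to MOD-CO 1.0")
else:
    for error in schema.error_log:
        print(error.line, error.message)
```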
APA, Harvard, Vancouver, ISO, and other styles
28

Salim, José Augusto, and Antonio Saraiva. "A Google Sheet Add-on for Biodiversity Data Standardization and Sharing." Biodiversity Information Science and Standards 4 (October 2, 2020). http://dx.doi.org/10.3897/biss.4.59228.

Full text
Abstract:
For those biologists and biodiversity data managers who are unfamiliar with the data standardization practices of information science, the use of complex software to assist in the creation of standardized datasets can be a barrier to sharing data. Since the ratification of the Darwin Core Standard (DwC) (Darwin Core Task Group 2009) by Biodiversity Information Standards (TDWG) in 2009, many datasets have been published and shared through a variety of data portals. In the early stages of biodiversity data sharing, the protocol Distributed Generic Information Retrieval (DiGIR), progenitor of DwC, and later the protocols BioCASe and the TDWG Access Protocol for Information Retrieval (TAPIR) (De Giovanni et al. 2010), were introduced for discovery, search and retrieval of distributed data, simplifying data exchange between information systems. Although these protocols are still in use, they are known to be inefficient for transferring large amounts of data (GBIF 2017). Because of that, in 2011 the Global Biodiversity Information Facility (GBIF) introduced the Darwin Core Archive (DwC-A), which allows more efficient data transfer and has become the preferred format for publishing data in the GBIF network. DwC-A is a structured collection of text files that makes use of the DwC terms to produce a single, self-contained dataset. Many tools for assisting data sharing using DwC-A have been introduced, such as the Integrated Publishing Toolkit (IPT) (Robertson et al. 2014), the Darwin Core Archive Assistant (GBIF 2010) and the Darwin Core Archive Validator. Despite these tools promoting and facilitating data sharing, many users have difficulties using them, mainly because of the lack of training in information science in the biodiversity curriculum (Convention on Biological Diversity 2012, Enke et al. 2012). However, while most users are very familiar with spreadsheets for storing and organizing their data, the adoption of the available solutions requires data transformation and training in information science and, more specifically, biodiversity informatics. For an example of how spreadsheets can simplify data sharing, see Stoev et al. (2016). In order to provide a more "familiar" approach to data sharing using DwC-A, we introduce a new tool as a Google Sheets add-on. The add-on, called the Darwin Core Archive Assistant Add-on, can be installed in the user's Google account from the G Suite Marketplace and used in conjunction with the Google Sheets application. The add-on assists the mapping of spreadsheet columns/fields to DwC terms (Fig. 1), similar to IPT, but with the advantage that it does not require the user to export the spreadsheet and import it into other software. Additionally, the add-on facilitates the creation of a star schema in accordance with DwC-A, through the definition of a "CORE_ID" (e.g. occurrenceID, eventID, taxonID) field between sheets of a document (Fig. 2). The add-on also provides an Ecological Metadata Language (EML) (Jones et al. 2019) editor (Fig. 3) with minimal fields to be filled in (i.e., the mandatory fields required by IPT), and helps users generate and share DwC-Archives stored in the user's Google Drive, which can be downloaded as a DwC-A or automatically uploaded to another public storage resource, such as the user's Zenodo account (Fig. 4).
We expect that the Google Sheets add-on introduced here, in conjunction with IPT, will promote biodiversity data sharing in a standardized format, as it requires minimal training and simplifies the process of data sharing from the user's perspective, mainly for those users who are not familiar with IPT but have historically worked with spreadsheets. Although a DwC-A generated by the add-on still needs to be published using IPT, the add-on provides a simpler interface (i.e., a spreadsheet) for mapping datasets to DwC than IPT does. Even though IPT includes many more features than the Darwin Core Archive Assistant Add-on, we expect that the add-on can be a "starting point" for users unfamiliar with biodiversity informatics before they move on to more advanced data publishing tools. On the other hand, Zenodo integration allows users to share and cite their standardized datasets without publishing them via IPT, which can be useful for users without access to an IPT installation. Additionally, we are working on new features, and future releases will include the automatic generation of globally unique identifiers for shared records, the possibility of adding additional data standards and DwC extensions, and integration with the GBIF REST API and the IPT REST API.
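For readers unfamiliar with the DwC-A "star schema" that the add-on produces behind the scenes, the sketch below builds a minimal archive by hand in Python: a tab-separated core file plus a meta.xml that maps columns to Darwin Core terms. The record content and field selection are illustrative, not the add-on's actual output.

```python
# Minimal Darwin Core Archive: one occurrence core file + its column mapping.
import zipfile

occurrences = (
    "occurrenceID\tscientificName\teventDate\n"
    "occ-1\tApis mellifera\t2020-05-01\n"
)
meta = """<?xml version="1.0" encoding="UTF-8"?>
<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core rowType="http://rs.tdwg.org/dwc/terms/Occurrence"
        fieldsTerminatedBy="\\t" linesTerminatedBy="\\n" ignoreHeaderLines="1">
    <files><location>occurrence.txt</location></files>
    <id index="0"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
    <field index="2" term="http://rs.tdwg.org/dwc/terms/eventDate"/>
  </core>
</archive>
"""
with zipfile.ZipFile("dwca.zip", "w") as z:  # a self-contained DwC-A
    z.writestr("occurrence.txt", occurrences)
    z.writestr("meta.xml", meta)
```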
APA, Harvard, Vancouver, ISO, and other styles
29

Le Bras, Yvan, Laurent Poncet, and Jean-Denis Vigne. "Towards a French National Biodiversity Virtual Research Environment." Biodiversity Information Science and Standards 3 (August 20, 2019). http://dx.doi.org/10.3897/biss.3.39216.

Full text
Abstract:
Research processes in biodiversity are evolving at a rapid pace, particularly regarding data-related steps from collection to analysis. This evolution, mainly due to technological advances, provides more powerful equipment and generalizes the digitization of research data and associated products. It is now urgent to accelerate good practices in scientific data management and analysis in order to offer products and services corresponding to the new context, which demands ever more openness and ever more FAIRness (Wilkinson et al. 2016). Using information and communication technologies such as international standards and software (the Ecological Metadata Language and associated solutions for metadata management, and the Galaxy web platform for data analysis), we propose, through the national research e-infrastructure called "Pôle national de données de biodiversité" (PNDB, formerly ECOSCOPE), to build a new type of biodiversity Virtual Research Environment (VRE) for French communities. Although deployment of this kind of environment is challenging, it represents an opportunity to pave the way towards better research processes through enhanced collaboration, data management, analysis practices and resource optimization.
APA, Harvard, Vancouver, ISO, and other styles
30

Royaux, Coline, Olivier Norvez, Marie Jossé, Elie Arnaud, Julien Sananikone, Sandrine Pavoine, Dominique Pelletier, Jean-Baptiste Mihoub, and Yvan Le Bras. "From Biodiversity Observation Networks to Datasets and Workflows Supporting Biodiversity Indicators, a French Biodiversity Observation Network (BON) Essential Biodiversity Variables (EBV) Operationalization Pilot using Galaxy and Ecological Metadata Language." Biodiversity Information Science and Standards 6 (September 16, 2022). http://dx.doi.org/10.3897/biss.6.94957.

Full text
Abstract:
Integration of biological data across different ecological scales is complex! The biodiversity community (scientists, policy makers, managers, citizens, NGOs) needs to build a framework of harmonized and interoperable data from raw, heterogeneous and scattered datasets. Such a framework will help the observation, measurement and understanding of the spatio-temporal dynamics of biodiversity from local to global scales. One of the most relevant approaches to reach that aim is the concept of Essential Biodiversity Variables (EBV). As we can potentially extract a lot of information from raw datasets sampled at different ecological scales, the EBV concept provides useful leverage for identifying appropriate data to be collated, as well as associated analytical workflows for processing these data. Thanks to FAIR (Findable, Accessible, Interoperable, Reusable) implementation of data and source code, it is possible to make a transparent assessment of biodiversity by generating operational biodiversity indicators (that can be reused and adapted) through the EBV framework, and to help design or improve biodiversity monitoring at various scales. Through the BiodiFAIRse GO FAIR Implementation Network, we established how ecological and environmental sciences can benefit from existing open standards, tools and platforms used by European, Australian and United States infrastructures, particularly the Galaxy platform for source code accessibility, the DataONE network of data catalogs, and the Ecological Metadata Language standard for data management. We propose that these implementation choices can help fight the biodiversity crisis by supporting the important mission of GEO BON (Group on Earth Observations Biodiversity Observation Network): "Improve the acquisition, coordination and delivery of biodiversity observations and related services to users including decision makers and the scientific community" (GEO BON 2022).
APA, Harvard, Vancouver, ISO, and other styles
31

Buschbom, Jutta, Breda Zimkus, Andrew Bentley, Mariko Kageyama, Christopher Lyal, Dirk Neumann, Andra Waagmeester, and Alex Hardisty. "Participative Decision Making and the Sharing of Benefits: Laws, ethics, and data protection for building extended global communities." Biodiversity Information Science and Standards 5 (September 14, 2021). http://dx.doi.org/10.3897/biss.5.75168.

Full text
Abstract:
Transdisciplinary and cross-cultural cooperation and collaboration are needed to build extended, densely interconnected information resources. These are the prerequisites for the successful implementation and execution of, for example, an ambitious monitoring framework accompanying the post-2020 Global Biodiversity Framework (GBF) of the Convention on Biological Diversity (CBD; SCBD 2021). Data infrastructures that meet the requirements and preferences of concerned communities can focus and attract community involvement, thereby promoting participatory decision making and the sharing of benefits. Community acceptance, in turn, drives the development of the data resources and data use. Earlier this year, the alliance for biodiversity knowledge (2021a) conducted forum-based consultations seeking community input on designing the next generation of digital specimen representations and consequently enhanced infrastructures. The multitude of connections that arise from extending the digital specimen representations through linkages in all "directions" will form a powerful network of information for research and application. Yet, with the power of an extended, accessible data network comes the responsibility to protect sensitive information (e.g., the locations of threatened populations, culturally context-sensitive traditional knowledge, or businesses' fundamental data and infrastructure assets). In addition, existing legislation regulates access and the fair and equitable sharing of benefits. Current negotiations on 'Digital Sequence Information' under the CBD suggest such obligations might increase and become more complex in the context of extensible information networks. For example, in the case of data and resources funded by taxpayers in the EU, access should follow the general principle of being "as open as possible; as closed as is legally necessary" (cp. EC 2016). At the same time, the international regulations of the CBD Nagoya Protocol (SCBD 2011) need to be taken into account. Summarizing the main outcomes of the consultation discussions in the forum thread "Meeting legal/regulatory, ethical and sensitive data obligations" (alliance for biodiversity knowledge 2021b), we propose a framework of ten guidelines and functionalities to achieve community building and drive application:
1. Substantially contribute to the conservation and protection of biodiversity (cp. EC 2020).
2. Use language that is CBD conformant.
3. Show the importance of the digital and extensible specimen infrastructure for the continuing design and implementation of the post-2020 GBF, as well as the mobilisation and aggregation of data for its monitoring elements and indicators.
4. Strive to openly publish as much data and metadata as possible online.
5. Establish a powerful and well-thought-out layer of user and data access management, ensuring security of 'sensitive data'.
6. Encrypt data and metadata where necessary at the level of an individual specimen or digital object; provide access via digital cryptographic keys.
7. Link obligations, rights and cultural information regarding use to the digital key (e.g. CARE principles (Carroll et al. 2020), Local Contexts labels (Local Contexts 2021), licenses, permits, use and loan agreements, etc.).
8. Implement a transactional system that records every transaction.
9. Amplify workforce capacity across the digital realm, its work areas and workflows.
10. Do no harm (EC 2020): Reduce the social and ecological footprint of the implementation, aiming for a long-term sustainable infrastructure across its life-cycle, including development, implementation and management stages.
Balancing the needs for open access, as well as protection, accountability and sustainability, the framework is designed to function as a robust interface between the (research) infrastructure implementing the extensible network of digital specimen representations and the myriad of applications and operations in the real world. With the legal, ethical and data protection layers of the framework in place, the infrastructure will provide legal clarity and security for data providers and users, specifically in the context of access and benefit sharing under the CBD and its Nagoya Protocol. Forming layers of protection, the characteristics and functionalities of the framework are envisioned to be flexible and finely grained, adjustable to fulfil the needs and preferences of a wide range of stakeholders and communities, while remaining focused on the protection and rights of the natural world. Respecting different value systems and national policies, the framework is expected to allow a divergence of views to coexist and balance differing interests. Thus, the infrastructure of the digital extensible specimen network is fair and equitable to many providers and users. This foundation has the capacity and potential to bring together the diverse global communities using, managing and protecting biodiversity.
APA, Harvard, Vancouver, ISO, and other styles
32

Vaidya, Gaurav, Hilmar Lapp, and Nico Cellinese. "Enabling Machines to Integrate Biodiversity Data with Evolutionary Knowledge." Biodiversity Information Science and Standards 4 (October 2, 2020). http://dx.doi.org/10.3897/biss.4.59088.

Full text
Abstract:
Most biological data and knowledge are directly or indirectly linked to biological taxa via taxon names. Using taxon names is one of the most fundamental and ubiquitous ways in which a wide range of biological data are integrated, aggregated, and indexed, from genomic and microbial diversity to macro-ecological data. To this day, the names used, as well as most methods and resources developed for this purpose, are drawn from Linnaean nomenclature. This leads to numerous problems when applied to data-intensive science that depends on computation to take full advantage of the vast – and rapidly increasing – amount of available digital biodiversity data. The theoretical and practical complexities of reconciling taxon names and concepts have plagued the systematics community for decades, and now more than ever before, Linnaean names based in Linnaean taxonomy, by far the most prevalent means of linking data to taxa, are unfit for the age of computation-driven data science, due to fundamental theoretical and practical shortfalls that cannot be cured. We propose an alternate approach based on the use of phylogenetic clade definitions, a well-developed method for unambiguously defining the semantics of a clade concept in terms of shared evolutionary ancestry (de Queiroz and Gauthier 1990, de Queiroz and Gauthier 1994). These semantics allow locating the defined clade on any phylogeny, or showing that a clade is inconsistent with the topology of a given phylogeny and hence cannot be present on it at all. We have built a workflow for expressing phylogenetic clade definitions in terms of shared-ancestor and excluded-lineage properties, and for locating these definitions on any input phylogeny. Once these definitions have been located, we can use the list of species found within that clade on that phylogeny to aggregate occurrence data from the Global Biodiversity Information Facility (GBIF). Thus, our approach uses clade definitions with machine-understandable semantics to programmatically and reproducibly aggregate biodiversity data by higher-level taxonomic concepts. This approach has several advantages over the use of taxonomic hierarchies:
Unlike taxa, the semantics of clade definitions can be expressed in unambiguous, machine-understandable and reproducible terms and language.
The resolution of a given clade definition depends on the phylogeny being used. Thus, if the phylogeny of groups of interest is updated in light of new evolutionary knowledge, the clade definition can be applied to the new phylogeny to obtain an updated list of clade members consistent with the updated evolutionary knowledge.
Machine reproducibility of analyses is possible simply by archiving the machine-readable representations of the clade definition and the phylogeny being used.
Clade definitions can be created by biologists as needed or can be reused from those published in peer-reviewed journals. In addition, nearly 300 peer-reviewed clade definitions were recently published as part of the Phylonym volume of the PhyloCode (de Queiroz et al. 2020) and are now available on the RegNum website. As part of the Phyloreferencing Project, we digitize this collection as a machine-readable ontology, where each clade is represented as a class defined by logical conjunctions for class membership, corresponding to a set of necessary and sufficient conditions of shared or divergent evolutionary ancestry. We call these classes phyloreferences, and have created a fully automated workflow for digitizing the RegNum database content into an OWL ontology (W3C OWL Working Group 2012) that we call the Clade Ontology. This ontology includes reference phylogenies and additional metadata about the verbatim clade definitions. Once complete, the Clade Ontology will include all clade definitions from RegNum, both those included in Phylonym after passing peer review and those contributed by the community, whether or not under the PhyloCode nomenclature. As an openly available community resource, the Clade Ontology will allow researchers to aggregate biodiversity data for comparative biology with grouping semantics that are transparent, machine-processable, and reproducible. In our presentation, we will demonstrate the use of phyloreferences to locate clades on the Open Tree of Life synthetic tree (Hinchliff et al. 2015), to retrieve lists of species in each clade, and to use them to find and aggregate occurrence records in GBIF. We will also describe the workflow we are currently using to build and test the Clade Ontology, and our plans for publishing this resource. Finally, we will discuss the advantages and disadvantages of this approach as compared to taxonomic checklists.
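As a hedged sketch of the final aggregation step: once a phyloreference has been resolved to a list of clade members on some phylogeny, occurrence records can be gathered per member from the public GBIF API and summed at the clade level. The species names below are illustrative stand-ins for a resolved clade, not output of the actual workflow.

```python
# Aggregate GBIF occurrence counts over a (hypothetical) resolved clade.
import requests

API = "https://api.gbif.org/v1"
clade_members = ["Apis mellifera", "Apis cerana"]  # stand-in resolution result

total = 0
for name in clade_members:
    # Match the name to a GBIF taxon key, then count its occurrence records.
    match = requests.get(f"{API}/species/match", params={"name": name}).json()
    key = match.get("usageKey")
    if key is None:
        continue
    res = requests.get(f"{API}/occurrence/search",
                       params={"taxonKey": key, "limit": 0}).json()
    total += res["count"]
    print(name, key, res["count"])
print("Clade-level occurrence total:", total)
```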
APA, Harvard, Vancouver, ISO, and other styles
