To view the other types of publications on this topic, follow the link: Scientific workflow and FAIR protocols.

Journal articles on the topic "Scientific workflow and FAIR protocols"

Consult the top 50 journal articles for research on the topic "Scientific workflow and FAIR protocols".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Celebi, Remzi, Joao Rebelo Moreira, Ahmed A. Hassan, Sandeep Ayyar, Lars Ridder, Tobias Kuhn, and Michel Dumontier. "Towards FAIR protocols and workflows: the OpenPREDICT use case". PeerJ Computer Science 6 (September 21, 2020): e281. http://dx.doi.org/10.7717/peerj-cs.281.

Annotation:
It is essential for the advancement of science that researchers share, reuse and reproduce each other’s workflows and protocols. The FAIR principles are a set of guidelines that aim to maximize the value and usefulness of research data, and emphasize the importance of making digital objects findable and reusable by others. The question of how to apply these principles not just to data but also to the workflows and protocols that consume and produce them is still under debate and poses a number of challenges. In this paper we describe a two-fold approach of simultaneously applying the FAIR principles to scientific workflows as well as the involved data. We apply and evaluate our approach on the case of the PREDICT workflow, a highly cited drug repurposing workflow. This includes FAIRification of the involved datasets, as well as applying semantic technologies to represent and store data about the detailed versions of the general protocol, of the concrete workflow instructions, and of their execution traces. We propose a semantic model to address these specific requirements, which was evaluated by answering competency questions. This semantic model consists of classes and relations from a number of existing ontologies, including Workflow4ever, PROV, EDAM, and BPMN. This then allowed us to formulate and answer new kinds of competency questions. Our evaluation shows the high degree to which our FAIRified OpenPREDICT workflow now adheres to the FAIR principles and the practicality and usefulness of being able to answer our new competency questions.
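To make the kind of provenance modelling described above concrete, the following minimal sketch records one workflow execution with rdflib and the W3C PROV-O vocabulary, then answers a simple competency question with SPARQL. The namespace, resource names and timestamp are illustrative; they are not taken from the OpenPREDICT model.

```python
# Minimal sketch: record one workflow run as PROV-O triples and query it.
# Resource names are illustrative, not taken from the OpenPREDICT semantic model.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("https://example.org/openpredict/")  # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

workflow = EX["workflow/predict"]          # the general protocol / workflow plan
run = EX["run/2020-09-21"]                 # one execution trace
dataset = EX["dataset/drug-indications-v1"]
predictions = EX["dataset/predictions-v1"]

g.add((workflow, RDF.type, PROV.Plan))
g.add((run, RDF.type, PROV.Activity))
g.add((run, PROV.wasAssociatedWith, EX["agent/researcher-1"]))
g.add((run, PROV.used, dataset))
g.add((run, PROV.startedAtTime, Literal("2020-09-21T09:00:00", datatype=XSD.dateTime)))
g.add((predictions, RDF.type, PROV.Entity))
g.add((predictions, PROV.wasGeneratedBy, run))

# A toy competency question: which inputs were used by runs that generated an output?
query = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT ?input WHERE {
  ?run a prov:Activity ;
       prov:used ?input .
  ?output prov:wasGeneratedBy ?run .
}
"""
for row in g.query(query):
    print("input used:", row.input)

print(g.serialize(format="turtle"))
```
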
2

Yuen, Denis, Louise Cabansay, Andrew Duncan, Gary Luu, Gregory Hogue, Charles Overbeck, Natalie Perez, et al. "The Dockstore: enhancing a community platform for sharing reproducible and accessible computational protocols". Nucleic Acids Research 49, W1 (May 12, 2021): W624–W632. http://dx.doi.org/10.1093/nar/gkab346.

Annotation:
Dockstore (https://dockstore.org/) is an open source platform for publishing, sharing, and finding bioinformatics tools and workflows. The platform has facilitated large-scale biomedical research collaborations by using cloud technologies to increase the Findability, Accessibility, Interoperability and Reusability (FAIR) of computational resources, thereby promoting the reproducibility of complex bioinformatics analyses. Dockstore supports a variety of source repositories, analysis frameworks, and language technologies to provide a seamless publishing platform for authors to create a centralized catalogue of scientific software. The ready-to-use packaging of hundreds of tools and workflows, combined with the implementation of interoperability standards, enables users to launch analyses across multiple environments. Dockstore is widely used: more than twenty-five high-profile organizations share analysis collections through the platform in a variety of workflow languages, including the Broad Institute's GATK best practice and COVID-19 workflows (WDL), nf-core workflows (Nextflow), the Intergalactic Workflow Commission tools (Galaxy), and workflows from Seven Bridges (CWL), to highlight just a few. Here we describe the improvements made over the last four years, including the expansion of system integrations supporting authors, the addition of collaboration features and analysis platform integrations supporting users, and other enhancements that improve the overall scientific reproducibility of Dockstore content.
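Dockstore exposes its catalogue programmatically through the GA4GH Tool Registry Service (TRS) API. The sketch below lists a few registered workflows; the base path follows the TRS v2 convention and should be checked against the current Dockstore API documentation before use.

```python
# Hedged sketch: list a few workflows from Dockstore via the GA4GH TRS v2 API.
# The endpoint path follows the TRS standard; verify it against Dockstore's docs.
import requests

BASE = "https://dockstore.org/api/ga4gh/trs/v2"  # assumed TRS base path

resp = requests.get(f"{BASE}/tools",
                    params={"toolClass": "Workflow", "limit": 5},
                    timeout=30)
resp.raise_for_status()
for tool in resp.json():
    # Field names per the TRS Tool schema; .get() keeps the sketch tolerant.
    print(tool.get("id"), "-", tool.get("name") or tool.get("organization"))
```
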
3

Zulfiqar, Mahnoor, Michael R. Crusoe, Birgitta König-Ries, Christoph Steinbeck, Kristian Peters, and Luiz Gadelha. "Implementation of FAIR Practices in Computational Metabolomics Workflows—A Case Study". Metabolites 14, No. 2 (February 10, 2024): 118. http://dx.doi.org/10.3390/metabo14020118.

Annotation:
Scientific workflows facilitate the automation of data analysis tasks by integrating various software and tools executed in a particular order. To enable transparency and reusability in workflows, it is essential to implement the FAIR principles. Here, we describe our experiences implementing the FAIR principles for metabolomics workflows using the Metabolome Annotation Workflow (MAW) as a case study. MAW is specified using the Common Workflow Language (CWL), allowing for the subsequent execution of the workflow on different workflow engines. MAW is registered using a CWL description on WorkflowHub. During the submission process on WorkflowHub, a CWL description is used for packaging MAW using the Workflow RO-Crate profile, which includes metadata in Bioschemas. Researchers can use this narrative discussion as a guideline to commence using FAIR practices for their bioinformatics or cheminformatics workflows while incorporating necessary amendments specific to their research area.
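For readers unfamiliar with workflow languages, the sketch below emits a minimal, generic CWL v1.2 CommandLineTool description as YAML. It illustrates the format only and is not an excerpt of the Metabolome Annotation Workflow.

```python
# Minimal sketch: write a generic CWL v1.2 CommandLineTool description as YAML.
# The tool, script name and file names are illustrative, not part of MAW.
import yaml  # PyYAML

tool = {
    "cwlVersion": "v1.2",
    "class": "CommandLineTool",
    "baseCommand": ["python", "annotate.py"],  # hypothetical annotation script
    "inputs": {
        "spectra": {"type": "File", "inputBinding": {"position": 1}},
    },
    "outputs": {
        "annotations": {"type": "File", "outputBinding": {"glob": "annotations.tsv"}},
    },
}

with open("annotate_tool.cwl", "w") as fh:
    yaml.safe_dump(tool, fh, sort_keys=False)
```
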
4

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies". Scalable Computing: Practice and Experience 20, No. 2 (May 2, 2019): iii–vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Annotation:
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies Cloud computing has been established as the most popular as well as suitable computing infrastructure providing on-demand, scalable and pay-as-you-go computing resources and services for the state-of-the-art ICT applications which generate a massive amount of data. Though Cloud is certainly the most fitting solution for most of the applications with respect to processing capability and storage, it may not be so for the real-time applications. The main problem with Cloud is the latency as the Cloud data centres typically are very far from the data sources as well as the data consumers. This latency is ok with the application domains such as enterprise or web applications, but not for the modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicle, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart building, smart city, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of application is that the latency between the data generation and consumption should be minimal. For that, the generated data need to be processed locally, instead of sending to the Cloud. This approach is known as Edge computing where the data processing is done at the network edge in the edge devices such as set-top boxes, access points, routers, switches, base stations etc. which are typically located at the edge of the network. These devices are increasingly being incorporated with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to the Emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient sophisticated sensors. Different Edge computing architectures are proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.). All of these enable the IoT and sensor data to be processed closer to the data sources. But, among them, Fog computing, a Cisco initiative, has attracted the most attention of people from both academia and corporate and has been emerged as a new computing-infrastructural paradigm in recent years. Though Fog computing has been proposed as a different computing architecture than Cloud, it is not meant to replace the Cloud. Rather, Fog computing extends the Cloud services to network edges for providing computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) are supposed to pre-process the data, serve the need of the associated applications preliminarily, and forward the data to the Cloud if the data are needed to be stored and analysed further. Fog computing enhances the benefits from smart devices operational not only in network perimeter but also under cloud servers. Fog-enabled services can be deployed anywhere in the network, and with these services provisioning and management, huge potential can be visualized to enhance intelligence within computing networks to realize context-awareness, high response time, and network traffic offloading. Several possibilities of Fog computing are already established. For example, sustainable smart cities, smart grid, smart logistics, environment monitoring, video surveillance, etc. 
To design and implementation of Fog computing systems, various challenges concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. are needed to be addressed. Also, to make Fog compatible with Cloud several factors such as Fog and Cloud system integration, service collaboration between Fog and Cloud, workload balance between Fog and Cloud, and so on need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of the Scalable Computing: Practice and Experience. We had received 20 Research Papers and out of which 14 Papers are selected for Publication. The aim of this special issue is to highlight Recent Trends and Future of Fog and Edge Computing, Services and Enabling technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to Fog Computing, Cloud Computing and Edge Computing. Sujata Dash et al. contributed a paper titled “Edge and Fog Computing in Healthcare- A Review” in which an in-depth review of fog and mist computing in the area of health care informatics is analysed, classified and discussed. The review presented in this paper is primarily focussed on three main aspects: The requirements of IoT based healthcare model and the description of services provided by fog computing to address then. The architecture of an IoT based health care system embedding fog computing layer and implementation of fog computing layer services along with performance and advantages. In addition to this, the researchers have highlighted the trade-off when allocating computational task to the level of network and also elaborated various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al. in the paper titled “Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware” proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization in hourly billing cycle while giving the quality of service to end-users. The proposed technique use time series workload forecasting, CPU utilization and response time in the analysis phase. The proposed technique is tested using CloudSim simulator and R language is used to implement prediction model on ClarkNet weblog. The proposed approach is compared with two baseline approaches i.e. Cost-aware (LRM) and (ARMA). The response time, CPU utilization and predicted request are applied in the analysis and planning phase for scaling decisions. The profit-aware surplus VM selection policy used in the execution phase for select the appropriate VM for scale-down. The result shows that the proposed model for web applications provides fair utilization of resources with minimum cost, thus provides maximum profit to application provider and QoE to the end users. 
Akshi kumar and Abhilasha Sharma in the paper titled “Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance” utilized a semantic knowledge model for investigating public opinion towards adaption of fog enabled services for governance and comprehending the significance of two s-components (sentic and social) in aforesaid structure that specifically visualize fog enabled Sentic-Social Governance. The results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology driven TF-IDF feature extraction to find the best opinion mining model with optimal accuracy. The results concluded that implementation of ontology driven opinion mining for feature extraction in polarity classification outperforms the traditional TF-IDF method validated over baseline supervised learning algorithms with an average of 7.3% improvement in accuracy and approximately 38% reduction in features has been reported. Avinash Kaur and Pooja Gupta in the paper titled “Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing” proposed novel hybrid balanced task clustering algorithm using the parameter of impact factor of workflows along with the structure of workflow and using this technique, tasks can be considered for clustering either vertically or horizontally based on value of impact factor. The testing of the algorithm proposed is done on Workflowsim- an extension of CloudSim and DAG model of workflow was executed. The Algorithm was tested on variables- Execution time of workflow and Performance Gain and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB) and Horizontal Impact Factor Balancing (HIFB) and results stated that proposed algorithm is almost 5-10% better in makespan time of workflow depending on the workflow used. Pijush Kanti Dutta Pramanik et al. in the paper titled “Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges” presented a comprehensive statistical survey of the various commercial CPUs, GPUs, SoCs for smartphones confirming the capability of the SCC as an alternative to HPC. An exhaustive survey is presented on the present and optimistic future of the continuous improvement and research on different aspects of smartphone battery and other alternative power sources which will allow users to use their smartphones for SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam in the paper titled “The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud” proposed a novel method to detect slow HTTP DDoS attacks in cloud to overcome the issue of consuming all available server resources and making it unavailable to the real users. The proposed method is implemented using OpenStack cloud platform with slowHTTPTest tool. The results stated that proposed technique detects the attack in efficient manner. Mandeep Kaur and Rajni Mohana in the paper titled “Static Load Balancing Technique for Geographically partitioned Public Cloud” proposed a novel approach focused upon load balancing in the partitioned public cloud by combining centralized and decentralized approaches, assuming the presence of fog layer. A load balancer entity is used for decentralized load balancing at partitions and a controller entity is used for centralized level to balance the overall load at various partitions. 
Results are compared with First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms. In this work, the researchers compared the Waiting Time, Finish Time and Actual Run Time of tasks using these algorithms. To reduce the number of unhandled jobs, a new load state is introduced which checks load beyond conventional load states. Major objective of this approach is to reduce the need of runtime virtual machine migration and to reduce the wastage of resources, which may be occurring due to predefined values of threshold. Mukta and Neeraj Gupta in the paper titled “Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space” proposes an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate ABW on a link. The major contributions of the proposed work are: i) it uses mathematical models based on renewal theory to calculate the collision probability of data packets which makes the process simple and accurate, ii) consideration of mobility under 3-D space to predict the link failure and provides an accurate admission control. To test the proposed technique, the researcher used NS-2 simulator to compare the proposed technique i.e. AABWM with AODV, ABE, IAB and IBEM on throughput, Packet loss ratio and Data delivery. Results stated that AABWM performs better as compared to other approaches. R.Sridharan and S. Domnic in the paper titled “Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment” proposed a novel heuristic IcAPER,(Inter-communication Aware Placement for Elastic Requests) algorithm. The proposed algorithm uses the network neighborhood machine for placement, once current resource is fully utilized by the application. The performance IcAPER algorithm is compared with First Come First Serve (FCFS), Random and First Fit Decreasing (FFD) algorithms for the parameters (a) resource utilization (b) resource fragmentation and (c) Number of requests having intercommunicating tasks placed on to same PM using CloudSim simulator. Simulation results shows IcAPER maps 34% more tasks on to the same PM and also increase the resource utilization by 13% while decreasing the resource fragmentation by 37.8% when compared to other algorithms. Velliangiri S. et al. in the paper titled “Trust factor based key distribution protocol in Hybrid Cloud Environment” proposed a novel security protocol comprising of two stages: first stage, Group Creation using the trust factor and develop key distribution security protocol. It performs the communication process among the virtual machine communication nodes. Creating several groups based on the cluster and trust factors methods. The second stage, the ECC (Elliptic Curve Cryptography) based distribution security protocol is developed. The performance of the Trust Factor Based Key Distribution protocol is compared with the existing ECC and Diffie Hellman key exchange technique. The results state that the proposed security protocol has more secure communication and better resource utilization than the ECC and Diffie Hellman key exchange technique in the Hybrid cloud. Vivek kumar prasad et al. in the paper titled “Influence of Monitoring: Fog and Edge Computing” discussed various techniques involved for monitoring for edge and fog computing and its advantages in addition to a case study based on Healthcare monitoring system. Avinash Kaur et al. 
elaborated a comprehensive view of existing data placement schemes proposed in literature for cloud computing. Further, it classified data placement schemes based on their assess capabilities and objectives and in addition to this comparison of data placement schemes. Parminder Singh et al. presented a comprehensive review of Auto-Scaling techniques of web applications in cloud computing. The complete taxonomy of the reviewed articles is done on varied parameters like auto-scaling, approach, resources, monitoring tool, experiment, workload and metric, etc. Simar Preet Singh et al. in the paper titled “Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform” proposed a novel scheme to improve the user contentment by improving the cost to operation length ratio, reducing the customer churn, and boosting the operational revenue. The proposed scheme is learnt to reduce the queue size by effectively allocating the resources, which resulted in the form of quicker completion of user workflows. The proposed method results are evaluated against the state-of-the-art scene with non-power aware based task scheduling mechanism. The results were analyzed using parameters-- energy, SLA infringement and workflow execution delay. The performance of the proposed schema was analyzed in various experiments particularly designed to analyze various aspects for workflow processing on given fog resources. The LRR (35.85 kWh) model has been found most efficient on the basis of average energy consumption in comparison to the LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh) and IQR (47.87 kWh). The LRR model has been also observed as the leader when compared on the basis of number of VM migrations. The LRR (2520 VMs) has been observed as best contender on the basis of mean of number of VM migrations in comparison with LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs) and IQR (5352 VMs).
5

Sinaci, A. Anil, Francisco J. Núñez-Benjumea, Mert Gencturk, Malte-Levin Jauer, Thomas Deserno, Catherine Chronaki, Giorgio Cangioli, et al. "From Raw Data to FAIR Data: The FAIRification Workflow for Health Research". Methods of Information in Medicine 59, S 01 (June 2020): e21–e32. http://dx.doi.org/10.1055/s-0040-1713684.

Annotation:
Background: FAIR (findability, accessibility, interoperability, and reusability) guiding principles seek the reuse of data and other digital research input, output, and objects (algorithms, tools, and workflows that led to that data), making them findable, accessible, interoperable, and reusable. GO FAIR, a bottom-up, stakeholder-driven and self-governed initiative, defined a seven-step FAIRification process focusing on data, but also indicating the required work for metadata. This FAIRification process aims at addressing the translation of raw datasets into FAIR datasets in a general way, without considering specific requirements and challenges that may arise when dealing with some particular types of data. Objectives: This scientific contribution addresses the architecture design of an open technological solution built upon the FAIRification process proposed by “GO FAIR”, which addresses the identified gaps that such a process has when dealing with health datasets. Methods: A common FAIRification workflow was developed by applying restrictions on existing steps and introducing new steps for specific requirements of health data. These requirements have been elicited after analyzing the FAIRification workflow from different perspectives: technical barriers, ethical implications, and legal framework. This analysis identified gaps when applying the FAIRification process proposed by GO FAIR to health research data management in terms of data curation, validation, deidentification, versioning, and indexing. Results: A technological architecture based on the use of Health Level Seven International (HL7) FHIR (fast health care interoperability resources) resources is proposed to support the revised FAIRification workflow. Discussion: Research funding agencies all over the world increasingly demand the application of the FAIR guiding principles to health research output. Existing tools do not fully address the identified needs for health data management. Therefore, researchers may benefit in the coming years from a common framework that supports the proposed FAIRification workflow applied to health datasets. Conclusion: Routine health care datasets or data resulting from health research can be FAIRified, shared and reused within the health research community by following the proposed FAIRification workflow and implementing the technical architecture.
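As an illustration of the kind of resource such an architecture exchanges, the sketch below builds a minimal, de-identified FHIR Observation as plain JSON. The LOINC code, values and pseudonymized reference are examples, not taken from any real dataset.

```python
# Illustrative sketch: a minimal, de-identified FHIR Observation as JSON.
# The LOINC code and values are examples only.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [
            {"system": "http://loinc.org", "code": "29463-7", "display": "Body weight"}
        ]
    },
    "subject": {"reference": "Patient/pseudonym-0001"},  # pseudonymized reference
    "effectiveDateTime": "2020-06-01",
    "valueQuantity": {
        "value": 72.5,
        "unit": "kg",
        "system": "http://unitsofmeasure.org",
        "code": "kg",
    },
}

print(json.dumps(observation, indent=2))
```
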
6

de Visser, Casper, Lennart F. Johansson, Purva Kulkarni, Hailiang Mei, Pieter Neerincx, K. Joeri van der Velde, Péter Horvatovich, et al. "Ten quick tips for building FAIR workflows". PLOS Computational Biology 19, No. 9 (September 28, 2023): e1011369. http://dx.doi.org/10.1371/journal.pcbi.1011369.

Annotation:
Research data is accumulating rapidly and with it the challenge of fully reproducible science. As a consequence, implementation of high-quality management of scientific data has become a global priority. The FAIR (Findable, Accessible, Interoperable and Reusable) principles provide practical guidelines for maximizing the value of research data; however, processing data using workflows—systematic executions of a series of computational tools—is equally important for good data management. The FAIR principles have recently been adapted to Research Software (FAIR4RS Principles) to promote the reproducibility and reusability of any type of research software. Here, we propose a set of 10 quick tips, drafted by experienced workflow developers, that will help researchers to apply FAIR4RS principles to workflows. The tips have been arranged according to the FAIR acronym, clarifying the purpose of each tip with respect to the FAIR4RS principles. Altogether, these tips can be seen as practical guidelines for workflow developers who aim to contribute to more reproducible and sustainable computational science, aiming to positively impact the open science and FAIR community.
7

Albtoush, Alaa, Farizah Yunus, Khaled Almi’ani, and Noor Maizura Mohamad Noor. "Structure-Aware Scheduling Methods for Scientific Workflows in Cloud". Applied Sciences 13, No. 3 (February 3, 2023): 1980. http://dx.doi.org/10.3390/app13031980.

Annotation:
Scientific workflows consist of numerous tasks subject to constraints on data dependency. Effective workflow scheduling is perpetually necessary to efficiently utilize the provided resources to minimize workflow execution cost and time (makespan). Accordingly, cloud computing has emerged as a promising platform for scheduling scientific workflows. In this paper, level- and hierarchy-based scheduling approaches were proposed to address the problem of scheduling scientific workflow in the cloud. In the level-based approach, tasks are partitioned into a set of isolated groups in which available virtual machines (VMs) compete to execute the groups’ tasks. Accordingly, based on a utility function, a task will be assigned to the VM that will achieve the highest utility by executing this task. The hierarchy-based approach employs a look-ahead approach, in which the partitioning of the workflow tasks is performed by considering the entire structure of the workflow, whereby the objective is to reduce the data dependency between the obtained groups. Additionally, in the hierarchy-based approach, a fair-share strategy is employed to determine the share (number of VMs) that will be assigned to each group of tasks. Dividing the available VMs based on the computational requirements of the task groups provides the hierarchy-based approach the advantage of further utilizing the VMs usage. The results show that, on average, both approaches improve the execution time and cost by 27% compared to the benchmarked algorithms.
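The level-based idea can be sketched in a few lines: group tasks by their depth in the dependency graph and, level by level, hand each task to the VM with the highest utility for it. The toy utility function below (the inverse of the estimated finish time) is our own simplification, not the authors' formulation.

```python
# Toy sketch of level-based workflow scheduling: tasks are grouped by DAG depth
# and each task goes to the VM with the highest utility (here: earliest finish).
from collections import defaultdict

def levels(tasks, deps):
    """deps maps task -> set of predecessors; returns task groups per level."""
    level = {}
    def depth(t):
        if t not in level:
            level[t] = 0 if not deps.get(t) else 1 + max(depth(p) for p in deps[t])
        return level[t]
    groups = defaultdict(list)
    for t in tasks:
        groups[depth(t)].append(t)
    return [groups[k] for k in sorted(groups)]

def schedule(tasks, deps, runtime, vm_speed):
    """runtime[t] = reference cost of task t; vm_speed[v] = relative speed of VM v."""
    ready_at = {v: 0.0 for v in vm_speed}  # when each VM becomes free
    plan = {}
    for group in levels(tasks, deps):
        for t in group:
            # utility = 1 / estimated finish time on that VM (simplified)
            best = max(vm_speed,
                       key=lambda v: 1.0 / (ready_at[v] + runtime[t] / vm_speed[v]))
            ready_at[best] += runtime[t] / vm_speed[best]
            plan[t] = best
    return plan, max(ready_at.values())

deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
plan, makespan = schedule(["A", "B", "C", "D"], deps,
                          {"A": 4, "B": 2, "C": 3, "D": 1},
                          {"vm1": 1.0, "vm2": 2.0})
print(plan, makespan)
```
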
8

Mahmoudi, Morteza, Saya Ameli, and Sherry Moss. "The urgent need for modification of scientific ranking indexes to facilitate scientific progress and diminish academic bullying". BioImpacts 10, No. 1 (September 25, 2019): 5–7. http://dx.doi.org/10.15171/bi.2019.30.

Annotation:
Academic bullying occurs when senior scientists direct abusive behavior such as verbal insults, public shaming, isolation, and threatening toward vulnerable junior colleagues such as postdocs, graduate students and lab members. We believe that one root cause of bullying behavior is the pressure felt by scientists to compete for rankings designed to measure their scientific worth. These ratings, such as the h-index, have several unintended consequences, one of which we believe is academic bullying. Under pressure to achieve higher and higher rankings, in exchange for positive evaluations, grants and recognition, senior scientists exert undue pressure on their junior staff in the form of bullying. Lab members have little or no recourse due to the lack of fair institutional protocols for investigating bullying, dependence on grant or institutional funding, fear of losing time and empirical work by changing labs, and vulnerability to visa cancellation threats among international students. We call for institutions to reconsider their dependence on these over-simplified surrogates for real scientific progress and to provide fair and just protocols that will protect targets of academic bullying from emotional and financial distress.
9

Ammar, Ammar, Serena Bonaretti, Laurent Winckers, Joris Quik, Martine Bakker, Dieter Maier, Iseult Lynch, Jeaphianne van Rijn, and Egon Willighagen. "A Semi-Automated Workflow for FAIR Maturity Indicators in the Life Sciences". Nanomaterials 10, No. 10 (October 20, 2020): 2068. http://dx.doi.org/10.3390/nano10102068.

Annotation:
Data sharing and reuse are crucial to enhance scientific progress and maximize return of investments in science. Although attitudes are increasingly favorable, data reuse remains difficult due to lack of infrastructures, standards, and policies. The FAIR (findable, accessible, interoperable, reusable) principles aim to provide recommendations to increase data reuse. Because of the broad interpretation of the FAIR principles, maturity indicators are necessary to determine the FAIRness of a dataset. In this work, we propose a reproducible computational workflow to assess data FAIRness in the life sciences. Our implementation follows principles and guidelines recommended by the maturity indicator authoring group and integrates concepts from the literature. In addition, we propose a FAIR balloon plot to summarize and compare dataset FAIRness. We evaluated the feasibility of our method on three real use cases where researchers looked for six datasets to answer their scientific questions. We retrieved information from repositories (ArrayExpress, Gene Expression Omnibus, eNanoMapper, caNanoLab, NanoCommons and ChEMBL), a registry of repositories, and a searchable resource (Google Dataset Search) via application program interfaces (API) wherever possible. With our analysis, we found that the six datasets met the majority of the criteria defined by the maturity indicators, and we showed areas where improvements can easily be reached. We suggest that use of standard schema for metadata and the presence of specific attributes in registries of repositories could increase FAIRness of datasets.
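The scoring step of such a workflow reduces to a small core: run a set of boolean maturity-indicator checks and aggregate them per FAIR dimension. The indicator names and results below are placeholders; the actual workflow evaluates indicators via repository APIs as described above.

```python
# Schematic sketch: aggregate boolean FAIR maturity-indicator results per principle.
# The indicator names and pass/fail values are placeholders, not real assessments.
from collections import defaultdict

results = [
    ("F", "globally unique persistent identifier", True),
    ("F", "rich metadata in a searchable resource", True),
    ("A", "metadata retrievable via a standard protocol", True),
    ("A", "metadata persist even if data are withdrawn", False),
    ("I", "metadata use a formal vocabulary", False),
    ("R", "clear machine-readable licence", True),
]

per_dim = defaultdict(list)
for dim, _name, passed in results:
    per_dim[dim].append(passed)

for dim in "FAIR":
    checks = per_dim.get(dim, [])
    score = sum(checks) / len(checks) if checks else float("nan")
    print(f"{dim}: {score:.0%} of {len(checks)} indicators passed")
```
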
10

Ayoubi, Doaa. "Investigational drugs services pharmacists education and workflow structure". JCO Global Oncology 9, Supplement_1 (August 2023): 169. http://dx.doi.org/10.1200/go.2023.9.supplement_1.169.

Annotation:
169 Background: Per NIH U.S. National Library of Medicine, 428,103 research studies are registered globally as of September 19th, 2022. Each institution must assess its institutional readiness as the complexity and volume of clinical trial increases. Institutional readiness helps assess the capacity of an institution to adopt new technologies and policies to accept, activate, and adjust research despite the complexity of the clinical trial. Clinical research pharmacists help improve the safety and quality of the research by reviewing the scientific literature and medication-related information to develop protocols and evaluate clinical trial feasibility as part of the scientific review committee while performing supportive pharmaceutical review of the protocol, preparation, storage, dispensing, and consulting clinical coordinators and sponsors on the logistics of the trial. Investigational drug service (IDS) also oversees institutional compliance with Good Clinical Practices (GCPs) and Good Manufacturing Practices (GMPs), FDA regulations, and laws. Lack of training for pharmacists, poor safety and quality control, lack of resources, and unstandardized approach to the management of investigational drug products can lead to discrepancy, delay in care, and ultimately, harm to the patient. This study will address the deficiency of institutional readiness to conduct clinical trials with investigational agents and introduce policies and procedures needed to be followed by pharmacists or pharmacy technicians to conduct research. Methods: To explore the challenges in clinical trials and compare work efficiency, work productivity, and growth in the investigational drug service (IDS) pharmacy. This study is conducted at a single-health system compromised of 7 satellite investigational pharmacy from June 2021 to August 2022. Interventions include Epic/Beacon training, CITI training, annual compounding competencies, protocol and procedure development, and Vestigo implementation. Data for inclusion was identified from (where Dr. Ayoubi got data from) were extracted and analyzed. Results: 71% decrease in # of protocols on the priority list to amend protocols. Informatics pharmacists build 190 protocols in one year. Revenue has doubled in one year. Productivity increased as time to verify is reduced by 9.43 minutes to verify an order and time to dispense is reduced by 31.3 min. Number of clinical trials designated to each pharmacist increased from 3-4 to 20 productions per month. Number of investigational drugs ordered increased by 89 last year. Conclusions: Education and Training remain one of the main set back to clinical trials growth, and proper training and education are the pillars to conducting clinical trials in a safe manner and promote growth.
11

Velterop, Jan, and Erik Schultes. "An Academic Publishers’ GO FAIR Implementation Network (APIN)". Information Services & Use 40, No. 4 (January 6, 2021): 333–41. http://dx.doi.org/10.3233/isu-200102.

Annotation:
Presented here is a proposal for the academic publishing industry to get actively involved in the formulation of protocols and standards that make published scientific research material machine-readable in order to facilitate data to be findable, accessible, interoperable, and re-usable (FAIR). Given the importance of traditional journal publications in scholarly communication worldwide, active involvement of academic publishers in advancing the more routine creation and reuse of FAIR data is highly desired.
12

Siew, C. B., and A. Abdul Rahman. "3D STREAMING PROTOCOLS FOR SPATIAL DATA INFRASTRUCTURE: A BRIEF REVIEW". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W1 (September 29, 2016): 23–24. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w1-23-2016.

Annotation:
Web services utilization in Spatial Data Infrastructure (SDI) is well established and standardized by the Open Geospatial Consortium (OGC). 3D graphics rendering has been a topic of interest in both the computer science and the geospatial science communities. Different methods have been proposed and discussed in these studies for different domains and applications, and each method comes with its own advantages and trade-offs; some, for instance, proposed image-based rendering for 3D graphics. This paper discusses several techniques from past research and proposes another method inspired by these techniques, customized for 3D SDI and its data workflow use cases.
13

Alvarez-Romero, Celia, Alicia Martínez-García, A. Anil Sinaci, Mert Gencturk, Eva Méndez, Tony Hernández-Pérez, Rosa Liperoti, et al. "FAIR4Health: Findable, Accessible, Interoperable and Reusable data to foster Health Research". Open Research Europe 2 (May 31, 2022): 34. http://dx.doi.org/10.12688/openreseurope.14349.2.

Annotation:
Due to the nature of health data, its sharing and reuse for research are limited by ethical, legal and technical barriers. The FAIR4Health project facilitated and promoted the application of FAIR principles in health research data, derived from the publicly funded health research initiatives to make them Findable, Accessible, Interoperable, and Reusable (FAIR). To confirm the feasibility of the FAIR4Health solution, we performed two pathfinder case studies to carry out federated machine learning algorithms on FAIRified datasets from five health research organizations. The case studies demonstrated the potential impact of the developed FAIR4Health solution on health outcomes and social care research. Finally, we promoted the FAIRified data to share and reuse in the European Union Health Research community, defining an effective EU-wide strategy for the use of FAIR principles in health research and preparing the ground for a roadmap for health research institutions. This scientific report presents a general overview of the FAIR4Health solution: from the FAIRification workflow design to translate raw data/metadata to FAIR data/metadata in the health research domain to the FAIR4Health demonstrators’ performance.
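Federated analysis of this kind keeps patient-level data at each site and only exchanges model parameters. The sketch below shows the generic pattern (local fits followed by sample-size-weighted averaging); it illustrates federated learning in general, not the FAIR4Health implementation.

```python
# Generic sketch of one federated-averaging round: each site fits a local linear
# model on its own (never shared) data and only coefficients leave the site.
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y):
    """Ordinary least squares on one site's data; returns (coefficients, n_samples)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, len(y)

# Three hypothetical sites holding data of different sizes.
sites = []
for n in (120, 80, 200):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = 2.0 + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)
    sites.append((X, y))

updates = [local_fit(X, y) for X, y in sites]
weights = np.array([n for _, n in updates], dtype=float)
global_coef = np.average([c for c, _ in updates], axis=0, weights=weights)
print("aggregated coefficients:", global_coef)
```
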
14

Alvarez-Romero, Celia, Alicia Martínez-García, A. Anil Sinaci, Mert Gencturk, Eva Méndez, Tony Hernández-Pérez, Rosa Liperoti, et al. "FAIR4Health: Findable, Accessible, Interoperable and Reusable data to foster Health Research". Open Research Europe 2 (March 9, 2022): 34. http://dx.doi.org/10.12688/openreseurope.14349.1.

Annotation:
Due to the nature of health data, its sharing and reuse for research are limited by ethical, legal and technical barriers. The FAIR4Health project facilitated and promoted the application of FAIR principles in health research data, derived from the publicly funded health research initiatives to make them Findable, Accessible, Interoperable, and Reusable (FAIR). To confirm the feasibility of the FAIR4Health solution, we performed two pathfinder case studies to carry out federated machine learning algorithms on FAIRified datasets from five health research organizations. The case studies demonstrated the potential impact of the developed FAIR4Health solution on health outcomes and social care research. Finally, we promoted the FAIRified data to share and reuse in the European Union Health Research community, defining an effective EU-wide strategy for the use of FAIR principles in health research and preparing the ground for a roadmap for health research institutions to offer access to certified FAIR datasets. This scientific report presents a general overview of the FAIR4Health solution: from the FAIRification workflow design to translate raw data/metadata to FAIR data/metadata in the health research domain to the FAIR4Health demonstrators’ performance.
15

Griswold, Michael, James Henegan, Chad Blackshear, Ann Moore, Thomas Smith, Deric McGowan, Eleanor Simonsick, and Luigi Ferrucci. "FAIR SHARING OF THE BLSA DATA ECOSYSTEM". Innovation in Aging 6, Supplement_1 (November 1, 2022): 311. http://dx.doi.org/10.1093/geroni/igac059.1231.

Annotation:
The BLSA is an invaluable resource for the study of human aging with uniquely rich measures taken longitudinally on a continuous replenishment cohort since 1958. Tremendous interest has been expressed by the United States and international research communities in expanded access to and use of BLSA data to address emerging scientific questions. Here, we will describe recent study leadership initiatives into: (1) making the BLSA data more FAIR (Findable, Accessible, Interoperable, and Reusable), (2) growing a BLSA data ecosystem that enables scalable sharing, and (3) leveraging and developing platforms for accessing the ecosystem. Lastly, we will demonstrate current platform functions that empower researchers to find and access BLSA data, metadata, protocols, code and supporting materials to make new discoveries.
16

Guidoti, Marcus, Felipe Lorenz Simões, Tatiana Petersen Ruschel, Valdenar da Rosa Gonçalves, Carolina Sokolowicz, and Donat Agosti. "Using taxonomic treatments to assess an author’s career: the impactful Jocélia Grazia". Zootaxa 4958, No. 1 (April 14, 2021): 12–33. http://dx.doi.org/10.11646/zootaxa.4958.1.4.

Annotation:
Here we present a descriptive analysis of the bibliographic production of the world-renowned heteropterist Dr. Jocélia Grazia and comments on her taxonomic reach based on extracted taxonomic treatments. We analyzed a total of 219 published documents, including scientific papers, scientific notes, and book chapters. Additionally, we applied the Plazi workflow to extract taxonomic treatments, images, tables, treatment citations and materials citations, and references from 75 different documents in accordance with the FAIR (Findability, Accessibility, Interoperability, and Reuse) principles and made them available on the Biodiversity Literature Repository (BLR), hosted on Zenodo, and on TreatmentBank. We found that Dr. Grazia published 200 new names, including species (183) and genera (17), and 1,444 taxonomic treatments in total. From these, 104 and 581, respectively, were extracted after applying the Plazi Workflow. A total of 544 figures, 50 tables, 2,242 references, 2,107 materials citations, and 1,101 treatment citations were also extracted. In order to make her publications properly citable and accessible, we assigned DOIs (Digital Object Identifiers) for all publications that lacked this persistent identifier, including those that were not processed (88 in total), therefore enhancing the open-access share of her publications.
17

Kochev, Nikolay, Nina Jeliazkova, Vesselina Paskaleva, Gergana Tancheva, Luchesar Iliev, Peter Ritchie, and Vedrin Jeliazkov. "Your Spreadsheets Can Be FAIR: A Tool and FAIRification Workflow for the eNanoMapper Database". Nanomaterials 10, No. 10 (September 24, 2020): 1908. http://dx.doi.org/10.3390/nano10101908.

Annotation:
The field of nanoinformatics is rapidly developing and provides data driven solutions in the area of nanomaterials (NM) safety. Safe by Design approaches are encouraged and promoted through regulatory initiatives and multiple scientific projects. Experimental data is at the core of nanoinformatics processing workflows for risk assessment. The nanosafety data is predominantly recorded in Excel spreadsheet files. Although the spreadsheets are quite convenient for the experimentalists, they also pose great challenges for the consequent processing into databases due to variability of the templates used, specific details provided by each laboratory and the need for proper metadata documentation and formatting. In this paper, we present a workflow to facilitate the conversion of spreadsheets into a FAIR (Findable, Accessible, Interoperable, and Reusable) database, with the pivotal aid of the NMDataParser tool, developed to streamline the mapping of the original file layout into the eNanoMapper semantic data model. The NMDataParser is an open source Java library and application, making use of a JSON configuration to define the mapping. We describe the JSON configuration syntax and the approaches applied for parsing different spreadsheet layouts used by the nanosafety community. Examples of using the NMDataParser tool in nanoinformatics workflows are given. Challenging cases are discussed and appropriate solutions are proposed.
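To give a feel for what such a mapping configuration contains, the sketch below writes a schematic JSON configuration linking spreadsheet columns to target data-model fields. The keys and structure are illustrative; the authoritative NMDataParser syntax is defined by the tool's own documentation.

```python
# Schematic sketch of a spreadsheet-to-data-model mapping configuration.
# Keys and structure are illustrative, not the actual NMDataParser JSON syntax.
import json

mapping = {
    "template": "lab_A_cytotoxicity.xlsx",   # hypothetical spreadsheet
    "data_sheet": "Results",
    "start_row": 3,
    "columns": [
        {"column": "A", "field": "material.name"},
        {"column": "B", "field": "material.casrn"},
        {"column": "D", "field": "endpoint.value", "unit_column": "E"},
        {"column": "F", "field": "protocol.guideline"},
    ],
    "metadata": {"laboratory": "Lab A", "method": "MTS assay"},  # example values
}

with open("mapping_config.json", "w") as fh:
    json.dump(mapping, fh, indent=2)
```
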
18

Hernandez, Mikel, Gorka Epelde, Andoni Beristain, Roberto Álvarez, Cristina Molina, Xabat Larrea, Ane Alberdi, Michalis Timoleon, Panagiotis Bamidis, and Evdokimos Konstantinidis. "Incorporation of Synthetic Data Generation Techniques within a Controlled Data Processing Workflow in the Health and Wellbeing Domain". Electronics 11, No. 5 (March 4, 2022): 812. http://dx.doi.org/10.3390/electronics11050812.

Annotation:
To date, the use of synthetic data generation techniques in the health and wellbeing domain has been mainly limited to research activities. Although several open source and commercial packages have been released, they have been oriented to generating synthetic data as a standalone data preparation process and not integrated into a broader analysis or experiment testing workflow. In this context, the VITALISE project is working to harmonize Living Lab research and data capture protocols and to provide controlled processing access to captured data to industrial and scientific communities. In this paper, we present the initial design and implementation of our synthetic data generation approach in the context of VITALISE Living Lab controlled data processing workflow, together with identified challenges and future developments. By uploading data captured from Living Labs, generating synthetic data from them, developing analysis locally with synthetic data, and then executing them remotely with real data, the utility of the proposed workflow has been validated. Results have shown that the presented workflow helps accelerate research on artificial intelligence, ensuring compliance with data protection laws. The presented approach has demonstrated how the adoption of state-of-the-art synthetic data generation techniques can be applied for real-world applications.
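As a deliberately simple illustration of the "develop on synthetic, execute on real" pattern described above, the sketch below fits per-column summary statistics on a small real table and samples a synthetic stand-in. Production settings use far stronger generators; this only shows the workflow idea, and all data values are invented.

```python
# Minimal sketch: per-column resampling as a stand-in synthetic dataset.
# Real deployments use stronger generators; this only illustrates the workflow idea.
import numpy as np
import pandas as pd

def naive_synthetic(df: pd.DataFrame, n: int, seed: int = 0) -> pd.DataFrame:
    """Sample numeric columns from fitted normals, other columns from frequencies."""
    rng = np.random.default_rng(seed)
    out = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            out[col] = rng.normal(df[col].mean(), df[col].std(ddof=0), size=n)
        else:
            freqs = df[col].value_counts(normalize=True)
            out[col] = rng.choice(freqs.index.to_numpy(), size=n, p=freqs.values)
    return pd.DataFrame(out)

real = pd.DataFrame({"age": [67, 72, 80, 75],
                     "steps": [3200, 5100, 2100, 4000],
                     "group": ["control", "intervention", "control", "intervention"]})
synthetic = naive_synthetic(real, n=100)
print(synthetic.head())  # analysts develop against this ...
# ... and the validated analysis script is later executed remotely on the real data.
```
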
19

Soundarajan, Sanjay, Sachira Kuruppu, Ashutosh Singh, Jongchan Kim, and Monalisa Achalla. "SPARClink: an interactive tool to visualize the impact of the SPARC program". F1000Research 11 (January 31, 2022): 124. http://dx.doi.org/10.12688/f1000research.75071.1.

Annotation:
The National Institutes of Health (NIH) Stimulating Peripheral Activity to Relieve Conditions (SPARC) program seeks to accelerate the development of therapeutic devices that modulate electrical activity in nerves to improve organ function. SPARC-funded researchers are generating rich datasets from neuromodulation research that are curated and shared according to FAIR (Findable, Accessible, Interoperable, and Reusable) guidelines and are accessible to the public on the SPARC data portal. Keeping track of the utilization of these datasets within the larger research community is a feature that will benefit data-generating researchers in showcasing the impact of their SPARC outcomes. This will also allow the SPARC program to display the impact of the FAIR data curation and sharing practices that have been implemented. This manuscript provides the methods and outcomes of SPARClink, our web tool for visualizing the impact of SPARC, which won the Second prize at the 2021 SPARC FAIR Codeathon. With SPARClink, we built a system that automatically and continuously finds new published SPARC scientific outputs (datasets, publications, protocols) and the external resources referring to them. SPARC datasets and protocols are queried using publicly accessible REST application programming interfaces (APIs, provided by Pennsieve and Protocols.io) and stored in a publicly accessible database. Citation information for these resources is retrieved using the NIH reporter API and National Center for Biotechnology Information (NCBI) Entrez system. A novel knowledge graph-based structure was created to visualize the results of these queries and showcase the impact that the FAIR data principles can have on the research landscape when they are adopted by a consortium.
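The citation-harvesting step can be illustrated with the public NCBI E-utilities endpoint used for Entrez queries. The search term below is a placeholder, and API keys, paging and error handling are omitted.

```python
# Hedged sketch: query NCBI Entrez (E-utilities esearch) for matching PubMed records.
# The search term is a placeholder; production code should add an API key and retries.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "SPARC peripheral neuromodulation",  # placeholder query
    "retmode": "json",
    "retmax": 5,
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print("PubMed IDs:", ids)
```
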
20

Wong, Ambrose H., Jessica M. Ray, Marc A. Auerbach, Arjun K. Venkatesh, Caitlin McVaney, Danielle Burness, Christopher Chmura, et al. "Study protocol for the ACT response pilot intervention: development, implementation and evaluation of a systems-based Agitation Code Team (ACT) in the emergency department". BMJ Open 10, No. 6 (June 2020): e036982. http://dx.doi.org/10.1136/bmjopen-2020-036982.

Annotation:
Introduction: Emergency department (ED) visits for behavioural conditions are rising, with 1.7 million associated episodes of patient agitation occurring annually in acute care settings. When de-escalation techniques fail during agitation management, patients are subject to use of physical restraints and sedatives, which are associated with up to 37% risk of hypotension, apnoea and physical injuries. At the same time, ED staff report workplace violence due to physical assaults during agitation events. We recently developed a theoretical framework to characterise ED agitation, which identified teamwork as a critical component to reduce harm. Currently, no structured team response protocol for ED agitation addressing both patient and staff safety exists. Methods and analysis: Our proposed study aims to develop and implement the agitation code team (ACT) response intervention, which will consist of a standardised, structured process with defined health worker roles/responsibilities, work processes and clinical protocols. First, we will develop the ACT response intervention in a two-step design loop; conceptual design will engage users in the creation of the prototype, and iterative refinement will occur through in situ simulated agitated patient encounters in the ED to assess and improve the design. Next, we will pilot the intervention in the clinical environment and use a controlled interrupted time series design to evaluate its effect on our primary outcome of patient restraint use. The intervention will be considered efficacious if we effectively lower the rate of restraint use over a 6-month period. Ethics and dissemination: Ethical approval by the Yale University Human Investigation Committee was obtained in 2019 (HIC #2000025113). Results will be disseminated through peer-reviewed publications and presentations at scientific meetings for each phase of the study. If this pilot is successful, we plan to formally integrate the ACT response intervention into clinical workflows at all EDs within our entire health system.
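For readers unfamiliar with the design, a controlled interrupted time series is commonly analysed with segmented regression. The sketch below fits the generic model to simulated monthly restraint counts; it is illustrative only, not the study's pre-specified analysis plan.

```python
# Generic segmented-regression sketch for an interrupted time series (simulated data).
# Illustrative only; not the ACT study's pre-specified analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(1, 25)                    # 12 months pre, 12 months post
post = (months > 12).astype(int)             # 1 after the intervention starts
time_since = np.clip(months - 12, 0, None)   # months since the intervention
restraints = (30 - 0.2 * months - 5 * post - 0.8 * time_since
              + rng.normal(0, 2, months.size))

df = pd.DataFrame({"restraints": restraints, "time": months,
                   "post": post, "time_since": time_since})
model = smf.ols("restraints ~ time + post + time_since", data=df).fit()
print(model.params)  # 'post' = level change, 'time_since' = slope change
```
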
21

Gomez-Diaz, Teresa, and Tomas Recio. "Research Software vs. Research Data II: Protocols for Research Data dissemination and evaluation in the Open Science context". F1000Research 11 (October 7, 2022): 117. http://dx.doi.org/10.12688/f1000research.78459.2.

Annotation:
Background: Open Science seeks to render research outputs visible, accessible and reusable. In this context, Research Data and Research Software sharing and dissemination issues provide real challenges to the scientific community, as consequence of recent progress in political, legal and funding requirements. Methods: We take advantage from the approach we have developed in a precedent publication, in which we have highlighted the similarities between the Research Data and Research Software definitions. Results: The similarities between Research Data and Research Software definitions can be extended to propose protocols for Research Data dissemination and evaluation derived from those already proposed for Research Software dissemination and evaluation. We also analyze FAIR principles for these outputs. Conclusions: Our proposals here provide concrete instructions for Research Data and Research Software producers to make them more findable and accessible, as well as arguments to choose suitable dissemination platforms to complete the FAIR framework. Future work could analyze the potential extension of this parallelism to other kinds of research outputs that are disseminated under similar conditions to those of Research Data and Research Software, that is, without widely accepted publication procedures involving editors or other external actors and where the dissemination is usually restricted through the hands of the production team.
22

Gomez-Diaz, Teresa, and Tomas Recio. "Research Software vs. Research Data II: Protocols for Research Data dissemination and evaluation in the Open Science context". F1000Research 11 (January 28, 2022): 117. http://dx.doi.org/10.12688/f1000research.78459.1.

Annotation:
Background: Open Science seeks to render research outputs visible, accessible and reusable. In this context, Research Data and Research Software sharing and dissemination issues provide real challenges to the scientific community, as consequence of recent progress in political, legal and funding requirements. Methods: We take advantage from the approach we have developed in a precedent publication, in which we have highlighted the similarities between the Research Data and Research Software definitions. Results: The similarities between Research Data and Research Software definitions can be extended to propose protocols for Research Data dissemination and evaluation derived from those already proposed for Research Software dissemination and evaluation. We also analyze FAIR principles for these outputs. Conclusions: Our proposals here provide concrete instructions for Research Data and Research Software producers to make them more findable and accessible, as well as arguments to choose suitable dissemination platforms to complete the FAIR framework. Future work could analyze the potential extension of this parallelism to other kinds of research outputs that are disseminated under similar conditions to those of Research Data and Research Software, that is, without widely accepted publication procedures involving editors or other external actors and where the dissemination is usually restricted through the hands of the production team.
23

Furxhi, Irini, Antti Joonas Koivisto, Finbarr Murphy, Sara Trabucco, Benedetta Del Secco, and Athanasios Arvanitis. "Data Shepherding in Nanotechnology. The Exposure Field Campaign Template". Nanomaterials 11, No. 7 (July 13, 2021): 1818. http://dx.doi.org/10.3390/nano11071818.

Annotation:
In this paper, we demonstrate the realization process of a pragmatic approach on developing a template for capturing field monitoring data in nanomanufacturing processes. The template serves the fundamental principles which make data scientifically Findable, Accessible, Interoperable and Reusable (FAIR principles), as well as encouraging individuals to reuse it. In our case, the data shepherds’ (the guider of data) template creation workflow consists of the following steps: (1) Identify relevant stakeholders, (2) Distribute questionnaires to capture a general description of the data to be generated, (3) Understand the needs and requirements of each stakeholder, (4) Interactive simple communication with the stakeholders for variables/descriptors selection, and (5) Design of the template and annotation of descriptors. We provide an annotated template for capturing exposure field campaign monitoring data, and increase their interoperability, while comparing it with existing templates. This paper enables the data creators of exposure field campaign data to store data in a FAIR way and helps the scientific community, such as data shepherds, by avoiding extensive steps for template creation and by utilizing the pragmatic structure and/or the template proposed herein, in the case of a nanotechnology project (Anticipating Safety Issues at the Design of Nano Product Development, ASINA).
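Read concretely, such a template is essentially a set of annotated descriptors. The sketch below writes a tiny, illustrative version as CSV; the descriptor names and ontology URIs are invented examples, not the ASINA template itself.

```python
# Illustrative sketch: a tiny annotated descriptor table for field-campaign data.
# Descriptor names and ontology URIs are invented, not the ASINA template.
import csv

descriptors = [
    {"name": "particle_number_concentration", "unit": "1/cm3",
     "instrument": "CPC", "ontology_term": "http://purl.example.org/ENM_0000001"},
    {"name": "sampling_location", "unit": "", "instrument": "",
     "ontology_term": "http://purl.example.org/ENM_0000002"},
    {"name": "activity_phase", "unit": "", "instrument": "",
     "ontology_term": "http://purl.example.org/ENM_0000003"},
]

with open("exposure_template.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "unit", "instrument", "ontology_term"])
    writer.writeheader()
    writer.writerows(descriptors)
```
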
24

Martinou, Eirini, und Angeliki Angelidi. „The role of open research in improving the standards of evidence synthesis: current challenges and potential solutions in systematic reviews“. F1000Research 11 (05.12.2022): 1435. http://dx.doi.org/10.12688/f1000research.127179.1.

Annotation:
Systematic reviews (SRs) and meta-analyses (MAs) are the cornerstone of evidence-based medicine and are placed at the top of the level-of-evidence pyramid. To date, there are several methodological resources available from international organizations such as the Cochrane Collaboration that aim to aid researchers in conducting high-quality secondary research and promoting reproducibility, transparency and scientific rigour. Nevertheless, researchers still face challenges in most stages of evidence synthesis. Open research and the FAIR (findability, accessibility, interoperability, and reusability) principles are rising initiatives being increasingly implemented in primary research. However, their beneficial role in secondary research is less emphasized. This article addresses how the challenges commonly faced during evidence synthesis research could be overcome using open research practices and currently available open research tools. Despite the phenomenally simple SR workflow, researchers still find tasks such as framing the SR research question, search strategy development, data extraction, and assessing for bias, challenging. The implementation of FAIR practices, including prospective registration at the PROSPERO database, abiding with the PRISMA guidelines, and making all SR data openly available could have significant benefits in avoiding duplication of effort and reducing research waste while improving the reporting standards of SRs. Additionally, this article highlights the need for further education in open research culture to overcome ethical and motivational barriers in implementing open research practices in evidence synthesis. Finally, in the era of technological breakthroughs, artificial intelligence may eventually be incorporated into the process of SRs and should abide by the FAIR standards for open research.
25

Tmušić, Goran, Salvatore Manfreda, Helge Aasen, Mike R. James, Gil Gonçalves, Eyal Ben-Dor, Anna Brook et al. „Current Practices in UAS-based Environmental Monitoring“. Remote Sensing 12, Nr. 6 (20.03.2020): 1001. http://dx.doi.org/10.3390/rs12061001.

Annotation:
With the increasing role that unmanned aerial systems (UAS) are playing in data collection for environmental studies, two key challenges relate to harmonizing and providing standardized guidance for data collection, and also establishing protocols that are applicable across a broad range of environments and conditions. In this context, a network of scientists are cooperating within the framework of the Harmonious Project to develop and promote harmonized mapping strategies and disseminate operational guidance to ensure best practice for data collection and interpretation. The culmination of these efforts is summarized in the present manuscript. Through this synthesis study, we identify the many interdependencies of each step in the collection and processing chain, and outline approaches to formalize and ensure a successful workflow and product development. Given the number of environmental conditions, constraints, and variables that could possibly be explored from UAS platforms, it is impractical to provide protocols that can be applied universally under all scenarios. However, it is possible to collate and systematically order the fragmented knowledge on UAS collection and analysis to identify the best practices that can best ensure the streamlined and rigorous development of scientific products.
26

Butcher, David S., Christian J. Brigham, James Berhalter, Abigail L. Centers, William M. Hunkapiller, Timothy P. Murphy, Eric C. Palm und Julia H. Smith. „Cybersecurity in a Large-Scale Research Facility—One Institution’s Approach“. Journal of Cybersecurity and Privacy 3, Nr. 2 (16.05.2023): 191–208. http://dx.doi.org/10.3390/jcp3020011.

Annotation:
A cybersecurity approach for a large-scale user facility is presented—utilizing the National High Magnetic Field Laboratory (NHMFL) at Florida State University (FSU) as an example. The NHMFL provides access to the highest magnetic fields for scientific research teams from a range of disciplines. The unique challenges of cybersecurity at a widely accessible user facility are showcased, and relevant cybersecurity frameworks for the complex needs of a user facility with industrial-style equipment and hazards are discussed, along with the approach for risk identification and management, which determine cybersecurity requirements and priorities. Essential differences between information technology and research technology are identified, along with unique requirements and constraints. The need to plan for the introduction of new technology and manage legacy technologies with long usage lifecycles is identified in the context of implementing cybersecurity controls rooted in pragmatic decisions to avoid hindering research activities while enabling secure practices, which includes FAIR (findable, accessible, interoperable, and reusable) and open data management principles. The NHMFL’s approach to FAIR data management is presented. Critical success factors include obtaining resources to implement and maintain necessary security protocols, interdisciplinary and diverse skill sets, phased implementation, and shared allocation of NHMFL and FSU responsibilities.
27

Akgül, Seçkin, Carolin Offenhäuser, Anja Kordowski und Bryan W. Day. „Engineering Novel Lentiviral Vectors for Labelling Tumour Cells and Oncogenic Proteins“. Bioengineering 9, Nr. 3 (25.02.2022): 91. http://dx.doi.org/10.3390/bioengineering9030091.

Annotation:
Lentiviral vectors are unique and highly efficient genetic tools to incorporate genetic materials into the genome of a variety of cells whilst conserving biosafety. Their rapid acceptance made it necessary to improve existing protocols, including molecular engineering and cloning, production of purified lentiviral particles, and efficient infection of target cells. In addition to traditional protocols, which can be time-consuming, several biotechnology companies are providing scientists with commercially available lentiviral constructs and particles. However, these constructs are limited by their original form, tend to be costly, and lack the flexibility to re-engineer based on the ever-changing needs of scientific projects. Therefore, the current study organizes the existing methods and integrates them with novel ideas to establish a protocol that is simple and efficient to implement. In this study, we (i) generated an innovative site-directed nucleotide attachment/replacement and DNA insertion method using unique PCR primers, (ii) improved traditional methods by integrating plasmid clarification steps, (iii) utilized endogenous mRNA as a resource to construct new lentiviruses, and (iv) identified an existing purification method and incorporated it into an organized workflow to produce high-yield lentiviral particle collections. Finally, (v) we verified and demonstrated the functional validity of our methods using an infection strategy.
28

Stork, Lise, Andreas Weber, Eulàlia Miracle und Katherine Wolstencroft. „A Workflow for the Semantic Annotation of Field Books and Specimen Labels“. Biodiversity Information Science and Standards 2 (13.06.2018): e25839. http://dx.doi.org/10.3897/biss.2.25839.

Annotation:
Geographical and taxonomical referencing of specimens and documented species observations from within and across natural history collections is vital for ongoing species research. However, much of the historical data, such as field books, diaries and specimens, is challenging to work with. They are computationally inaccessible, refer to historical place names and taxonomies, and are written in a variety of languages. In order to address these challenges and elucidate historical species observation data, we developed a workflow to (i) crowd-source semantic annotations from handwritten species observations, (ii) transform them into RDF (Resource Description Framework) and (iii) store and link them in a knowledge base. Instead of full transcription, we directly annotate digital field book scans with key concepts that are based on Darwin Core standards. Our workflow stresses the importance of verbatim annotation. The interpretation of the historical content, such as resolving a historical taxon to a current one, can be done by individual researchers after the content is published as linked open data. Through the storage of annotation provenance (who created the annotation and when), we allow multiple interpretations of the content to exist in parallel, stimulating scientific discourse. The semantic annotation process is supported by a web application, the Semantic Field Book (SFB)-Annotator, driven by an application ontology. The ontology formally describes the content and meta-data required to semantically annotate species observations. It is based on the Darwin Core standard (DwC), Uberon and the Geonames ontology. The provenance of annotations is stored using the Web Annotation Data Model. Adhering to the principles of FAIR (Findable, Accessible, Interoperable & Reusable) and Linked Open Data, the content of the specimen collections can be interpreted homogeneously and aggregated across datasets. This work is part of the Making Sense project: makingsenseproject.org. The project aims to disclose the content of a natural history collection: a 17,000-page account of the exploration of the Indonesian Archipelago between 1820 and 1850 (Natuurkundige Commissie voor Nederlands-Indie). With a knowledge base, researchers are given easy access to the primary sources of natural history collections. For their research, they can aggregate species observations, construct rich queries to browse through the data and add their own interpretations regarding the meaning of the historical content.
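As a rough illustration of the annotation-to-RDF step described above, the following sketch uses the rdflib library with Darwin Core terms and simple PROV provenance; the identifiers, names and values are invented, and the project's actual application ontology is not reproduced here.

# Minimal sketch, assuming rdflib is installed: one verbatim field-book annotation expressed
# as RDF with Darwin Core terms plus basic provenance; identifiers and values are invented.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, XSD

DWC  = Namespace("http://rs.tdwg.org/dwc/terms/")
PROV = Namespace("http://www.w3.org/ns/prov#")
EX   = Namespace("http://example.org/annotations/")

g = Graph()
g.bind("dwc", DWC)
g.bind("prov", PROV)

obs = EX["obs-0001"]
g.add((obs, RDF.type, DWC.Occurrence))
g.add((obs, DWC.verbatimLocality, Literal("Buitenzorg")))            # verbatim, uninterpreted text
g.add((obs, DWC.verbatimEventDate, Literal("12 Mei 1827")))
g.add((obs, DWC.scientificName, Literal("Semnopithecus maurus")))     # as written in the field book
g.add((obs, PROV.wasAttributedTo, URIRef("https://orcid.org/0000-0000-0000-0000")))  # placeholder annotator
g.add((obs, PROV.generatedAtTime, Literal("2018-06-13T10:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))

Keeping the verbatim strings and the provenance triples side by side is what lets later, possibly conflicting, interpretations coexist in the knowledge base.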
29

Skov, Flemming. „Science maps for exploration, navigation, and reflection—A graphic approach to strategic thinking“. PLOS ONE 16, Nr. 12 (31.12.2021): e0262081. http://dx.doi.org/10.1371/journal.pone.0262081.

Annotation:
The world of science is growing at an unprecedented speed with more and more scholarly papers produced each year. The scientific landscape is constantly changing as research specialties evolve, merge or become obsolete. It is difficult for researchers, research managers and the public alike to keep abreast of these changes and maintain a true and fair overview of the world of science. Such an overview is necessary to stimulate scientific progress, to maintain flexible and responsive research organizations, and to secure collaboration and knowledge exchange between different research specialties and the wider community. Although science mapping is applied to a wide range of scientific areas, examples of its practical use are sparse. This paper demonstrates how to use topical scientific reference maps to understand and navigate dynamic research landscapes and how to utilize science maps to facilitate strategic thinking. In this study, the research domain of biology at Aarhus University serves as an example. All scientific papers authored by the current, permanent staff were extracted (6,830 in total). These papers were used to create a semantic cognitive map of the research field using a co-word analysis based on keywords and keyword phrases. A workflow was written in Python for easy and fast retrieval of information for topic maps (including tokens from the keywords section and title), to generate intelligible research maps, and to visualize the distribution of topics (keywords), papers, journal categories, individual researchers and research groups on any scale. The resulting projections revealed new insights into the structure of the research community and made it possible to compare researchers or research groups to describe differences and similarities, to find scientific overlaps or gaps, and to understand how they relate and connect. Science mapping can be used for intended (top-down) as well as emergent (bottom-up) strategy development. The paper concludes that science maps provide alternative views of the intricate structures of science to supplement traditional bibliometric information. These insights may help strengthen strategic thinking and boost creativity and thus contribute to the progress of science.
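The co-word step behind such maps can be illustrated with a short Python sketch that counts keyword co-occurrences across papers; the toy keyword sets below are invented, and the paper's actual workflow and layout algorithm are not reproduced.

# Minimal co-word sketch: count keyword co-occurrences across papers; toy data only.
from collections import Counter
from itertools import combinations

papers = [
    {"climate change", "remote sensing", "phenology"},
    {"remote sensing", "vegetation", "phenology"},
    {"population genetics", "phenology", "climate change"},
]

co_occurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(keywords), 2):   # each unordered keyword pair once per paper
        co_occurrence[(a, b)] += 1

# The strongest pairs become the edges of the keyword map; node positions would then come
# from a separate layout or dimensionality-reduction step (e.g. a force-directed layout).
for (a, b), weight in co_occurrence.most_common(5):
    print(f"{a} -- {b}: {weight}")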
30

Malapelle, Umberto, Francesco Pepe, Pasquale Pisapia, Roberta Sgariglia, Mariantonia Nacchio, Massimo Barberis, Michel Bilh et al. „TargetPlex FFPE-Direct DNA Library Preparation Kit for SiRe NGS panel: an international performance evaluation study“. Journal of Clinical Pathology 75, Nr. 6 (25.03.2021): 416–21. http://dx.doi.org/10.1136/jclinpath-2021-207450.

Annotation:
Aim: Next generation sequencing (NGS) represents a key diagnostic tool to identify clinically relevant gene alterations for treatment-decision making in cancer care. However, the complex manual workflow required for NGS has limited its implementation in routine clinical practice. In this worldwide study, we validated the clinical performance of the TargetPlex FFPE-Direct DNA Library Preparation Kit for NGS analysis. Impressively, this new assay obviates the need for the separate, labour-intensive and time-consuming pre-analytical steps of DNA extraction, purification and isolation from formalin-fixed paraffin-embedded (FFPE) specimens in the NGS workflow. Methods: The TargetPlex FFPE-Direct DNA Library Preparation Kit, which enables NGS analysis directly from FFPE, was specifically developed for this study by TargetPlex Genomics, Pleasanton, California. Eleven institutions agreed to take part in the study, coordinated by the Molecular Cytopathology Meeting Group (University of Naples Federico II, Naples, Italy). All participating institutions received a specific Library Preparation Kit to test eight FFPE samples previously assessed with standard protocols. The analytical parameters and mutations detected in each sample were then compared with those previously obtained with standard protocols. Results: Overall, 92.8% of the samples were successfully analysed with the TargetPlex FFPE-Direct DNA Library Preparation Kit on Thermo Fisher Scientific and Illumina platforms. Altogether, in comparison with the standard workflow, the TargetPlex FFPE-Direct DNA Library Preparation Kit was able to detect 90.5% of the variants. Conclusion: The TargetPlex FFPE-Direct DNA Library Preparation Kit combined with the SiRe panel constitutes a convenient, practical and robust cost-saving solution for FFPE NGS analysis in routine practice.
31

Lasorella, M., und E. Cantatore. „3D MODELS CITYGML-BASED COMBINED WITH TECHNICAL DECISION SUPPORT SYSTEM FOR THE SETTING UP OF DIGITAL CONSERVATION PLANS OF HISTORIC DISTRICTS“. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-M-2-2023 (24.06.2023): 911–18. http://dx.doi.org/10.5194/isprs-archives-xlviii-m-2-2023-911-2023.

Annotation:
Abstract. The setting up of recovery plans for historic districts requires a multi-level and multi-thematic process for their analysis and diagnosis to determine classes of priorities and interventions for buildings at the district scale of relevance. Traditional tools and protocols have already revealed operational complexity and costly activities, affecting the coherence and effectiveness of data interpretation. On the other hand, recent scientific and practical activities based on the use of parametric Digital Models and Informative Systems have highlighted their advantages in standardizing complex issues and knowledge. Recent work by the authors has defined the structured organization of technical knowledge for the creation of a digital recovery plan using Informative Parametric Models, based on descriptors, and primary and secondary factors. These aim at converting properties and information into qualitative and quantitative data, and then structuring dependencies on descriptors and primary factors, according to thematic taxonomies, existing ontologies for the geometric and semantic representation of urban and architectural entities, thematic standards, and established approaches for the recovery of cultural and landscape heritage. Thus, the present work shows a workflow for the semi-automatic setting up of intervention classes for architectures in historic districts in Italy. It is structured on CityGML-based models, coherently implemented with a Technical Decision-Support System (T-DSS). Specifically, the T-DSS is determined considering the relations among the thematic standards UNI 11182, UNI/CEN TS 17385:2019, and the Italian Consolidated Law on Building. The workflow is finally tested in the historic district of Ascoli Satriano, in the Apulia Region (South of Italy).
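To illustrate the decision-support idea in this abstract, here is a hypothetical, heavily simplified Python sketch that maps per-building condition descriptors to an intervention class; the descriptor names, thresholds and class labels are assumptions for illustration and do not reproduce the published T-DSS rules or the cited standards.

# Hypothetical rule-based step of a technical decision-support system: map simple,
# invented condition descriptors to a coarse intervention class.
def intervention_class(building: dict) -> str:
    """Return a coarse intervention class from simple condition descriptors."""
    if building["structural_damage"] == "severe" or building["decay_extent"] > 0.5:
        return "restoration/consolidation"
    if building["decay_extent"] > 0.2 or not building["installations_compliant"]:
        return "rehabilitation"
    return "ordinary maintenance"

buildings = [
    {"id": "A12", "structural_damage": "none",   "decay_extent": 0.05, "installations_compliant": True},
    {"id": "B03", "structural_damage": "light",  "decay_extent": 0.35, "installations_compliant": False},
    {"id": "C27", "structural_damage": "severe", "decay_extent": 0.60, "installations_compliant": False},
]
for b in buildings:
    print(b["id"], "->", intervention_class(b))

In a CityGML-based setting, the descriptors would be read from the semantic attributes of each building object rather than hard-coded as above.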
32

Silaigwana, Blessing, und Douglas Wassenaar. „Research Ethics Committees’ Oversight of Biomedical Research in South Africa: A Thematic Analysis of Ethical Issues Raised During Ethics Review of Non-Expedited Protocols“. Journal of Empirical Research on Human Research Ethics 14, Nr. 2 (24.01.2019): 107–16. http://dx.doi.org/10.1177/1556264618824921.

Annotation:
In South Africa, biomedical research cannot commence until it has been reviewed and approved by a local research ethics committee (REC). There remains a dearth of empirical data on the nature and frequency of ethical issues raised by such committees. This study sought to identify ethical concerns typically raised by two South African RECs. Meeting minutes for 180 protocols reviewed between 2009 and 2014 were coded and analyzed using a preexisting framework. Results showed that the most frequent queries involved informed consent, respect for participants, and scientific validity. Interestingly, administrative issues (non-ethical) such as missing researchers’ CVs and financial contracts emerged more frequently than ethical questions such as favorable risk/benefit ratio and fair participant selection. Although not generalizable to all RECs, our data provide insights into two South African RECs’ review concerns. More education and awareness of the actual ethical issues typically raised by such committees might help improve review outcomes and relationships between researchers and RECs.
33

Bénichou, Laurence, Marcus Guidoti, Isabelle Gérard, Donat Agosti, Tony Robillard und Fabio Cianferoni. „European Journal of Taxonomy: a deeper look into a decade of data“. European Journal of Taxonomy 782 (17.12.2021): 173–96. http://dx.doi.org/10.5852/ejt.2021.782.1597.

Annotation:
The European Journal of Taxonomy (EJT) is a decade-old journal dedicated to the taxonomy of living and fossil eukaryotes. Launched in 2011, the EJT published exactly 900 articles (31 778 pages) from 2011 to 2021. The journal has been processed in its entirety by Plazi, liberating the data therein, depositing it into TreatmentBank, Biodiversity Literature Repository and disseminating it to partners, including the Global Biodiversity Information Facility (GBIF) using a combination of a highly automated workflow, quality control tools, and human curation. The dissemination of original research along with the ability to use and reuse data as freely as possible is the key to innovation, opening the corpus of known published biodiversity knowledge, and furthering advances in science. This paper aims to discuss the advantages and limitations of retro-conversion and to showcase the potential analyses of the data published in EJT and made findable, accessible, interoperable and reusable (FAIR) by Plazi. Among others, taxonomic and geographic coverage, geographical distribution of authors, citation of previous works and treatments, timespan between the publication and treatments with their cited works are discussed. Manually counted data were compared with the automated process, the latter being analysed and discussed. Creating FAIR data from a publication results in an average multiplication factor of 166 for additional access through the taxonomic treatments, figures and material citations citing the original publication in TreatmentBank, the Biodiversity Literature Repository and the Global Biodiversity Information Facility. Despite the advances in processing, liberating data remains cumbersome and has its limitations which lead us to conclude that the future of scientific publishing involves semantically enhanced publications.
34

Schmelter, Carsten, Sebastian Funke, Jana Treml, Anja Beschnitt, Natarajan Perumal, Caroline Manicam, Norbert Pfeiffer und Franz Grus. „Comparison of Two Solid-Phase Extraction (SPE) Methods for the Identification and Quantification of Porcine Retinal Protein Markers by LC-MS/MS“. International Journal of Molecular Sciences 19, Nr. 12 (03.12.2018): 3847. http://dx.doi.org/10.3390/ijms19123847.

Annotation:
Proper sample preparation protocols represent a critical step for liquid chromatography-mass spectrometry (LC-MS)-based proteomic study designs and influence the speed, performance and automation of high-throughput data acquisition. The main objective of this study was to compare two commercial solid-phase extraction (SPE)-based sample preparation protocols (comprising SOLAµTM HRP SPE spin plates from Thermo Fisher Scientific and ZIPTIP® C18 pipette tips from Merck Millipore) for analytical performance, reproducibility, and analysis speed. The house swine represents a promising animal model for studying human eye diseases including glaucoma and provides excellent requirements for the qualitative and quantitative MS-based comparison in terms of ocular proteomics. In total six technical replicates of two protein fractions [extracted with 0.1% dodecyl-ß-maltoside (DDM) or 1% trifluoroacetic acid (TFA)] of porcine retinal tissues were subjected to in-gel trypsin digestion and purified with both SPE-based workflows (N = 3) prior to LC-MS analysis. On average, 550 ± 70 proteins (1512 ± 199 peptides) and 305 ± 48 proteins (806 ± 144 peptides) were identified from DDM and TFA protein fractions, respectively, after ZIPTIP® C18 purification, and SOLAµTM workflow resulted in the detection of 513 ± 55 proteins (1347 ± 180 peptides) and 300 ± 33 proteins (722 ± 87 peptides), respectively (FDR < 1%). Venn diagram analysis revealed an average overlap of 65 ± 2% (DDM fraction) and 69 ± 4% (TFA fraction) in protein identifications between both SPE-based methods. Quantitative analysis of 25 glaucoma-related protein markers also showed no significant differences (P > 0.05) regarding protein recovery between both SPE methods. However, only glaucoma-associated marker MECP2 showed a significant (P = 0.02) higher abundance in ZIPTIP®-purified replicates in comparison to SOLAµTM-treated study samples. Nevertheless, this result was not confirmed in the verification experiment using in-gel trypsin digestion of recombinant MECP2 (P = 0.24). In conclusion, both SPE-based purification methods worked equally well in terms of analytical performance and reproducibility, whereas the analysis speed and the semi-automation of the SOLAµTM spin plates workflow is much more convenient in comparison to the ZIPTIP® C18 method.
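As a small worked example of the overlap statistics reported above, the sketch below computes the percentage of protein identifications shared by two sample-preparation workflows from toy ID sets; the protein accessions are invented, and the exact denominator used in the paper's Venn analysis is an assumption here.

# Toy overlap calculation between two sets of protein identifications.
ziptip = {"P12345", "P23456", "P34567", "P45678", "P56789"}
sola   = {"P12345", "P23456", "P34567", "P99999"}

shared = ziptip & sola
overlap_vs_ziptip = 100 * len(shared) / len(ziptip)   # shared IDs relative to the ZIPTIP list
overlap_vs_sola   = 100 * len(shared) / len(sola)     # shared IDs relative to the SOLAu list
jaccard           = 100 * len(shared) / len(ziptip | sola)   # shared IDs relative to the union

print(f"shared IDs: {len(shared)}")
print(f"overlap relative to ZIPTIP: {overlap_vs_ziptip:.1f}%")
print(f"overlap relative to SOLAu:  {overlap_vs_sola:.1f}%")
print(f"overlap relative to the union (Jaccard): {jaccard:.1f}%")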
35

Rahma, Azhar T., Mahanna Elsheik, Bassam R. Ali, Iffat Elbarazi, George P. Patrinos, Luai A. Ahmed und Fatma Al Maskari. „Knowledge, Attitudes, and Perceived Barriers toward Genetic Testing and Pharmacogenomics among Healthcare Workers in the United Arab Emirates: A Cross-Sectional Study“. Journal of Personalized Medicine 10, Nr. 4 (09.11.2020): 216. http://dx.doi.org/10.3390/jpm10040216.

Annotation:
In order to successfully translate the scientific models of genetic testing and pharmacogenomics into clinical practice, empowering healthcare workers with the right knowledge and a functional understanding of the subject is essential. Limited research in the United Arab Emirates (UAE) has assessed healthcare workers' stances towards genomics. This study aimed to assess healthcare workers' knowledge of and attitudes towards genetic testing. A cross-sectional study was conducted among healthcare workers practicing in either public or private hospitals or clinics as pharmacists, nurses, physicians, managers, and allied health professionals. Participants were recruited randomly and via snowball techniques. Surveys were collected between April and September 2019; of the 552 respondents, 63.4% were female, and the mean age was 38 (±9.6) years. The mean knowledge score was 5.2 (±2.3) out of nine, which indicates a fair level of knowledge. The scores of respondents from pharmacy were 5.1 (±2.5), from medicine 6.0 (±2.0), and from nursing 4.8 (±2.1). All participants exhibited a fair level of knowledge about genetic testing and pharmacogenomics. Of the respondents, 91.9% showed a positive attitude regarding the availability of genetic testing. The top identified barrier to implementation was the cost of testing (62%), followed by lack of training or education and insurance coverage (57.8% and 57.2%, respectively). Building upon the positive attitudes and tackling the barriers and challenges will pave the road for full implementation of genetic testing and pharmacogenomics in the UAE. We recommend empowering healthcare workers by improving needed and tailored competencies related to their area of practice. We strongly urge stakeholders to streamline and benchmark workflows, algorithms, and guidelines to standardize health and electronic systems. Lastly, we advocate utilizing technology and electronic decision support, as well as translational reporting, to support healthcare workers in the UAE.
36

Lavarello, Chiara, Sebastiano Barco, Martina Bartolucci, Isabella Panfoli, Emanuele Magi, Gino Tripodi, Andrea Petretto und Giuliana Cangemi. „Development of an Accurate Mass Retention Time Database for Untargeted Metabolomic Analysis and Its Application to Plasma and Urine Pediatric Samples“. Molecules 26, Nr. 14 (13.07.2021): 4256. http://dx.doi.org/10.3390/molecules26144256.

Annotation:
Liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is currently the method of choice for untargeted metabolomic analysis. The availability of established protocols to achieve a high confidence identification of metabolites is crucial. The aim of this work is to describe the workflow that we have applied to build an Accurate Mass Retention Time (AMRT) database using a commercial metabolite library of standards. LC-HRMS analysis was carried out using a Vanquish Horizon UHPLC system coupled to a Q-Exactive Plus Hybrid Quadrupole-Orbitrap Mass Spectrometer (Thermo Fisher Scientific, Milan, Italy). The fragmentation spectra, obtained with 12 collision energies, were acquired for each metabolite, in both polarities, through flow injection analysis. Several chromatographic conditions were tested to obtain a protocol that yielded stable retention times. The adopted chromatographic protocol included a gradient separation using a reversed phase (Waters Acquity BEH C18) and a HILIC (Waters Acquity BEH Amide) column. An AMRT database of 518 compounds was obtained and tested on real plasma and urine samples analyzed in data-dependent acquisition mode. Our AMRT library allowed a level 1 identification, according to the Metabolomics Standards Initiative, of 132 and 124 metabolites in human pediatric plasma and urine samples, respectively. This library represents a starting point for future metabolomic studies in pediatric settings.
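To make the role of such an AMRT database concrete, here is a minimal Python sketch of the matching step: a measured feature is compared against library entries within a ppm mass tolerance and a retention-time window. The library entries, tolerances and example values are illustrative, not taken from the published database.

# Minimal Accurate Mass Retention Time (AMRT) lookup: match a measured (m/z, RT) feature
# against library entries within a ppm tolerance and an RT window; toy library entries.
LIBRARY = [
    {"name": "caffeine",      "mz": 195.0877, "rt_min": 5.8},
    {"name": "phenylalanine", "mz": 166.0863, "rt_min": 2.1},
    {"name": "creatinine",    "mz": 114.0662, "rt_min": 1.2},
]

def match_feature(mz: float, rt_min: float, ppm_tol: float = 5.0, rt_tol_min: float = 0.3):
    """Return library entries whose m/z and retention time both fall inside the tolerances."""
    hits = []
    for entry in LIBRARY:
        ppm_error = abs(mz - entry["mz"]) / entry["mz"] * 1e6
        if ppm_error <= ppm_tol and abs(rt_min - entry["rt_min"]) <= rt_tol_min:
            hits.append((entry["name"], round(ppm_error, 2)))
    return hits

print(match_feature(195.0880, 5.9))   # matches the caffeine entry within ~1.5 ppm

In practice the fragmentation spectra acquired for each standard provide the additional evidence needed for a level 1 identification; the mass/RT filter above is only the first step.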
37

Goracci, Cecilia, Jovana Juloski, Claudio D’Amico, Dario Balestra, Alessandra Volpe, Jelena Juloski und Alessandro Vichi. „Clinically Relevant Properties of 3D Printable Materials for Intraoral Use in Orthodontics: A Critical Review of the Literature“. Materials 16, Nr. 6 (08.03.2023): 2166. http://dx.doi.org/10.3390/ma16062166.

Annotation:
The review aimed at analyzing the evidence available on 3D printable materials and techniques used for the fabrication of orthodontic appliances, focusing on materials properties that are clinically relevant. MEDLINE/PubMed, Scopus, and Cochrane Library databases were searched. Starting from an initial retrieval of 669 citations, 47 articles were finally included in the qualitative review. Several articles presented proof-of-concept clinical cases describing the digital workflow to manufacture a variety of appliances. Clinical studies other than these case reports are not available. The fabrication of aligners is the most investigated application of 3D printing in orthodontics, and, among materials, Dental LT Clear Resin (Formlabs) has been tested in several studies, although Tera Harz TC-85 (Graphy) is currently the only material specifically marketed for direct printing of aligners. Tests of the mechanical properties of aligners materials lacked homogeneity in the protocols, while biocompatibility tests failed to assess the influence of intraoral conditions on eluents release. The aesthetic properties of 3D-printed appliances are largely unexplored. The evidence on 3D-printed metallic appliances is also limited. The scientific evidence on 3D printable orthodontic materials and techniques should be strengthened by defining international standards for laboratory testing and by starting the necessary clinical trials.
38

Hedlund, Nancy, Idal Beer, Torsten Hoppe-Tichy und Patricia Trbovich. „Systematic evidence review of rates and burden of harm of intravenous admixture drug preparation errors in healthcare settings“. BMJ Open 7, Nr. 12 (Dezember 2017): e015912. http://dx.doi.org/10.1136/bmjopen-2017-015912.

Annotation:
Objective: To examine published evidence on intravenous admixture preparation errors (IAPEs) in healthcare settings. Methods: Searches were conducted in three electronic databases (January 2005 to April 2017). Publications reporting rates of IAPEs and error types were reviewed and categorised into the following groups: component errors, dose/calculation errors, aseptic technique errors and composite errors. The methodological rigour of each study was assessed using the Hawker method. Results: Of the 34 articles that met inclusion criteria, 28 reported the site of IAPEs: central pharmacies (n=8), nursing wards (n=14), both settings (n=4) and other sites (n=3). Using the Hawker criteria, 14% of the articles were of good quality, 74% were of fair quality and 12% were of poor quality. Error types and reported rates varied substantially, including wrong drug (~0% to 4.7%), wrong diluent solution (0% to 49.0%), wrong label (0% to 99.0%), wrong dose (0% to 32.6%), wrong concentration (0.3% to 88.6%), wrong diluent volume (0.06% to 49.0%) and inadequate aseptic technique (0% to 92.7%). Four studies directly compared incidence by preparation site and/or method, finding error incidence to be lower for doses prepared within a central pharmacy versus the nursing ward and lower for automated preparation versus manual preparation. Although eight studies (24%) reported ≥1 errors with the potential to cause patient harm, no study directly linked IAPE occurrences to specific adverse patient outcomes. Conclusions: The available data suggest a need to continue to optimise the intravenous preparation process, focus on improving preparation workflow, design and implement preventive strategies, train staff on optimal admixture protocols and implement standardisation. Future research should focus on the development of consistent error subtype definitions, standardised reporting methodology and reliable, reproducible methods to track and link risk factors with the burden of harm associated with these errors.
39

Argentati, Chiara, Ilaria Tortorella, Martina Bazzucchi, Francesco Morena und Sabata Martino. „Harnessing the Potential of Stem Cells for Disease Modeling: Progress and Promises“. Journal of Personalized Medicine 10, Nr. 1 (06.02.2020): 8. http://dx.doi.org/10.3390/jpm10010008.

Annotation:
Ex vivo cell/tissue-based models are an essential step in the workflow of pathophysiology studies, assay development, disease modeling, drug discovery, and development of personalized therapeutic strategies. For these purposes, both scientific and pharmaceutical research have adopted ex vivo stem cell models because of their better predictive power. In fact, advances in isolation and in vitro expansion protocols for culturing autologous human stem cells, together with the standardization of methods for generating patient-derived induced pluripotent stem cells, have made it feasible to generate and investigate human cellular disease models with even greater speed and efficiency. Furthermore, the potential of stem cells for generating more complex systems, such as scaffold-cell models, organoids, or organs-on-a-chip, has made it possible to overcome the limitations of two-dimensional culture systems and to better mimic tissue structures and functions. Finally, the advent of genome-editing/gene therapy technologies has had a great impact on the generation of more proficient stem cell disease models and on establishing effective therapeutic treatments. In this review, we discuss important breakthroughs in stem cell-based models, highlighting current directions, advantages, and limitations, and point out the need to combine experimental biology with computational tools able to describe complex biological systems and deliver results or predictions in the context of personalized medicine.
40

Schreiner, L. J. „3D2: Three Decades of Three-Dimensional Dosimetry“. Journal of Physics: Conference Series 2630, Nr. 1 (01.11.2023): 012001. http://dx.doi.org/10.1088/1742-6596/2630/1/012001.

Annotation:
Abstract The development of three-dimensional (3D) dosimetry was motivated by its promise as an effective methodology for the validation of the complex dose distributions achieved by modern techniques such as Intensity Modulated and Volumetric Arc Radiation Therapy. 3D techniques were first proposed in the 1980s when clinics were just starting to move from two-dimensional contour plan based delivery to more conformal techniques. Advances in dosimeter materials, readout systems, and workflow and software systems for the registration and analysis of the volumetric dose data have made 3D dosimetry more attainable, yet to date it has not made major inroads into the clinic. This keynote address will highlight some 3D dosimetry developments over the years, many of which were first revealed through the past twelve International Conferences on 3D and Advanced Radiation Dosimetry (IC3Ddose). These conferences resulted in the publication of more than 130 didactic review articles and over 650 proffered research papers, the majority openly available on the internet (years before the current drive to publishing in open access journals). In this joint keynote address to the IC3Ddose community (at the end of its conference) and to attendees of the Canadian Organization of Medical Physicists annual scientific meeting (at its start), I will briefly review the radiation sensitive materials used for 3D dosimetry, the imaging systems required to read out the volumetric dose information, the workflows and systems required for efficient analysis, and the protocols required for reproducible dosimetry, and how the dosimetry has come into the clinic. The address includes some personal reflections of the motivational and practical changes in 3D dosimetry over time. And as we are all meeting in person for the first time in over two years, the address will end with some observations on the importance of conferences for the exchange of ideas and associated debate necessary for scientific advancement.
41

François, Paul, Jeffrey Leichman, Florent Laroche und Françoise Rubellin. „Virtual reality as a versatile tool for research, dissemination and mediation in the humanities“. Virtual Archaeology Review 12, Nr. 25 (14.07.2021): 1. http://dx.doi.org/10.4995/var.2021.14880.

Annotation:
<p class="VARAbstract">The VESPACE project aims to revive an evening of theatre at the <em>Foire Saint-Germain</em> in Paris in the 18<sup>th</sup> century, by recreating spaces, atmospheres and theatrical entertainment in virtual reality. The venues of this fair have disappeared without leaving any archaeological traces, so their digital reconstruction requires the use of many different sources, including the expertise of historians, historians of theatre and literature. In this article, we present how we have used video game creation tools to enable the use of virtual reality in three key stages of research in the human sciences and particularly in history or archaeology: preliminary research, scientific dissemination and mediation with the general public. In particular, we detail the methodology used to design a three-dimensional (3D) model that is suitable for both research and virtual reality visualization, meets the standards of scientific work regarding precision and accuracy, and the requirements of a real-time display. This model becomes an environment in which experts can be immersed within their fields of research and expertise, and thus extract knowledge reinforcing the model created –through comments, serendipity and new perspectives– while enabling a multidisciplinary workflow. We also present our tool for annotating and consulting sources, relationships and hypotheses in immersion, called PROUVÉ. This tool is designed to make the virtual reality experience go beyond a simple image and to convey scientific information and theories in the same way an article or a monograph does. Finally, this article offers preliminary feedback on the use of our solutions with three target audiences: the researchers from our team, the broader theatre expert community and the general public.</p><p class="VARAbstract">Highlights:</p><p>• Immersive Virtual Reality is used to enhance the digital reconstruction of an 18th-century theatre, by allowing experts to dive into their research topic.</p><p>• Virtual Reality (VR) can also be used to disseminate the digital model through the scientific community and beyond while giving access to all kinds of sources that were used to build it.</p><p>• A quick survey shows that VR is a powerful tool to share theories and interpretations related to archaeological or historical tri-dimensional data.</p>
42

Gilissen, Valentijng, und Hella Hollander. „Archiving the Past While Keeping up with the Times“. Studies in Digital Heritage 1, Nr. 2 (14.12.2017): 194–205. http://dx.doi.org/10.14434/sdh.v1i2.23238.

Annotation:
The e-depot for Dutch archaeology started as a project at Data Archiving and Networked Services (DANS) in 2004 and developed into a successful service, which has ever since been part of the national archaeological data workflow of the Netherlands. While the archive continuously processes archaeological datasets and publications and develops expertise regarding data preservation, various developments are taking place in the data landscape, and direct involvement is necessary to ensure that the needs of the designated community are best met. Standard protocols must be defined for the processing of data with the best guarantees for long-term preservation and accessibility. Monitoring the actual use of file formats and the use of their significant characteristics within specific scientific disciplines is needed to keep strategies up to date. National developments include the definition of a national metadata exchange protocol, its accommodation in the DANS EASY self-deposit archive and its role in the central channelling of information submission. In an international context, projects such as ARIADNE and PARTHENOS enable further developments regarding data preservation and dissemination. The opportunities provided by such international projects enriched the data by improving options for data reuse, including allowing for the implementation of a map-based search facility on DANS EASY. The projects also provide a platform for sharing expertise via international collaboration. This paper will detail the positioning of the data archive in the research data cycle and show examples of the data enrichment enabled by collaboration within international projects.
43

Terziyski, Atanas, Stoyan Tenev, Vedrin Jeliazkov, Nina Jeliazkova und Nikolay Kochev. „METER.AC: Live Open Access Atmospheric Monitoring Data for Bulgaria with High Spatiotemporal Resolution“. Data 5, Nr. 2 (08.04.2020): 36. http://dx.doi.org/10.3390/data5020036.

Annotation:
Detailed atmospheric monitoring data are notoriously difficult to obtain for some geographic regions, while they are of paramount importance in scientific research, forecasting, emergency response, policy making, etc. We describe a continuously updated dataset, METER.AC, consisting of raw measurements of atmospheric pressure, temperature, relative humidity, particulate matter, and background radiation in about 100 locations in Bulgaria, as well as some derived values such as sea-level atmospheric pressure, dew/frost point, and hourly trends. The measurements are performed by low-power maintenance-free nodes with common hardware and software, which are specifically designed and optimized for this purpose. The time resolution of the measurements is 5 min. The short-term aim is to deploy at least one node per 100 km2, while uniformly covering altitudes between 0 and 3000 m asl with a special emphasis on remote mountainous areas. A full history of all raw measurements (non-aggregated in time and space) is publicly available, starting from September 2018. We describe the basic technical characteristics of our in-house developed equipment, data organization, and communication protocols as well as present some use case examples. The METER.AC network relies on the paradigm of the Internet of Things (IoT), by collecting data from various gauges. A guiding principle in this work is the provision of findable, accessible, interoperable, and reusable (FAIR) data. The dataset is in the public domain, and it provides resources and tools enabling citizen science development in the context of sustainable development.
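The derived quantities mentioned above (dew point and sea-level pressure) can be computed from the raw node measurements with standard approximations; the following sketch uses the Magnus formula for the dew point and the usual barometric reduction for pressure, and is not the project's own code.

# Derived atmospheric values from raw node measurements, using standard approximations.
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Magnus approximation of the dew point in degrees Celsius."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def sea_level_pressure_hpa(pressure_hpa: float, altitude_m: float, temp_c: float) -> float:
    """Reduce station pressure to sea level with the standard barometric formula."""
    return pressure_hpa * (1 - 0.0065 * altitude_m / (temp_c + 0.0065 * altitude_m + 273.15)) ** -5.257

# Example: a node at 1200 m asl reporting 19.5 degC, 64% RH and 879.4 hPa
print(round(dew_point_c(19.5, 64.0), 1), "degC dew point")
print(round(sea_level_pressure_hpa(879.4, 1200.0, 19.5), 1), "hPa reduced to sea level")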
44

Zhang, Qian, Heidi Imker, Chunyan Li, Bertram Ludäscher und Megan Senseney. „Using a Computational Study of Hydrodynamics in the Wax Lake Delta to Examine Data Sharing Principles“. International Journal of Digital Curation 11, Nr. 2 (04.07.2017): 138–55. http://dx.doi.org/10.2218/ijdc.v11i2.433.

Annotation:
In this paper we describe a complex dataset used to study the circulation and wind-driven flows in the Wax Lake Delta, Louisiana, USA under winter storm conditions. The whole package bundles a large dataset (approximately 74 GB), which includes the numerical model, software and scripts for data analysis and visualization, as well as detailed documentation. The raw data came from multiple external sources, including government agencies, community repositories, and deployed field instruments and surveys. Each raw dataset goes through the processes of data QA/QC, data analysis, visualization, and interpretation. After integrating multiple datasets, new data products are obtained which are then used with the numerical model. The numerical model undergoes model verification, testing, calibration, and optimization. With a complex algorithm of computation, the model generates a structured output dataset, which is, after post-data analysis, presented as informative scientific figures and tables that allow interpretations and conclusions contributing to the science of coastal physical oceanography. Performing this study required a tremendous amount of effort. While the work resulted in traditional dissemination via a thesis, journal articles and conference proceedings, more can be gained. The data can be reused to study reproducibility or as preliminary investigation to explore a new topic. With thorough documentation and well-organized data, both the input and output dataset should be ready for sharing in a domain or institutional repository. Furthermore, the data organization and documentation also serves as a guideline for future research data management and the development of workflow protocols. Here we will describe the dataset created by this study, how sharing the dataset publicly could enable validation of the current study and extension by new studies, and the challenges that arise prior to sharing the dataset.
45

Reichsöllner, Emanuel, Andreas Freymann, Mirko Sonntag und Ingo Trautwein. „SUMO4AV: An Environment to Simulate Scenarios for Shared Autonomous Vehicle Fleets with SUMO Based on OpenStreetMap Data“. SUMO Conference Proceedings 3 (29.09.2022): 83–94. http://dx.doi.org/10.52825/scp.v3i.113.

Annotation:
In recent years, progress in the development of autonomous vehicles has increased tremendously. There are still technical, infrastructural and regulatory obstacles which need to be overcome. However, there is a clear consensus among experts that fully autonomous vehicles (level 5 of driving automation) will become reality in the coming years or at least in the coming decades. When fully autonomous vehicles are widely available for a fair trip price and can easily be used, a big shift from privately owned cars to carsharing will happen. On the one hand, this shift can bring a lot of chances for cities, such as the need for less parking space. But on the other hand, there is the risk of increased traffic when walking or biking trips are substituted by trips with shared autonomous vehicle fleets. While the expected social, ecological and economic impact of widely used shared autonomous vehicle fleets is tremendous, there are hardly any scientific studies or data available on the effects on cities and municipalities. The research project KI4ROBOFLEET addressed this demand. A result of the project was SUMO4AV, a simulation environment for shared autonomous vehicle fleets, which we present in this paper. This simulation tool is based on SUMO, an open-source traffic simulation package. SUMO4AV can support city planners and carsharing companies in evaluating the chances and risks of running shared autonomous fleets in their local environment with their specific infrastructure. At its core it comprises the mapping of OpenStreetMap entities into SUMO objects as well as a Scenario Builder to create different operation scenarios for autonomous driving. Additionally, the simulation tool offers recursive execution with different fleet sizes and optimization strategies, evaluated by economic and ecological parameters. As an evaluation of the toolset, a simulation of an ordinary scenario was performed. The workflow to simulate a scenario for shared autonomous vehicle fleets was successfully executed with the SUMO4AV environment.
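A tool of this kind ultimately drives SUMO through its TraCI interface; the following minimal Python loop shows that pattern, assuming SUMO is installed and that "scenario.sumocfg" is a placeholder for a scenario generated, for example, from OpenStreetMap data. It is a sketch of the control loop only, not of SUMO4AV itself.

# Minimal TraCI control loop: step a SUMO simulation and watch the number of active vehicles.
# Requires SUMO to be installed (and its Python tools on the path); the config file is a placeholder.
import traci

traci.start(["sumo", "-c", "scenario.sumocfg"])       # use "sumo-gui" instead for a visual run
step = 0
while traci.simulation.getMinExpectedNumber() > 0:    # vehicles still running or yet to depart
    traci.simulationStep()
    if step % 100 == 0:
        print(step, "active vehicles:", len(traci.vehicle.getIDList()))
    step += 1
traci.close()

Fleet-management logic (dispatching shared vehicles, trying different fleet sizes) would sit inside this loop and re-run it per scenario.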
46

Soiland-Reyes, Stian, Peter Sefton, Leyla Jael Castro, Frederik Coppens, Daniel Garijo, Simone Leo, Marc Portier und Paul Groth. „Creating lightweight FAIR Digital Objects with RO-Crate“. Research Ideas and Outcomes 8 (12.10.2022). http://dx.doi.org/10.3897/rio.8.e93937.

Annotation:
RO-Crate (Soiland-Reyes et al. 2022) is a lightweight method to package research outputs along with their metadata, based on Linked Data principles (Bizer et al. 2009) and W3C standards. RO-Crate provides a flexible mechanism for researchers archiving and publishing rich data packages (or any other research outcome) by capturing their dependencies and context. However, additional measures should be taken to ensure that a crate is also following the FAIR principles (Wilkinson 2016), including consistent use of persistent identifiers, provenance, community standards, clear machine/human-readable licensing for metadata and data, and Web publication of RO-Crates. The FAIR Digital Object (FDO) approach (De Smedt et al. 2020) gives a set of recommendations that aims to improve findability, accessibility, interoperability and reproducibility for any digital object, allowing implementation through different protocols or standards. Here we present how we have followed the FDO recommendations and turned research outcomes into FDOs by publishing RO-Crates on the Web using HTTP, following best practices for Linked Data. We highlight challenges and advantages of the FDO approach, and reflect on what is required for an FDO profile to achieve FAIR RO-Crates. The implementation allows for a broad range of use cases, across scientific domains. A minimal RO-Crate may be represented as a persistent URI resolving to a summary website describing the outputs in a scientific investigation (e.g. https://w3id.org/dgarijo/ro/sepln2022 with links to the used datasets along with software). One of the advantages of RO-Crates is flexibility, particularly regarding the metadata accompanying the actual research outcome. RO-Crate extends schema.org, a popular vocabulary for describing resources on the Web (Guha et al. 2016). A generic RO-Crate is not required to be typed beyond Dataset. In practice, RO-Crates declare conformance to particular profiles, allowing processing based on the specific needs and assumptions of a community or usage scenario. This, effectively, makes RO-Crates typed and thus machine-actionable. RO-Crate profiles serve as metadata templates, making it easier for communities to agree on and build upon their own metadata needs. RO-Crates have been combined with machine-actionable Data Management Plans (maDMPs) to automate and facilitate management of research data (Miksa et al. 2020). This mapping allows RO-Crates to be generated out of maDMPs and vice versa. The ELIXIR Software Management Plans initiative (Alves et al. 2021) is planning to move its questionnaire to a machine-actionable format with RO-Crate. The ELIXIR Biohackathon 2022 will explore integration of RO-Crate and the Data Stewardship Wizard (Pergl et al. 2019) with Galaxy, which can automate FDO creation that also follows data management plans. A tailored RO-Crate profile has been defined to represent Electronic Lab Notebook (ELN) protocols bundled together with metadata and related datasets. Schröder et al. (2022) use RO-Crates to encode provenance information at different levels, including researchers, manufacturers, biological and chemical resources, activities, measurements, and resulting research data. The use of RO-Crates makes it easier to programmatically answer questions about the protocols, for instance the activities, resources and equipment used to create data. Another example is WorkflowHub (Goble et al. 2021), which defines the Workflow RO-Crate profile (Bacall et al. 2022), imposing additional constraints such as the presence of a main workflow and a license. It also specifies which entity types and properties must be used to provide such information, implicitly defining a set of operations (e.g., get the main workflow and its language) that are valid on all complying crates. The workflow system Galaxy (The Galaxy Community 2022) retrieves such Workflow Crates using the GA4GH TRS API. The workflow profile has been further extended (with OOP-like inheritance) in Workflow Testing RO-Crate, adding formal workflow testing components: this adds operations such as getting remote test instances and test definitions, used by the LifeMonitor service to keep track of the health status of multiple published workflows. While RO-Crates use Web technologies, they are also self-contained, moving data along with their metadata. This is a powerful construct for interoperability across FAIR repositories, but it raises some challenges with regard to the mutability and persistence of crates. To illustrate how such challenges can be handled, we detail how the WorkflowHub repository follows several FDO principles:
- Workflow entries must be frozen for editing and have complete kernel metadata (title, authors, license, description) [FDOF4] before they can be assigned a persistent identifier, e.g. https://doi.org/10.48546/workflowhub.workflow.255.1 [FDOF1].
- Computational workflows can be composed of multiple files used as a whole, e.g. CWL files in a GitHub repository. These are snapshotted as a single RO-Crate ZIP, indicating the main workflow [FDOF11].
- PID resolution can content-negotiate to DataCite's PID metadata [FDOF2] or use FAIR Signposting to find an RO-Crate containing the workflow [FDOF3] and richer JSON-LD metadata resources [FDOF5, FDOF8], see Fig. 1.
- Metadata uses schema.org [FDOF7] following the community-developed Bioschemas ComputationalWorkflow profile [FDOF10].
- Workflows are discovered using the GA4GH TRS API [FDOF5, FDOF6, FDOF11] and created/modified using CRUD operations [FDOF6].
- The RO-Crate profile, effectively the FDO Type [FDOF7], is declared as https://w3id.org/workflowhub/workflow-ro-crate/1.0; the workflow language (e.g. https://w3id.org/workflowhub/workflow-ro-crate#galaxy) is defined in the metadata of the main workflow.
Further work on RO-Crate profiles includes formalising links to the API operations and repositories [FDOF5, FDOF7], including PIDs of profiles and types in the FAIR Signposting, and HTTP navigation to individual resources within the RO-Crate. RO-Crate has seen broad adoption by communities across many scientific disciplines, providing a lightweight, and therefore easy to adopt, approach to generating FAIR Digital Objects. It is rapidly becoming an integral part of the interoperability fabric between the different components, as demonstrated here for WorkflowHub, contributing to building the European Open Science Cloud.
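For readers unfamiliar with the format, the sketch below writes a minimal ro-crate-metadata.json by hand with Python's json module; it follows the general shape of RO-Crate 1.1, but the dataset, file and author shown are invented, and real crates are normally produced with tooling such as the ro-crate-py library rather than by hand.

# Hand-rolled, minimal ro-crate-metadata.json in the spirit of RO-Crate 1.1; the described
# dataset, file and author are placeholders.
import json

crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {
            "@id": "./",
            "@type": "Dataset",
            "name": "Example analysis outputs",
            "description": "Toy research object used to illustrate the RO-Crate layout.",
            "license": {"@id": "https://creativecommons.org/licenses/by/4.0/"},
            "hasPart": [{"@id": "results/summary.csv"}],
            "author": {"@id": "https://orcid.org/0000-0000-0000-0000"},
        },
        {"@id": "results/summary.csv", "@type": "File", "name": "Summary table"},
        {"@id": "https://orcid.org/0000-0000-0000-0000", "@type": "Person", "name": "Jane Example"},
    ],
}

with open("ro-crate-metadata.json", "w") as fh:
    json.dump(crate, fh, indent=2)   # placed next to the data files, this file makes the folder a crate

A profile-specific crate (such as a Workflow RO-Crate) would add a conformsTo reference to the profile and type one of the files as the main workflow.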
47

Child, Andrew Wright, Jennifer Hinds, Lucas Sheneman und Sven Buerki. „Centralized project-specific metadata platforms: toolkit provides new perspectives on open data management within multi-institution and multidisciplinary research projects“. BMC Research Notes 15, Nr. 1 (18.03.2022). http://dx.doi.org/10.1186/s13104-022-05996-3.

Full text of the source
Annotation:
Abstract: Open science and open data within scholarly research programs are growing both in popularity and by requirement from grant funding agencies and journal publishers. A central component of open data management, especially on collaborative, multidisciplinary, and multi-institutional science projects, is documentation of complete and accurate metadata, workflow, and source code, in addition to access to raw data and data products, to uphold FAIR (Findable, Accessible, Interoperable, Reusable) principles. Although best practice in data/metadata management is to use established, internationally accepted metadata schemata, many of these standards are discipline-specific, making it difficult to catalog multidisciplinary data and data products in a way that is easily findable and accessible. Consequently, scattered and incompatible metadata records create a barrier to scientific innovation, as researchers are burdened to find and link multidisciplinary datasets. One possible solution to increase data findability, accessibility, interoperability, reproducibility, and integrity within multi-institutional and interdisciplinary projects is a centralized and integrated data management platform. Overall, this type of interoperable framework supports reproducible open science and its dissemination to various stakeholders and the public in a FAIR manner by providing direct access to raw data and linking protocols, metadata and supporting workflow materials.
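As a rough illustration of the kind of linkage the abstract describes, the sketch below builds a single project-level metadata record that points to raw data, the protocol that produced it and the analysis workflow, using schema.org-style terms. The identifiers, URLs and property choices are placeholders for illustration only and do not reproduce the toolkit presented in the article.

```python
import json

# A minimal sketch of a centralized, project-level metadata record that links
# raw data, protocol and workflow in one place. Field names loosely follow
# schema.org; every identifier and URL below is an invented placeholder.
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Soil moisture time series, site A (placeholder)",
    "identifier": "https://example.org/datasets/site-a-soil-moisture",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "contentUrl": "https://example.org/data/site-a.csv",  # raw data
        "encodingFormat": "text/csv",
    },
    # link to the measurement protocol used in the field
    "measurementTechnique": "https://example.org/protocols/soil-moisture-v2",
    # link to the source code of the processing workflow
    "subjectOf": {
        "@type": "SoftwareSourceCode",
        "name": "QC and aggregation workflow (placeholder)",
        "codeRepository": "https://example.org/group/qc-workflow",
    },
}
print(json.dumps(record, indent=2))
```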
APA, Harvard, Vancouver, ISO, and other citation styles
48

Rios, Nelson, Sharif Islam, James Macklin and Andrew Bentley. „Technical Considerations for a Transactional Model to Realize the Digital Extended Specimen“. Biodiversity Information Science and Standards 5 (03.09.2021). http://dx.doi.org/10.3897/biss.5.73812.

Full text of the source
Annotation:
Technological innovations over the past two decades have given rise to the online availability of more than 150 million specimen and species-lot records from biological collections around the world through large-scale biodiversity data-aggregator networks. In the present landscape of biodiversity informatics, collections data are captured and managed locally in a wide variety of databases and collection management systems and then shared online as point-in-time Darwin Core archive snapshots. Data providers may publish periodic revisions to these data files, which are retrieved, processed and re-indexed by data aggregators. This workflow has resulted in data latencies and lags of months to years for some data providers. The Darwin Core Standard (Wieczorek et al. 2012) provides guidelines for representing biodiversity information digitally, yet varying institutional practices and lack of interoperability between Collection Management Systems continue to limit semantic uniformity, particularly with regard to the actual content of data within each field. Although some initiatives have begun to link data elements, our ability to comprehensively link all of the extended data associated with a specimen, or related specimens, is still limited due to the low uptake and usage of persistent identifiers. The concept now under consideration is to create a Digital Extended Specimen (DES) that adheres to the tenets of the Findable, Accessible, Interoperable and Reusable (FAIR) data management and stewardship principles and is the cumulative digital representation of all data, derivatives and products associated with a physical specimen, which are individually distinguished and linked by persistent identifiers on the Internet to create a web of knowledge. Biodiversity data aggregators that mobilize data across multiple institutions routinely perform data transformations in an attempt to provide a clean and consistent interpretation of the data. These aggregators are typically unable to interact directly with institutional data repositories, thereby limiting potentially fruitful opportunities for annotation, versioning, and repatriation. The ability to track such data transactions and satisfy the accompanying legal implications (e.g. Nagoya Protocol) is becoming a necessary component of data publication, which existing standards do not adequately address. Furthermore, no mechanisms exist to assess the "trustworthiness" of data, which is critical to scientific integrity and reproducibility, or to provide attribution metrics for collections to advocate for their contribution or effectiveness in supporting such research. Since the introduction of Darwin Core Archives (Wieczorek et al. 2012), little has changed in the underlying mechanisms for publishing natural science collections data, and we are now at a point where new innovations are required to meet current demand for continued digitization, access, research and management. One solution may involve changing the biodiversity data publication paradigm to one based on the atomized transactions relevant to each individual data record. These transactions, when summed over time, allow us to realize the most recently accepted revision as well as historical and alternative perspectives.
In order to realize the Digital Extended Specimen ideals and the linking of data elements, this transactional model, combined with open and FAIR data protocols, application programming interfaces (APIs), repositories, and workflow engines, can provide the building blocks for the next generation of natural science collections and biodiversity data infrastructures and services. These and other related topics have been the focus of phase 2 of the global consultation on converging Digital Specimens and Extended Specimens. Based on these discussions, this presentation will explore a conceptual solution leveraging elements from distributed version control, cryptographic ledgers and shared redundant storage to overcome many of the shortcomings of contemporary approaches.
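The following sketch illustrates, in a simplified form, what an "atomized" transaction log for a specimen record could look like: each edit is stored as its own hash-chained entry and the current view is derived by replaying the log. This is an illustrative reading of the transactional model, not the authors' specification; the field names and identifiers are invented.

```python
import hashlib
import json

# Each edit to a specimen record becomes one transaction, hash-chained to the
# previous entry (a lightweight nod to cryptographic ledgers). Replaying the log
# yields the latest accepted view; replaying only a prefix reconstructs any
# historical state ("summed over time").

def add_transaction(log, specimen_id, field, value):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"specimen": specimen_id, "field": field, "value": value, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def current_state(log, specimen_id):
    state = {}
    for tx in log:  # replay in order
        if tx["specimen"] == specimen_id:
            state[tx["field"]] = tx["value"]
    return state

log = []
add_transaction(log, "spec:001", "scientificName", "Quercus alba")
add_transaction(log, "spec:001", "country", "US")
add_transaction(log, "spec:001", "scientificName", "Quercus alba L.")  # later revision

print(current_state(log, "spec:001"))
# {'scientificName': 'Quercus alba L.', 'country': 'US'}
```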
APA, Harvard, Vancouver, ISO, and other citation styles
49

Penev, Lyubomir, Dimitrios Koureas, Quentin Groom, Jerry Lanfear, Donat Agosti, Ana Casino, Joe Miller et al. „The Biodiversity Knowledge Hub (BKH): a crosspoint and knowledge broker for FAIR and linked biodiversity data“. Biodiversity Information Science and Standards 7 (24.08.2023). http://dx.doi.org/10.3897/biss.7.111482.

Full text of the source
Annotation:
The Biodiversity Knowledge Hub (BKH) is a web platform acting as an integration point and broker of an open, FAIR (Findable, Accessible, Interoperable, Reusable) and interlinked corpus of biodiversity data, services and knowledge. It serves the entire biodiversity research cycle, from specimens and observations to sequences, taxon names and finally to scientific publications. The strategic aim of the BKH is to support a functional and integrated biodiversity knowledge graph and an emerging new community of users. The BKH is aimed at biodiversity researchers in the widest sense, research infrastructures and publishers (Fig. 1). The BKH is the key product of the EU-funded Biodiversity Community Integrated Knowledge Library (BiCIKL) project (Penev et al. 2022). The four goals of BiCIKL and the BKH are:
Improved access to open and FAIR biodiversity data;
Establishing bi-directional data linkages between infrastructures;
Development of new methods and workflows for semantic publishing, harvesting, liberating, linking, accessing and re-using of data in literature (specimens, material citations, samples, sequences, taxonomic names, taxonomic treatments, figures, tables);
Testing and implementation of services through use cases and open call projects for researchers outside the project.
The BKH consists of several modules: a Home page that presents the main user groups and the benefits the BKH provides to them; guidelines and protocols, i.e. documents on the policies, functions and recommendations for users; and a list of relevant projects that use linked FAIR biodiversity data. At the core of the BKH is the FAIR Data Place (FDP), which presents novel services and tools developed over the course of BiCIKL. In the future, the FDP will also accept services for linked data provided by new contributors. The FDP consists of three sub-modules:
Infrastructures and organisations: lists the contributing organisations and research infrastructures with short descriptions and links to their data, tools and services. Research infrastructures are sorted by the main type of biodiversity data they aggregate and serve: specimens, sequences, taxon names and literature.
Linked data services: a catalogue of novel services that deliver FAIR data linked between the participating research infrastructures. Examples of such services are ChecklistBank, LifeBlock, OpenBiodiv, TreatmentBank, SIBiLS “BiodiversityPMC”, eBioDiv, SynoSpecies, PlutoF Curation Tool and others.
Become a contributor application form: a formal questionnaire which serves as a basis to check the suitability of an organisation or research infrastructure to join the BKH. Part of the application form is a FAIR data checklist.
The BKH serves as a navigation system in a universe of interconnected biodiversity research infrastructures and is open to new contributors and collaborators in accessing open data and knowledge by anybody, anywhere, at any time.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Tournoy, Raphaël. „Episciences overlay journals“. Septentrio Conference Series, no. 1 (14.09.2023). http://dx.doi.org/10.7557/5.7147.

Full text of the source
Annotation:
Episciences is a publishing platform for diamond open access overlay journals. The ambition is to provide scientific communities with the technical means to produce high-quality, cost-effective journals in line with FAIR principles. The process is based on open repositories (arXiv, Zenodo, HAL): all the published content of a journal is hosted on repositories. Episciences is therefore set up as a service layer for repositories, using them as input and output for open access publication. The platform was launched in 2013 as a peer review service for preprints hosted in open archives. Over the years, the list of services offered has grown and adapted to new trends in scientific publications. For example, peer review reports are now a new type of content that can be hosted in repositories, along with datasets and software code for publications. Support for these new objects in Episciences increases the transparency and reproducibility of science. Support for datasets and software code also means that both data journals and software journals can be easily created on top of repositories. The platform has implemented new protocols and workflows promoted by the Confederation of Open Access Repositories (COAR) Next Generation Working Group, such as COAR Notify and Signposting. This has enabled us to build innovative new services for researchers on top of the HAL repository. We have also connected Episciences to other open science services such as OpenAIRE Graph, OpenCitations and Scholexplorer, which has allowed us to add new services for both journals and the HAL repository at the same time. The presentation will explain how Episciences and overlay journals in general can act as a bridge between publications, open repositories, data and software repositories, and more broadly open science infrastructures.
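To give a flavour of the notification workflows mentioned above, the sketch below assembles a COAR Notify-style payload with which an overlay journal might announce a review of a deposited preprint. COAR Notify builds on ActivityStreams 2.0 and Linked Data Notifications, but the exact pattern, vocabulary terms and endpoints shown here are placeholders rather than the Episciences implementation.

```python
import json
import uuid

# An illustrative COAR Notify-style notification from an overlay journal to a
# repository, announcing that a review of a deposited preprint is available.
# All URLs are invented placeholders; the term choices only approximate the
# COAR Notify patterns and are not taken from Episciences.
notification = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        "https://coar-notify.net",
    ],
    "id": f"urn:uuid:{uuid.uuid4()}",
    "type": ["Announce", "coar-notify:ReviewAction"],
    "origin": {"id": "https://overlay-journal.example.org", "type": "Service"},
    "target": {"id": "https://repository.example.org", "type": "Service"},
    "object": {
        "id": "https://overlay-journal.example.org/reviews/42",  # the review
        "type": "Document",
    },
    "context": {
        "id": "https://repository.example.org/records/1234",  # the reviewed preprint
        "type": "Document",
    },
}

# In practice this JSON-LD body would be POSTed to the repository's inbox.
print(json.dumps(notification, indent=2))
```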
APA, Harvard, Vancouver, ISO, and other citation styles
