Academic literature on the topic "Scientific workflow and FAIR protocols"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Browse the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Scientific workflow and FAIR protocols".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Scientific workflow and FAIR protocols"

1

Celebi, Remzi, Joao Rebelo Moreira, Ahmed A. Hassan, Sandeep Ayyar, Lars Ridder, Tobias Kuhn, and Michel Dumontier. "Towards FAIR protocols and workflows: the OpenPREDICT use case". PeerJ Computer Science 6 (September 21, 2020): e281. http://dx.doi.org/10.7717/peerj-cs.281.

Full text
Abstract
It is essential for the advancement of science that researchers share, reuse and reproduce each other’s workflows and protocols. The FAIR principles are a set of guidelines that aim to maximize the value and usefulness of research data, and emphasize the importance of making digital objects findable and reusable by others. The question of how to apply these principles not just to data but also to the workflows and protocols that consume and produce them is still under debate and poses a number of challenges. In this paper we describe a two-fold approach of simultaneously applying the FAIR principles to scientific workflows as well as the involved data. We apply and evaluate our approach on the case of the PREDICT workflow, a highly cited drug repurposing workflow. This includes FAIRification of the involved datasets, as well as applying semantic technologies to represent and store data about the detailed versions of the general protocol, of the concrete workflow instructions, and of their execution traces. We propose a semantic model to address these specific requirements, which was evaluated by answering competency questions. This semantic model consists of classes and relations from a number of existing ontologies, including Workflow4ever, PROV, EDAM, and BPMN. This then allowed us to formulate and answer new kinds of competency questions. Our evaluation shows the high degree to which our FAIRified OpenPREDICT workflow now adheres to the FAIR principles and the practicality and usefulness of being able to answer our new competency questions.
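The provenance-centred modelling described here can be illustrated with a small RDF sketch. The snippet below is not the OpenPREDICT semantic model itself; it is a minimal example, assuming the rdflib package and invented example IRIs, of how a workflow version, an execution trace, and an input dataset might be linked with PROV terms so that a competency question such as "which datasets did this run use?" becomes a simple graph query.

from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical identifiers; the real OpenPREDICT resources use their own IRIs.
EX = Namespace("https://example.org/openpredict/")
PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

workflow = EX["workflow/v1"]          # a concrete workflow version
execution = EX["run/2020-09-21"]      # one execution trace
dataset = EX["dataset/drugbank-v5"]   # an input dataset version

g.add((workflow, RDF.type, PROV.Plan))
g.add((execution, RDF.type, PROV.Activity))
g.add((execution, PROV.used, dataset))
g.add((execution, PROV.wasAssociatedWith, EX["agent/openpredict-team"]))
g.add((dataset, RDF.type, PROV.Entity))
g.add((dataset, PROV.wasDerivedFrom, Literal("DrugBank 5.x")))

# A competency question such as "which datasets did this run use?" becomes a graph query.
for _, _, used in g.triples((execution, PROV.used, None)):
    print(used)

In the paper's setting, the same pattern is extended with classes and relations from Workflow4ever, EDAM, and BPMN.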
2

Yuen, Denis, Louise Cabansay, Andrew Duncan, Gary Luu, Gregory Hogue, Charles Overbeck, Natalie Perez et al. "The Dockstore: enhancing a community platform for sharing reproducible and accessible computational protocols". Nucleic Acids Research 49, W1 (May 12, 2021): W624–W632. http://dx.doi.org/10.1093/nar/gkab346.

Full text
Abstract
Dockstore (https://dockstore.org/) is an open source platform for publishing, sharing, and finding bioinformatics tools and workflows. The platform has facilitated large-scale biomedical research collaborations by using cloud technologies to increase the Findability, Accessibility, Interoperability and Reusability (FAIR) of computational resources, thereby promoting the reproducibility of complex bioinformatics analyses. Dockstore supports a variety of source repositories, analysis frameworks, and language technologies to provide a seamless publishing platform for authors to create a centralized catalogue of scientific software. The ready-to-use packaging of hundreds of tools and workflows, combined with the implementation of interoperability standards, enables users to launch analyses across multiple environments. Dockstore is widely used: more than twenty-five high-profile organizations share analysis collections through the platform in a variety of workflow languages, including the Broad Institute's GATK best practice and COVID-19 workflows (WDL), nf-core workflows (Nextflow), the Intergalactic Workflow Commission tools (Galaxy), and workflows from Seven Bridges (CWL), to highlight just a few. Here we describe the improvements made over the last four years, including the expansion of system integrations supporting authors, the addition of collaboration features and analysis platform integrations supporting users, and other enhancements that improve the overall scientific reproducibility of Dockstore content.
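Programmatic access to such a catalogue usually goes through a registry API. The sketch below shows the general shape of a keyword lookup with Python's requests library; the endpoint path and query parameters are assumptions made for illustration (Dockstore documents a GA4GH Tool Registry Service style API, but the exact URL and parameters should be checked against its current documentation).

import requests

# Assumed GA4GH Tool Registry Service (TRS) style endpoint; verify against the
# current Dockstore API documentation before relying on this path.
BASE_URL = "https://dockstore.org/api/ga4gh/trs/v2/tools"

def find_workflows(keyword: str, limit: int = 5):
    """Return a few (id, name) pairs for registered entries matching a keyword."""
    response = requests.get(BASE_URL, params={"toolname": keyword, "limit": limit}, timeout=30)
    response.raise_for_status()
    return [(tool.get("id"), tool.get("name")) for tool in response.json()]

if __name__ == "__main__":
    for tool_id, name in find_workflows("GATK"):
        print(tool_id, name)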
3

Zulfiqar, Mahnoor, Michael R. Crusoe, Birgitta König-Ries, Christoph Steinbeck, Kristian Peters, and Luiz Gadelha. "Implementation of FAIR Practices in Computational Metabolomics Workflows—A Case Study". Metabolites 14, no. 2 (February 10, 2024): 118. http://dx.doi.org/10.3390/metabo14020118.

Full text
Abstract
Scientific workflows facilitate the automation of data analysis tasks by integrating various software and tools executed in a particular order. To enable transparency and reusability in workflows, it is essential to implement the FAIR principles. Here, we describe our experiences implementing the FAIR principles for metabolomics workflows using the Metabolome Annotation Workflow (MAW) as a case study. MAW is specified using the Common Workflow Language (CWL), allowing for the subsequent execution of the workflow on different workflow engines. MAW is registered using a CWL description on WorkflowHub. During the submission process on WorkflowHub, a CWL description is used for packaging MAW using the Workflow RO-Crate profile, which includes metadata in Bioschemas. Researchers can use this narrative discussion as a guideline to commence using FAIR practices for their bioinformatics or cheminformatics workflows while incorporating necessary amendments specific to their research area.
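Registering a workflow the way MAW is registered revolves around an RO-Crate metadata file that points at the CWL description. The following sketch hand-writes a minimal Workflow-RO-Crate-style ro-crate-metadata.json with the standard library; the file layout follows the RO-Crate 1.1 convention, but the identifiers and profile terms are illustrative rather than taken from the MAW submission, and real WorkflowHub uploads are normally produced with dedicated tooling.

import json

# Minimal Workflow-RO-Crate-style metadata; field values are illustrative only.
crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "about": {"@id": "./"},
        },
        {
            "@id": "./",
            "@type": "Dataset",
            "name": "Metabolome Annotation Workflow (example crate)",
            "hasPart": [{"@id": "maw.cwl"}],
        },
        {
            "@id": "maw.cwl",
            "@type": ["File", "SoftwareSourceCode", "ComputationalWorkflow"],
            "name": "MAW workflow description",
            "programmingLanguage": {"@id": "https://w3id.org/workflowhub/workflow-ro-crate#cwl"},
        },
    ],
}

with open("ro-crate-metadata.json", "w", encoding="utf-8") as handle:
    json.dump(crate, handle, indent=2)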
4

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies". Scalable Computing: Practice and Experience 20, no. 2 (May 2, 2019): iii–vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Full text
Abstract
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies Cloud computing has been established as the most popular as well as suitable computing infrastructure providing on-demand, scalable and pay-as-you-go computing resources and services for the state-of-the-art ICT applications which generate a massive amount of data. Though Cloud is certainly the most fitting solution for most of the applications with respect to processing capability and storage, it may not be so for the real-time applications. The main problem with Cloud is the latency as the Cloud data centres typically are very far from the data sources as well as the data consumers. This latency is ok with the application domains such as enterprise or web applications, but not for the modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicle, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart building, smart city, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of application is that the latency between the data generation and consumption should be minimal. For that, the generated data need to be processed locally, instead of sending to the Cloud. This approach is known as Edge computing where the data processing is done at the network edge in the edge devices such as set-top boxes, access points, routers, switches, base stations etc. which are typically located at the edge of the network. These devices are increasingly being incorporated with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to the Emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient sophisticated sensors. Different Edge computing architectures are proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.). All of these enable the IoT and sensor data to be processed closer to the data sources. But, among them, Fog computing, a Cisco initiative, has attracted the most attention of people from both academia and corporate and has been emerged as a new computing-infrastructural paradigm in recent years. Though Fog computing has been proposed as a different computing architecture than Cloud, it is not meant to replace the Cloud. Rather, Fog computing extends the Cloud services to network edges for providing computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) are supposed to pre-process the data, serve the need of the associated applications preliminarily, and forward the data to the Cloud if the data are needed to be stored and analysed further. Fog computing enhances the benefits from smart devices operational not only in network perimeter but also under cloud servers. Fog-enabled services can be deployed anywhere in the network, and with these services provisioning and management, huge potential can be visualized to enhance intelligence within computing networks to realize context-awareness, high response time, and network traffic offloading. Several possibilities of Fog computing are already established. For example, sustainable smart cities, smart grid, smart logistics, environment monitoring, video surveillance, etc. 
To design and implementation of Fog computing systems, various challenges concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. are needed to be addressed. Also, to make Fog compatible with Cloud several factors such as Fog and Cloud system integration, service collaboration between Fog and Cloud, workload balance between Fog and Cloud, and so on need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of the Scalable Computing: Practice and Experience. We had received 20 Research Papers and out of which 14 Papers are selected for Publication. The aim of this special issue is to highlight Recent Trends and Future of Fog and Edge Computing, Services and Enabling technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to Fog Computing, Cloud Computing and Edge Computing. Sujata Dash et al. contributed a paper titled “Edge and Fog Computing in Healthcare- A Review” in which an in-depth review of fog and mist computing in the area of health care informatics is analysed, classified and discussed. The review presented in this paper is primarily focussed on three main aspects: The requirements of IoT based healthcare model and the description of services provided by fog computing to address then. The architecture of an IoT based health care system embedding fog computing layer and implementation of fog computing layer services along with performance and advantages. In addition to this, the researchers have highlighted the trade-off when allocating computational task to the level of network and also elaborated various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al. in the paper titled “Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware” proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization in hourly billing cycle while giving the quality of service to end-users. The proposed technique use time series workload forecasting, CPU utilization and response time in the analysis phase. The proposed technique is tested using CloudSim simulator and R language is used to implement prediction model on ClarkNet weblog. The proposed approach is compared with two baseline approaches i.e. Cost-aware (LRM) and (ARMA). The response time, CPU utilization and predicted request are applied in the analysis and planning phase for scaling decisions. The profit-aware surplus VM selection policy used in the execution phase for select the appropriate VM for scale-down. The result shows that the proposed model for web applications provides fair utilization of resources with minimum cost, thus provides maximum profit to application provider and QoE to the end users. 
Akshi kumar and Abhilasha Sharma in the paper titled “Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance” utilized a semantic knowledge model for investigating public opinion towards adaption of fog enabled services for governance and comprehending the significance of two s-components (sentic and social) in aforesaid structure that specifically visualize fog enabled Sentic-Social Governance. The results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology driven TF-IDF feature extraction to find the best opinion mining model with optimal accuracy. The results concluded that implementation of ontology driven opinion mining for feature extraction in polarity classification outperforms the traditional TF-IDF method validated over baseline supervised learning algorithms with an average of 7.3% improvement in accuracy and approximately 38% reduction in features has been reported. Avinash Kaur and Pooja Gupta in the paper titled “Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing” proposed novel hybrid balanced task clustering algorithm using the parameter of impact factor of workflows along with the structure of workflow and using this technique, tasks can be considered for clustering either vertically or horizontally based on value of impact factor. The testing of the algorithm proposed is done on Workflowsim- an extension of CloudSim and DAG model of workflow was executed. The Algorithm was tested on variables- Execution time of workflow and Performance Gain and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB) and Horizontal Impact Factor Balancing (HIFB) and results stated that proposed algorithm is almost 5-10% better in makespan time of workflow depending on the workflow used. Pijush Kanti Dutta Pramanik et al. in the paper titled “Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges” presented a comprehensive statistical survey of the various commercial CPUs, GPUs, SoCs for smartphones confirming the capability of the SCC as an alternative to HPC. An exhaustive survey is presented on the present and optimistic future of the continuous improvement and research on different aspects of smartphone battery and other alternative power sources which will allow users to use their smartphones for SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam in the paper titled “The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud” proposed a novel method to detect slow HTTP DDoS attacks in cloud to overcome the issue of consuming all available server resources and making it unavailable to the real users. The proposed method is implemented using OpenStack cloud platform with slowHTTPTest tool. The results stated that proposed technique detects the attack in efficient manner. Mandeep Kaur and Rajni Mohana in the paper titled “Static Load Balancing Technique for Geographically partitioned Public Cloud” proposed a novel approach focused upon load balancing in the partitioned public cloud by combining centralized and decentralized approaches, assuming the presence of fog layer. A load balancer entity is used for decentralized load balancing at partitions and a controller entity is used for centralized level to balance the overall load at various partitions. 
Results are compared with First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms. In this work, the researchers compared the Waiting Time, Finish Time and Actual Run Time of tasks using these algorithms. To reduce the number of unhandled jobs, a new load state is introduced which checks load beyond conventional load states. Major objective of this approach is to reduce the need of runtime virtual machine migration and to reduce the wastage of resources, which may be occurring due to predefined values of threshold. Mukta and Neeraj Gupta in the paper titled “Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space” proposes an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate ABW on a link. The major contributions of the proposed work are: i) it uses mathematical models based on renewal theory to calculate the collision probability of data packets which makes the process simple and accurate, ii) consideration of mobility under 3-D space to predict the link failure and provides an accurate admission control. To test the proposed technique, the researcher used NS-2 simulator to compare the proposed technique i.e. AABWM with AODV, ABE, IAB and IBEM on throughput, Packet loss ratio and Data delivery. Results stated that AABWM performs better as compared to other approaches. R.Sridharan and S. Domnic in the paper titled “Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment” proposed a novel heuristic IcAPER,(Inter-communication Aware Placement for Elastic Requests) algorithm. The proposed algorithm uses the network neighborhood machine for placement, once current resource is fully utilized by the application. The performance IcAPER algorithm is compared with First Come First Serve (FCFS), Random and First Fit Decreasing (FFD) algorithms for the parameters (a) resource utilization (b) resource fragmentation and (c) Number of requests having intercommunicating tasks placed on to same PM using CloudSim simulator. Simulation results shows IcAPER maps 34% more tasks on to the same PM and also increase the resource utilization by 13% while decreasing the resource fragmentation by 37.8% when compared to other algorithms. Velliangiri S. et al. in the paper titled “Trust factor based key distribution protocol in Hybrid Cloud Environment” proposed a novel security protocol comprising of two stages: first stage, Group Creation using the trust factor and develop key distribution security protocol. It performs the communication process among the virtual machine communication nodes. Creating several groups based on the cluster and trust factors methods. The second stage, the ECC (Elliptic Curve Cryptography) based distribution security protocol is developed. The performance of the Trust Factor Based Key Distribution protocol is compared with the existing ECC and Diffie Hellman key exchange technique. The results state that the proposed security protocol has more secure communication and better resource utilization than the ECC and Diffie Hellman key exchange technique in the Hybrid cloud. Vivek kumar prasad et al. in the paper titled “Influence of Monitoring: Fog and Edge Computing” discussed various techniques involved for monitoring for edge and fog computing and its advantages in addition to a case study based on Healthcare monitoring system. Avinash Kaur et al. 
elaborated a comprehensive view of existing data placement schemes proposed in literature for cloud computing. Further, it classified data placement schemes based on their assess capabilities and objectives and in addition to this comparison of data placement schemes. Parminder Singh et al. presented a comprehensive review of Auto-Scaling techniques of web applications in cloud computing. The complete taxonomy of the reviewed articles is done on varied parameters like auto-scaling, approach, resources, monitoring tool, experiment, workload and metric, etc. Simar Preet Singh et al. in the paper titled “Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform” proposed a novel scheme to improve the user contentment by improving the cost to operation length ratio, reducing the customer churn, and boosting the operational revenue. The proposed scheme is learnt to reduce the queue size by effectively allocating the resources, which resulted in the form of quicker completion of user workflows. The proposed method results are evaluated against the state-of-the-art scene with non-power aware based task scheduling mechanism. The results were analyzed using parameters-- energy, SLA infringement and workflow execution delay. The performance of the proposed schema was analyzed in various experiments particularly designed to analyze various aspects for workflow processing on given fog resources. The LRR (35.85 kWh) model has been found most efficient on the basis of average energy consumption in comparison to the LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh) and IQR (47.87 kWh). The LRR model has been also observed as the leader when compared on the basis of number of VM migrations. The LRR (2520 VMs) has been observed as best contender on the basis of mean of number of VM migrations in comparison with LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs) and IQR (5352 VMs).
5

Sinaci, A. Anil, Francisco J. Núñez-Benjumea, Mert Gencturk, Malte-Levin Jauer, Thomas Deserno, Catherine Chronaki, Giorgio Cangioli et al. "From Raw Data to FAIR Data: The FAIRification Workflow for Health Research". Methods of Information in Medicine 59, S 01 (June 2020): e21–e32. http://dx.doi.org/10.1055/s-0040-1713684.

Full text
Abstract
Background: FAIR (findability, accessibility, interoperability, and reusability) guiding principles seek the reuse of data and other digital research input, output, and objects (algorithms, tools, and workflows that led to that data) making them findable, accessible, interoperable, and reusable. GO FAIR - a bottom-up, stakeholder driven and self-governed initiative - defined a seven-step FAIRification process focusing on data, but also indicating the required work for metadata. This FAIRification process aims at addressing the translation of raw datasets into FAIR datasets in a general way, without considering specific requirements and challenges that may arise when dealing with some particular types of data. Objectives: This scientific contribution addresses the architecture design of an open technological solution built upon the FAIRification process proposed by “GO FAIR” which addresses the identified gaps that such process has when dealing with health datasets. Methods: A common FAIRification workflow was developed by applying restrictions on existing steps and introducing new steps for specific requirements of health data. These requirements have been elicited after analyzing the FAIRification workflow from different perspectives: technical barriers, ethical implications, and legal framework. This analysis identified gaps when applying the FAIRification process proposed by GO FAIR to health research data management in terms of data curation, validation, deidentification, versioning, and indexing. Results: A technological architecture based on the use of Health Level Seven International (HL7) FHIR (fast health care interoperability resources) resources is proposed to support the revised FAIRification workflow. Discussion: Research funding agencies all over the world increasingly demand the application of the FAIR guiding principles to health research output. Existing tools do not fully address the identified needs for health data management. Therefore, researchers may benefit in the coming years from a common framework that supports the proposed FAIRification workflow applied to health datasets. Conclusion: Routine health care datasets or data resulting from health research can be FAIRified, shared and reused within the health research community following the proposed FAIRification workflow and implementing technical architecture.
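Expressing FAIRified health data as HL7 FHIR resources, as the revised workflow proposes, means serialising records as standard FHIR JSON with resolvable codes and pseudonymised references. The sketch below builds one illustrative Observation resource as a plain dictionary; the profile URL and patient reference are placeholders invented for the example, not artifacts of the FAIR4Health architecture.

import json
import uuid

# Illustrative FHIR JSON for a de-identified observation; the profile URL and
# subject reference are placeholders, not FAIR4Health artifacts.
observation = {
    "resourceType": "Observation",
    "id": str(uuid.uuid4()),
    "meta": {"profile": ["https://example.org/fhir/StructureDefinition/fairified-observation"]},
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "29463-7", "display": "Body weight"}]
    },
    "subject": {"reference": "Patient/pseudonym-0042"},  # pseudonymised, supporting deidentification
    "valueQuantity": {"value": 72.5, "unit": "kg", "system": "http://unitsofmeasure.org", "code": "kg"},
}

print(json.dumps(observation, indent=2))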
6

de Visser, Casper, Lennart F. Johansson, Purva Kulkarni, Hailiang Mei, Pieter Neerincx, K. Joeri van der Velde, Péter Horvatovich et al. "Ten quick tips for building FAIR workflows". PLOS Computational Biology 19, no. 9 (September 28, 2023): e1011369. http://dx.doi.org/10.1371/journal.pcbi.1011369.

Full text
Abstract
Research data is accumulating rapidly and with it the challenge of fully reproducible science. As a consequence, implementation of high-quality management of scientific data has become a global priority. The FAIR (Findable, Accessible, Interoperable and Reusable) principles provide practical guidelines for maximizing the value of research data; however, processing data using workflows—systematic executions of a series of computational tools—is equally important for good data management. The FAIR principles have recently been adapted to Research Software (FAIR4RS Principles) to promote the reproducibility and reusability of any type of research software. Here, we propose a set of 10 quick tips, drafted by experienced workflow developers that will help researchers to apply FAIR4RS principles to workflows. The tips have been arranged according to the FAIR acronym, clarifying the purpose of each tip with respect to the FAIR4RS principles. Altogether, these tips can be seen as practical guidelines for workflow developers who aim to contribute to more reproducible and sustainable computational science, aiming to positively impact the open science and FAIR community.
7

Albtoush, Alaa, Farizah Yunus, Khaled Almi’ani, and Noor Maizura Mohamad Noor. "Structure-Aware Scheduling Methods for Scientific Workflows in Cloud". Applied Sciences 13, no. 3 (February 3, 2023): 1980. http://dx.doi.org/10.3390/app13031980.

Full text
Abstract
Scientific workflows consist of numerous tasks subject to constraints on data dependency. Effective workflow scheduling is perpetually necessary to efficiently utilize the provided resources to minimize workflow execution cost and time (makespan). Accordingly, cloud computing has emerged as a promising platform for scheduling scientific workflows. In this paper, level- and hierarchy-based scheduling approaches were proposed to address the problem of scheduling scientific workflow in the cloud. In the level-based approach, tasks are partitioned into a set of isolated groups in which available virtual machines (VMs) compete to execute the groups’ tasks. Accordingly, based on a utility function, a task will be assigned to the VM that will achieve the highest utility by executing this task. The hierarchy-based approach employs a look-ahead approach, in which the partitioning of the workflow tasks is performed by considering the entire structure of the workflow, whereby the objective is to reduce the data dependency between the obtained groups. Additionally, in the hierarchy-based approach, a fair-share strategy is employed to determine the share (number of VMs) that will be assigned to each group of tasks. Dividing the available VMs based on the computational requirements of the task groups provides the hierarchy-based approach the advantage of further utilizing the VMs usage. The results show that, on average, both approaches improve the execution time and cost by 27% compared to the benchmarked algorithms.
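The level-based approach summarised above (partition tasks into groups and let VMs compete through a utility score) can be made concrete with a toy assignment loop. The utility function in the sketch, which weighs estimated runtime against monetary cost, is an invented stand-in rather than the one defined in the paper.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    workload: float          # abstract compute units

@dataclass
class VM:
    name: str
    speed: float             # compute units per hour
    price_per_hour: float

def utility(vm: VM, task: Task, time_weight: float = 0.7) -> float:
    """Toy utility: higher is better; penalises both runtime and monetary cost."""
    runtime = task.workload / vm.speed
    cost = runtime * vm.price_per_hour
    return -(time_weight * runtime + (1.0 - time_weight) * cost)

def assign_level(tasks: list[Task], vms: list[VM]) -> dict[str, str]:
    """Greedy level-based assignment: each task goes to the VM with the highest utility."""
    return {t.name: max(vms, key=lambda vm: utility(vm, t)).name for t in tasks}

level = [Task("align", 120.0), Task("call_variants", 300.0)]
machines = [VM("small", 10.0, 0.05), VM("large", 40.0, 0.25)]
print(assign_level(level, machines))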
8

Mahmoudi, Morteza, Saya Ameli, and Sherry Moss. "The urgent need for modification of scientific ranking indexes to facilitate scientific progress and diminish academic bullying". BioImpacts 10, no. 1 (September 25, 2019): 5–7. http://dx.doi.org/10.15171/bi.2019.30.

Full text
Abstract
Academic bullying occurs when senior scientists direct abusive behavior such as verbal insults, public shaming, isolation, and threatening toward vulnerable junior colleagues such as postdocs, graduate students and lab members. We believe that one root cause of bullying behavior is the pressure felt by scientists to compete for rankings designed to measure their scientific worth. These ratings, such as the h-index, have several unintended consequences, one of which we believe is academic bullying. Under pressure to achieve higher and higher rankings, in exchange for positive evaluations, grants and recognition, senior scientists exert undue pressure on their junior staff in the form of bullying. Lab members have little or no recourse due to the lack of fair institutional protocols for investigating bullying, dependence on grant or institutional funding, fear of losing time and empirical work by changing labs, and vulnerability to visa cancellation threats among international students. We call for institutions to reconsider their dependence on these over-simplified surrogates for real scientific progress and to provide fair and just protocols that will protect targets of academic bullying from emotional and financial distress.
9

Ammar, Ammar, Serena Bonaretti, Laurent Winckers, Joris Quik, Martine Bakker, Dieter Maier, Iseult Lynch, Jeaphianne van Rijn, and Egon Willighagen. "A Semi-Automated Workflow for FAIR Maturity Indicators in the Life Sciences". Nanomaterials 10, no. 10 (October 20, 2020): 2068. http://dx.doi.org/10.3390/nano10102068.

Full text
Abstract
Data sharing and reuse are crucial to enhance scientific progress and maximize return of investments in science. Although attitudes are increasingly favorable, data reuse remains difficult due to lack of infrastructures, standards, and policies. The FAIR (findable, accessible, interoperable, reusable) principles aim to provide recommendations to increase data reuse. Because of the broad interpretation of the FAIR principles, maturity indicators are necessary to determine the FAIRness of a dataset. In this work, we propose a reproducible computational workflow to assess data FAIRness in the life sciences. Our implementation follows principles and guidelines recommended by the maturity indicator authoring group and integrates concepts from the literature. In addition, we propose a FAIR balloon plot to summarize and compare dataset FAIRness. We evaluated the feasibility of our method on three real use cases where researchers looked for six datasets to answer their scientific questions. We retrieved information from repositories (ArrayExpress, Gene Expression Omnibus, eNanoMapper, caNanoLab, NanoCommons and ChEMBL), a registry of repositories, and a searchable resource (Google Dataset Search) via application program interfaces (API) wherever possible. With our analysis, we found that the six datasets met the majority of the criteria defined by the maturity indicators, and we showed areas where improvements can easily be reached. We suggest that use of standard schema for metadata and the presence of specific attributes in registries of repositories could increase FAIRness of datasets.
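An automated maturity-indicator assessment of this kind reduces to scoring a dataset's metadata against a list of pass/fail tests and summarising the result per FAIR facet. The sketch below is a deliberately simplified scorer with made-up indicators; the actual workflow queries repository APIs and follows the maturity indicator authoring group's definitions.

from typing import Callable

# Made-up indicators keyed by FAIR facet; real maturity indicators are more nuanced.
Indicator = Callable[[dict], bool]

INDICATORS: dict[str, list[tuple[str, Indicator]]] = {
    "Findable": [("has persistent identifier", lambda m: bool(m.get("doi")))],
    "Accessible": [("retrievable via standard protocol", lambda m: m.get("access_url", "").startswith("https://"))],
    "Interoperable": [("uses community metadata schema", lambda m: m.get("schema") in {"schema.org", "Bioschemas"})],
    "Reusable": [("has explicit licence", lambda m: bool(m.get("license")))],
}

def assess(metadata: dict) -> dict[str, float]:
    """Return the fraction of passed indicators per FAIR facet."""
    scores = {}
    for facet, tests in INDICATORS.items():
        passed = sum(1 for _, test in tests if test(metadata))
        scores[facet] = passed / len(tests)
    return scores

example = {"doi": "10.1234/abcd", "access_url": "https://example.org/data", "license": "CC-BY-4.0"}
print(assess(example))   # no schema declared, so Interoperable scores 0.0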
10

Ayoubi, Doaa. "Investigational drugs services pharmacists education and workflow structure." JCO Global Oncology 9, Supplement_1 (August 2023): 169. http://dx.doi.org/10.1200/go.2023.9.supplement_1.169.

Full text
Abstract
169 Background: Per NIH U.S. National Library of Medicine, 428,103 research studies are registered globally as of September 19th, 2022. Each institution must assess its institutional readiness as the complexity and volume of clinical trial increases. Institutional readiness helps assess the capacity of an institution to adopt new technologies and policies to accept, activate, and adjust research despite the complexity of the clinical trial. Clinical research pharmacists help improve the safety and quality of the research by reviewing the scientific literature and medication-related information to develop protocols and evaluate clinical trial feasibility as part of the scientific review committee while performing supportive pharmaceutical review of the protocol, preparation, storage, dispensing, and consulting clinical coordinators and sponsors on the logistics of the trial. Investigational drug service (IDS) also oversees institutional compliance with Good Clinical Practices (GCPs) and Good Manufacturing Practices (GMPs), FDA regulations, and laws. Lack of training for pharmacists, poor safety and quality control, lack of resources, and unstandardized approach to the management of investigational drug products can lead to discrepancy, delay in care, and ultimately, harm to the patient. This study will address the deficiency of institutional readiness to conduct clinical trials with investigational agents and introduce policies and procedures needed to be followed by pharmacists or pharmacy technicians to conduct research. Methods: To explore the challenges in clinical trials and compare work efficiency, work productivity, and growth in the investigational drug service (IDS) pharmacy. This study is conducted at a single-health system compromised of 7 satellite investigational pharmacy from June 2021 to August 2022. Interventions include Epic/Beacon training, CITI training, annual compounding competencies, protocol and procedure development, and Vestigo implementation. Data for inclusion was identified from (where Dr. Ayoubi got data from) were extracted and analyzed. Results: 71% decrease in # of protocols on the priority list to amend protocols. Informatics pharmacists build 190 protocols in one year. Revenue has doubled in one year. Productivity increased as time to verify is reduced by 9.43 minutes to verify an order and time to dispense is reduced by 31.3 min. Number of clinical trials designated to each pharmacist increased from 3-4 to 20 productions per month. Number of investigational drugs ordered increased by 89 last year. Conclusions: Education and Training remain one of the main set back to clinical trials growth, and proper training and education are the pillars to conducting clinical trials in a safe manner and promote growth.

Theses on the topic "Scientific workflow and FAIR protocols"

1

Djaffardjy, Marine. "Pipelines d'Analyse Bioinformatiques : solutions offertes par les Systèmes de Workflows, Cadre de représentation et Étude de la Réutilisation". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG059.

Full text
Abstract
Bioinformatics is a multidisciplinary field that combines biology, computer science, and statistics, aiming to gain a better understanding of living mechanisms. It relies primarily on the analysis of biological data. Major technological improvements, especially sequencing technologies, gave rise to an exponential increase of data, laying out new challenges in data analysis and management.In order to analyze this data, bioinformaticians use pipelines, which chain computational tools and processes. However, the reproducibility crisis in scientific research highlights the necessity of making analyses reproducible and reusable by others.Scientific workflow systems have emerged as a solution to make pipelines more structured, understandable, and reproducible. Workflows describe procedures with multiple coordinated steps involving tasks and their data dependencies. These systems assist bioinformaticians in designing and executing workflows, facilitating their sharing and reuse. In bioinformatics, the most popular workflow systems are Galaxy, Snakemake, and Nextflow.However, the reuse of workflows faces challenges, including the heterogeneity of workflow systems, limited accessibility to workflows, and the need for public workflow databases. Additionally, indexing and developing workflow search engines are necessary to facilitate workflow discovery and reuse.In this study, we developed an analysis method for workflow specifications to extract several representative characteristics from a dataset of workflows. The goal was to propose a standardized representation framework independent of the specification language. Additionally, we selected a set of workflow characteristics and indexed them into a relational database and a structured semantic format. Finally, we established an approach to detect similarity between workflows and between processors, enabling us to observe the reuse practices adopted by workflow developers
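Detecting reuse between workflows, as in the last part of the thesis, starts from some similarity measure over the tools (processors) they invoke. The snippet below shows one common, simple choice, Jaccard similarity over tool-name sets, purely as an illustration; it is not the specific method developed in the thesis.

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two sets of tool names (1.0 = identical tool sets)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Toy workflows described by the tools they chain together.
workflow_a = {"fastqc", "bwa-mem", "samtools", "gatk"}
workflow_b = {"fastqc", "bowtie2", "samtools", "gatk"}

score = jaccard(workflow_a, workflow_b)
print(f"tool-set similarity: {score:.2f}")   # 0.60 here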

Book chapters on the topic "Scientific workflow and FAIR protocols"

1

Li, Yin, Yuyin Ma, and Ziyang Zeng. "A Novel Approach to Location-Aware Scheduling of Workflows Over Edge Computing Resources". In Research Anthology on Edge Computing Protocols, Applications, and Integration, 340–53. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5700-9.ch016.

Full text
Abstract
Edge computing is pushing the frontier of computing applications, data, and services away from centralized nodes to the logical extremes of a network. A major technological challenge for workflow scheduling in the edge computing environment is cost reduction with service-level-agreement (SLA) constraints in terms of performance and quality-of-service requirements, because real-world workflow applications are constantly subject to negative impacts (e.g., network congestions, unexpected long message delays, or shrinking coverage range of edge servers due to battery depletion). To address the above concern, we propose a novel approach to location-aware and proximity-constrained multi-workflow scheduling with edge computing resources. The proposed approach is capable of minimizing monetary costs while meeting user-required workflow completion deadlines. It employs an evolutionary algorithm (i.e., the discrete firefly algorithm) for the generation of near-optimal scheduling decisions. For validation purposes, the authors show that the proposed approach outperforms traditional peers in terms of multiple metrics, based on a real-world dataset of edge resource locations and multiple well-known scientific workflow templates.
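The core trade-off in the chapter, minimising monetary cost while respecting deadlines and proximity to edge servers, can be made concrete with a small feasibility-and-cost check. The sketch below uses a plain greedy selection rather than the discrete firefly algorithm employed in the chapter, and all locations, speeds, and prices are invented.

import math

def distance_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Rough planar distance in km for nearby points (illustrative, not geodesic)."""
    return math.hypot(a[0] - b[0], a[1] - b[1]) * 111.0

def pick_edge_server(task, servers, max_km=20.0, deadline_s=60.0):
    """Greedy stand-in: cheapest feasible server within range that meets the deadline."""
    feasible = [
        s for s in servers
        if distance_km(task["location"], s["location"]) <= max_km
        and task["work_units"] / s["speed"] <= deadline_s
    ]
    return min(feasible, key=lambda s: task["work_units"] / s["speed"] * s["price"], default=None)

task = {"location": (45.00, 7.60), "work_units": 500.0}
servers = [
    {"name": "edge-a", "location": (45.05, 7.65), "speed": 20.0, "price": 0.002},
    {"name": "edge-b", "location": (45.90, 8.60), "speed": 50.0, "price": 0.001},  # out of range
]
print(pick_edge_server(task, servers))  # edge-a: the only server within 20 km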
2

Ferreira da Silva, Rafael, Tristan Glatard, and Frédéric Desprez. "Self-Management of Operational Issues for Grid Computing". In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 187–221. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8213-9.ch006.

Full text
Abstract
Science gateways, such as the Virtual Imaging Platform (VIP), enable transparent access to distributed computing and storage resources for scientific computations. However, their large scale and the number of middleware systems involved in these gateways lead to many errors and faults. This chapter addresses the autonomic management of workflow executions on science gateways in an online and non-clairvoyant environment, where the platform workload, task costs, and resource characteristics are unknown and not stationary. The chapter describes a general self-management process based on the MAPE-K loop (Monitoring, Analysis, Planning, Execution, and Knowledge) to cope with operational incidents of workflow executions. Then, this process is applied to handle late task executions, task granularities, and unfairness among workflow executions. Experimental results show how the approach achieves a fair quality of service by using control loops that constantly perform online monitoring, analysis, and execution of a set of curative actions.
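The MAPE-K loop named here has a very regular skeleton: monitor, analyse, plan, and execute around a shared knowledge store. The following sketch shows that skeleton for one of the incidents mentioned, late task executions; the threshold and the curative action are placeholders rather than the rules used by the VIP platform.

import random
import time

knowledge = {"late_threshold_s": 300, "resubmissions": 0}   # the shared "K" in MAPE-K

def monitor() -> list:
    """Collect task waiting times; random here, a real gateway would query its middleware."""
    return [random.uniform(0, 600) for _ in range(10)]

def analyse(waiting_times):
    """Flag tasks whose waiting time exceeds the knowledge-base threshold."""
    return [w for w in waiting_times if w > knowledge["late_threshold_s"]]

def plan(late_tasks):
    return "resubmit" if late_tasks else "no-op"

def execute(action, late_tasks):
    if action == "resubmit":
        knowledge["resubmissions"] += len(late_tasks)   # placeholder curative action

for _ in range(3):   # a few control-loop iterations instead of a long-running service
    late = analyse(monitor())
    execute(plan(late), late)
    time.sleep(0.1)

print("tasks resubmitted:", knowledge["resubmissions"])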
3

Martínez-García, Alicia, Giorgio Cangioli, Catherine Chronaki, Matthias Löbe, Oya Beyan, Anthony Juehne, and Carlos Luis Parra-Calderón. "FAIRness for FHIR: Towards Making Health Datasets FAIR Using HL7 FHIR". In MEDINFO 2021: One World, One Health – Global Partnership for Digital Innovation. IOS Press, 2022. http://dx.doi.org/10.3233/shti220024.

Full text
Abstract
Medical data science aims to facilitate knowledge discovery assisting in data, algorithms, and results analysis. The FAIR principles aim to guide scientific data management and stewardship, and are relevant to all digital health ecosystem stakeholders. The FAIR4Health project aims to facilitate and encourage the health research community to reuse datasets derived from publicly funded research initiatives using the FAIR principles. The ‘FAIRness for FHIR’ project aims to provide guidance on how HL7 FHIR could be utilized as a common data model to support the health datasets FAIRification process. This first expected result is an HL7 FHIR Implementation Guide (IG) called FHIR4FAIR, covering how FHIR can be used to cover FAIRification in different scenarios. This IG aims to provide practical underpinnings for the FAIR4Health FAIRification workflow as a domain-specific extension of the GoFAIR process, while simplifying curation, advancing interoperability, and providing insights into a roadmap for health datasets FAIR certification.

Conference proceedings on the topic "Scientific workflow and FAIR protocols"

1

Menager, H., and Z. Lacroix. "A Workflow Engine for the Execution of Scientific Protocols". In 22nd International Conference on Data Engineering Workshops (ICDEW'06). IEEE, 2006. http://dx.doi.org/10.1109/icdew.2006.24.

Full text
2

Koderi, Harikrishnan, Vladimirs Šatrevičs, and Irina Voronova. "IDENTIFICATION OF HUMAN FACTORS AND USER EXPERIENCE IN A REMOTE ENVIRONMENT". In 12th International Scientific Conference „Business and Management 2022“. Vilnius Gediminas Technical University, 2022. http://dx.doi.org/10.3846/bm.2022.737.

Full text
Abstract
The pandemic had caused a worldwide disruption introducing new and extraordinary challenges. Social distancing and new protocols ensuring safety for people derived new models of work environments. Moreover, when we deal with our physical health, introducing new ways to interact and work in this new remote covid workflow it is also essential to take care of our mental health. Globally, due to the new adjusted routines in all aspects had opened a new remote world. The research identified human factors and user experience influencing the remote environments, there is a significant negative relation between stress and user experience while working in a remote environment. High stress levels result in poor user experience. Moreover, the findings also reveal us Human interface in a remote set up is bringing the most dissatisfaction and contributes to stress in a human-machine level. Furthermore, the different aspects of stress were also categorised and identified in the study.
3

Bakota, Boris. "EUROPEAN COURT OF HUMAN RIGHTS AND THE EUROPEAN GREEN DEAL". In International Scientific Conference “Digitalization and Green Transformation of the EU“. Faculty of Law, Josip Juraj Strossmayer University of Osijek, 2023. http://dx.doi.org/10.25234/eclic/27448.

Full text
Abstract
The European Green Deal aims to make Europe the first climate-neutral continent by 2050 and maps a new and inclusive growth strategy to boost the economy, improve people’s health and quality of life, care for nature, etc. EU Farm to Fork Strategy for fair, healthy and environmentally- friendly food system, among others, asks for „moving to a more plant-based diet“. Plant-based diet is a diet consisting mostly or entirely of plant-based foods. Plant-based diet does not exclude meat or dietary products totally, but the emphasis should be on plants. Vegetarianism is the practice of abstaining from the meat consumption. Vegetarians consume eggs dairy products and honey. Veganism is the practice of abstaining from the use of animal product in diet and an associated philosophy that rejects the commodity status of animals. Article 9 of European Convention for the Protection of Human Rights and Fundamental Freedoms and article 10 of the Charter of Fundamental Rights of the European Union almost use the same text enshrining Freedom of thought, conscience and religion. To ensure the observance and engagements in the Convention and the Protocols, Council of Europe set up European Court of Human Rights. All European Union Member States are parties to the European Convention for the Protection of Human Rights and Fundamental Freedoms. European Court of Human Rights had many cases dealing with above-mentioned article 9. This paper will focus on Court’s cases dealing with veganism, vegetarianism and plant-based diet. It will investigate obligations, which arise from European Convention for the Protection of Human Rights and Fundamental Freedoms to public administration institutions, namely hospitals, prisons, army, school and university canteens, etc. The paper will explore the practice of several European countries and Croatia. The results will show if veganism, vegetarianism and EU promoted plant-based diet are equally protected under European Convention or there are differences, and what differences if there are any.
4

van Mastrigt, Pieter, and Michael J. Quinn. "Reducing Uncertainties to Shape the Future of Exploration". In International Petroleum Technology Conference. IPTC, 2021. http://dx.doi.org/10.2523/iptc-21339-ms.

Full text
Abstract
Abstract For any given Exploration oil and gas portfolio and associated opportunities, successful business decisions can only be made on the basis of technically robust estimates of the subsurface risk versus the resource potential and estimate of the associated upside(s). Ideally, these estimates should incorporate the entire spectrum of opportunities for the complete portfolio and they should be made in a consistent and comparable way. However, as Explorers, we are often faced with data that is incomplete, limited, of variable quality, and/or inconsistent. As a result, subsurface evaluation may be seen more as educated guessing rather than a robust science of evaluation grounded in facts and defensible logic. In their January 2020 paper "Randomness, serendipity, and luck in petroleum exploration" authors Milkov and Navidi (Ref 1.) go quite a bit further and demonstrate that luck is a significant factor in the exploration success equation. They also showed a general lack in long-term consistency in exploration results of individual companies. Indeed, after we looked at PETRONAS’ own historical POSg versus actual technical success rates and observed only a fair to poor relation between actual technical success rate and the pre-drill POSg estimate. A similar—albeit less worrisome—observation was made for volume ranges and fluid phase predictions. Clearly, there is, and always has been, a phenomenal challenge for the Petroleum Geoscientists to provide the sought after estimates as accurately as possible, and we were no exception. In a bid to improve on this PETRONAS set out on a more disciplined approach to characterizing subsurface uncertainties on its conventional exploration efforts. Over time, more accurate characterizations of such exploration risk and resource potential have followed from a series of procedural guidelines, enhanced capability training program and careful governance of exploration workflows. With improved capabilities and workflow consistency, our evaluation teams have delivered substantially better subsurface evaluations. This in turn has led to more confident decision-making on e.g., individual drilling decisions and new play entries. While our newly implemented workflows are not groundbreaking in isolation, in combination they have delivered notable success and an improved ability to shape the future growth for PETRONAS. In this article, we will highlight the main contributing changes and demonstrate that the overall improvements are indeed impactful. We are not directly challenging the article of Milkov et. al., but are convinced that professionalism and scientific discipline is the deciding factor in the Exploration success equation.

Reports on the topic "Scientific workflow and FAIR protocols"

1

Kopte, Robert. OSADCP Toolbox. GEOMAR, 2024. http://dx.doi.org/10.3289/sw_2_2024.

Full text
Abstract
Vessel-mounted Acoustic Doppler Current Profilers (ADCPs) provide velocity profiles of the upper ocean along the ship track. They are a key tool in oceanographic research to study the oceanic circulation and the associated distribution of mass, heat, contaminants and other tracers. In order to obtain high-quality ocean current data from vessel-mounted ADCP measurements, a number of requirements must be met, from system installation and data acquisition measures to certain essential processing steps. Here, we collect key points on ADCP data acquisition in general and on the characteristics and requirements of vessel-mounted deployments. We summarize general post-processing guidelines and present an open-source Python toolbox called OSADCP for scientists to convert, clean, calibrate and organize binary raw vessel-mounted ADCP data for scientific use. The toolbox is designed to process ADCP measurements in deep water by Teledyne RDI Ocean Surveyor ADCPs and the data acquisition software VMDAS. An extended version of OSADCP is continuously developed as part of a data management project for the German oceanographic research fleet. The corresponding workflow was designed to ensure a standardized and reliable ADCP data transfer from the sensor to the repository. It is described here as one example for scientific data management that follows FAIR data guidelines.
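A typical post-processing step of the kind such a toolbox implements is masking velocity bins that fail quality thresholds, for example on percent-good or error velocity. The sketch below illustrates that idea with NumPy on synthetic numbers; the variable names and threshold values are illustrative and not taken from OSADCP.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble x depth-bin arrays standing in for real ADCP output.
u = rng.normal(0.0, 0.3, size=(5, 8))              # eastward velocity, m/s
percent_good = rng.uniform(40, 100, size=(5, 8))   # beam quality metric, %
error_velocity = rng.normal(0.0, 0.05, size=(5, 8))

def mask_bad_bins(vel, pg, ev, pg_min=80.0, ev_max=0.1):
    """Set velocity to NaN where quality thresholds are not met (illustrative values)."""
    bad = (pg < pg_min) | (np.abs(ev) > ev_max)
    cleaned = vel.copy()
    cleaned[bad] = np.nan
    return cleaned

u_clean = mask_bad_bins(u, percent_good, error_velocity)
print(f"flagged {np.isnan(u_clean).sum()} of {u_clean.size} bins")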
