To view the other types of publications on this topic, follow the link: Versioning tools.

Journal articles on the topic "Versioning tools"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 30 journal articles for your research on the topic "Versioning tools".

Next to each work in the bibliography there is an "Add to bibliography" option. Use it and your reference to the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are present in its metadata.

Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.

1

Brahmia, Zouhaier, Fabio Grandi, Barbara Oliboni, and Rafik Bouaziz. "Schema Change Operations for Full Support of Schema Versioning in the τXSchema Framework". International Journal of Information Technology and Web Engineering 9, no. 2 (April 2014): 20–46. http://dx.doi.org/10.4018/ijitwe.2014040102.

Annotation:
tXSchema (Currim et al., 2004) is a framework (a language and a suite of tools) for the creation and validation of time-varying XML documents. A tXSchema schema is composed of a conventional XML Schema annotated with physical and logical annotations. All components of a tXSchema schema can evolve over time to reflect changes in the real world. Since many applications need to keep track of both data and schema evolution, schema versioning has long been advocated as the best solution to do this. In this paper, we complete the tXSchema framework, which was designed from the outset to support schema versioning, with the definition of the operations that are necessary to exploit this feature and make schema versioning functionalities available to end users. Moreover, we propose a new technique for schema versioning in tXSchema, allowing a complete and safe management of schema changes. It supports both versioning of the conventional schema and versioning of annotations, in an integrated manner. For each component of a tXSchema schema, our approach provides a complete and sound set of change primitives and a set of high-level change operations for the maintenance of that component, and defines their operational semantics.
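The change operations described above are specific to the τXSchema framework. As a generic illustration of the underlying idea (a change primitive produces a new, timestamped schema version instead of overwriting the current one), the following Python sketch applies a hypothetical "add element declaration" primitive to a plain XML Schema; the store and the primitive are invented and are not the τXSchema API:

    # Illustrative sketch only: a schema-version store in which a change primitive
    # yields a new version instead of overwriting the current one (invented API).
    import copy
    import datetime
    import xml.etree.ElementTree as ET

    XS = "http://www.w3.org/2001/XMLSchema"
    ET.register_namespace("xs", XS)

    class SchemaVersionStore:
        def __init__(self, initial_schema: str):
            self.versions = [(datetime.datetime.now(datetime.timezone.utc),
                              ET.fromstring(initial_schema))]

        def add_element(self, name: str, xsd_type: str):
            """Change primitive: declare a new global element in a new schema version."""
            root = copy.deepcopy(self.versions[-1][1])
            decl = ET.SubElement(root, f"{{{XS}}}element")
            decl.set("name", name)
            decl.set("type", xsd_type)
            self.versions.append((datetime.datetime.now(datetime.timezone.utc), root))

        def current(self) -> str:
            return ET.tostring(self.versions[-1][1], encoding="unicode")

    store = SchemaVersionStore(
        f'<xs:schema xmlns:xs="{XS}"><xs:element name="book" type="xs:string"/></xs:schema>')
    store.add_element("author", "xs:string")   # version 2 is created; version 1 is kept
    print(len(store.versions), store.current())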
2

Eriksson, Helen, and Lars Harrie. "Versioning of 3D City Models for Municipality Applications: Needs, Obstacles and Recommendations". ISPRS International Journal of Geo-Information 10, no. 2 (28 January 2021): 55. http://dx.doi.org/10.3390/ijgi10020055.

Annotation:
The use of 3D city models is changing from visualization to complex use cases where they act as 3D base maps. This requires links to registers and continuous updating of the city models. Still, most models never change or are recreated instead of updated. This study identifies obstacles to version management of 3D city models and proposes recommendations to overcome them, with a main focus on the municipality perspective, foremost in the planning and building processes. As part of this study, we investigate whether national building registers can control the version management of 3D city models. A case study based on investigations of standards, interviews and a review of tools is presented. The study uses an architectural model divided into four layers: data collection, building theme, city model and application. All layers require changes when implementing a new versioning method: the data collection layer requires restructuring of technical solutions and work processes, storage of the national building register requires restructuring, versioning capabilities must be propagated to the city model layer, and tools at the application layer must handle temporal information better. Strong incentives for including versioning in 3D city models are essential, as substantial investment is required to implement versioning in all the layers. Only capabilities required by applications should be implemented, as the complexity grows with the number of versioning functionalities. One outcome of the study is a recommendation to link 3D city models more closely to building registers. This enables more complex use in, e.g., building permits and 3D cadastres, and authorities can fetch required (versioning) information directly from the city model layer.
3

S. Ellouze, A., A. Jmal, and R. Bouaziz. "Service Oriented Tools for Medical Records Management and Versioning". American Journal of Bioinformatics Research 2, no. 4 (9 August 2012): 33–39. http://dx.doi.org/10.5923/j.bioinformatics.20120204.01.

4

Devisetty, Upendra Kumar, Kathleen Kennedy, Paul Sarando, Nirav Merchant, and Eric Lyons. "Bringing your tools to CyVerse Discovery Environment using Docker". F1000Research 5 (21 June 2016): 1442. http://dx.doi.org/10.12688/f1000research.8935.1.

Annotation:
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE but also helps them share their apps with collaborators and release them for public use.
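As a rough sketch of the packaging step this kind of tool integration relies on (not the Discovery Environment's own workflow), the snippet below builds and runs a throwaway Docker image from Python using the standard docker command-line client; the image tag and the packaged script are invented, and a local Docker daemon is assumed:

    # Sketch: package a small analysis script into a Docker image and run it.
    # The image tag and script are placeholders; requires a local Docker daemon.
    import pathlib
    import subprocess
    import tempfile

    DOCKERFILE = "\n".join([
        "FROM python:3.11-slim",
        "COPY analyze.py /usr/local/bin/analyze.py",
        'ENTRYPOINT ["python", "/usr/local/bin/analyze.py"]',
    ]) + "\n"

    with tempfile.TemporaryDirectory() as ctx:
        ctx_path = pathlib.Path(ctx)
        (ctx_path / "Dockerfile").write_text(DOCKERFILE)
        (ctx_path / "analyze.py").write_text('print("tool ran inside the container")\n')
        # Build a versioned image tag, then run it; --rm removes the container afterwards.
        subprocess.run(["docker", "build", "-t", "mytool:1.0.0", ctx], check=True)
        subprocess.run(["docker", "run", "--rm", "mytool:1.0.0"], check=True)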
5

Devisetty, Upendra Kumar, Kathleen Kennedy, Paul Sarando, Nirav Merchant, and Eric Lyons. "Bringing your tools to CyVerse Discovery Environment using Docker". F1000Research 5 (22 November 2016): 1442. http://dx.doi.org/10.12688/f1000research.8935.2.

Annotation:
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse DE, which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE but also helps them share their apps with collaborators and release them for public use.
6

Devisetty, Upendra Kumar, Kathleen Kennedy, Paul Sarando, Nirav Merchant, and Eric Lyons. "Bringing your tools to CyVerse Discovery Environment using Docker". F1000Research 5 (5 December 2016): 1442. http://dx.doi.org/10.12688/f1000research.8935.3.

Annotation:
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse DE, which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE but also helps them share their apps with collaborators and release them for public use.
7

Rashid, Junaid, Waqar Mehmood, and Muhammad Wasif Nisar. "A Survey of Model Comparison Strategies and Techniques in Model Driven Engineering". International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (1 December 2016): 165. http://dx.doi.org/10.11591/ijset.v1i3.4579.

Annotation:
This survey paper presents the current state of model comparison as it applies to Model-Driven Engineering. In Model-Driven Engineering, computing the difference between models is an important and challenging task. Model differencing involves a number of steps, starting with identifying and matching the elements of the models. In this paper we discuss how model matching is accomplished, the strategies and techniques used, and the types of models involved, and we outline future directions. We find that many of the latest model comparison strategies are geared toward enabling metamodel-based and similarity-based matching, and that model versioning is the most dominant application of model comparison. Recently, work on comparison for versioning has begun to decline, giving way to different applications. Finally, the tools vary widely in the amount of user effort needed to perform model comparisons, as some require more effort in exchange for greater generality and expressive power.
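As a toy illustration of the matching step discussed in the survey, the sketch below compares two model versions whose elements carry persistent identifiers. This is static, identifier-based matching, the simplest strategy surveyed, and the element properties are invented; signature- or similarity-based matching would replace the plain key lookup with a heuristic:

    # Toy illustration of identifier-based model matching and differencing.
    # Each model is a mapping from a persistent element id to its properties.
    old_model = {
        "e1": {"kind": "Class", "name": "Order"},
        "e2": {"kind": "Attribute", "name": "total", "type": "int"},
        "e3": {"kind": "Class", "name": "Customer"},
    }
    new_model = {
        "e1": {"kind": "Class", "name": "Order"},
        "e2": {"kind": "Attribute", "name": "total", "type": "float"},  # changed
        "e4": {"kind": "Class", "name": "Invoice"},                     # added
    }

    def diff(old, new):
        added   = {i: new[i] for i in new.keys() - old.keys()}
        removed = {i: old[i] for i in old.keys() - new.keys()}
        changed = {i: (old[i], new[i])
                   for i in old.keys() & new.keys() if old[i] != new[i]}
        return added, removed, changed

    added, removed, changed = diff(old_model, new_model)
    print("added:", added)      # e4
    print("removed:", removed)  # e3
    print("changed:", changed)  # e2: type int -> float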
8

Rilling, Juergen, René Witte, Philipp Schuegerl, and Philippe Charland. "Beyond Information Silos — An Omnipresent Approach to Software Evolution". International Journal of Semantic Computing 02, no. 04 (December 2008): 431–68. http://dx.doi.org/10.1142/s1793351x08000567.

Annotation:
Nowadays, software development and maintenance are highly distributed processes that involve a multitude of supporting tools and resources. Knowledge relevant for a particular software maintenance task is typically dispersed over a wide range of artifacts in different representational formats and at different abstraction levels, resulting in isolated 'information silos'. An increasing number of task-specific software tools aim to support developers, but this often results in additional challenges, as not every project member can be familiar with every tool and its applicability for a given problem. Furthermore, historical knowledge about successfully performed modifications is lost, since only the result is recorded in versioning systems, but not how a developer arrived at the solution. In this research, we introduce conceptual models for the software domain that go beyond existing program and tool models, by including maintenance processes and their constituents. The models are supported by a pro-active, ambient, knowledge-based environment that integrates users, tasks, tools, and resources, as well as processes and history-specific information. Given this ambient environment, we demonstrate how maintainers can be supported with contextual guidance during typical maintenance tasks through the use of ontology queries and reasoning services.
9

Onoma, A. K., H. Suganuma, M. Poonawala, S. Subramanian, W. T. Tsai, and T. Syomura. "An Object-Based Environment (Opusdei) for Software Development and Maintenance". International Journal on Artificial Intelligence Tools 05, no. 04 (December 1996): 447–71. http://dx.doi.org/10.1142/s0218213096000262.

Annotation:
This paper discusses an object-based software development and maintenance environment, Opusdei, built and used for several years at Hitachi Software Engineering (HSK; since 1994, the University of Minnesota has also been involved in the Opusdei project). Industrial software is usually large, has many versions, undergoes frequent changes, and is developed concurrently by multiple programmers. Opusdei was designed to handle various problems inherent in such industrial environments. In Opusdei, all information needed for development is stored using a uniform representation in a central repository, and the various documentation and views of the software artifacts can be generated automatically using the tool repository. Opusdei's innovative capabilities are (1) uniform representation of software artifacts, (2) maintenance of inter-relations and traceability among software artifacts, (3) tool coordination and integration using tool composition scenarios, and (4) automatic documentation and versioning control. Tool coordination and composition have been discussed in the literature as a possible way to make software development environments more intelligent. Opusdei provides a uniform representation of software artifacts and tools, which is an essential first step in addressing the issues of tool coordination and composition. Opusdei has been operational for several years and has been used in many large software development projects. The productivity gains reported for some of these projects using Opusdei ranged from 50% to 90%.
10

Jennings-Antipov, Laura D., and Timothy S. Gardner. "Digital publishing isn't enough: the case for 'blueprints' in scientific communication". Emerging Topics in Life Sciences 2, no. 6 (21 December 2018): 755–58. http://dx.doi.org/10.1042/etls20180165.

Annotation:
Since the time of Newton and Galileo, the tools for capturing and communicating science have remained conceptually unchanged — in essence, they consist of observations on paper (or electronic variants), followed by a ‘letter’ to the community to report your findings. These age-old tools are inadequate for the complexity of today's scientific challenges. If modern software engineering worked like science, programmers would not share open source code; they would take notes on their work and then publish long-form articles about their software. Months or years later, their colleagues would attempt to reproduce the software based on the article. It sounds a bit silly, and yet even this level of prose-based methodological discourse has deteriorated in science communication. Materials and Methods sections of papers are often a vaguely written afterthought, leaving researchers baffled when they try to repeat a published finding. It's time for a fundamental shift in scientific communication and sharing, a shift akin to the advent of computer-aided design and source code versioning. Science needs reusable ‘blueprints’ for experiments replete with the experiment designs, material flows, reaction parameters, data, and analytical procedures. Such an approach could establish the foundations for truly open source science where these scientific blueprints form the digital ‘source code’ for a supply chain of high-quality innovations and discoveries.
11

Sinaci, A. Anil, Francisco J. Núñez-Benjumea, Mert Gencturk, Malte-Levin Jauer, Thomas Deserno, Catherine Chronaki, Giorgio Cangioli et al. "From Raw Data to FAIR Data: The FAIRification Workflow for Health Research". Methods of Information in Medicine 59, S 01 (June 2020): e21-e32. http://dx.doi.org/10.1055/s-0040-1713684.

Annotation:
Abstract Background FAIR (findability, accessibility, interoperability, and reusability) guiding principles seek the reuse of data and other digital research input, output, and objects (algorithms, tools, and workflows that led to that data) making them findable, accessible, interoperable, and reusable. GO FAIR - a bottom-up, stakeholder driven and self-governed initiative - defined a seven-step FAIRification process focusing on data, but also indicating the required work for metadata. This FAIRification process aims at addressing the translation of raw datasets into FAIR datasets in a general way, without considering specific requirements and challenges that may arise when dealing with some particular types of data. Objectives This scientific contribution addresses the architecture design of an open technological solution built upon the FAIRification process proposed by “GO FAIR” which addresses the identified gaps that such process has when dealing with health datasets. Methods A common FAIRification workflow was developed by applying restrictions on existing steps and introducing new steps for specific requirements of health data. These requirements have been elicited after analyzing the FAIRification workflow from different perspectives: technical barriers, ethical implications, and legal framework. This analysis identified gaps when applying the FAIRification process proposed by GO FAIR to health research data management in terms of data curation, validation, deidentification, versioning, and indexing. Results A technological architecture based on the use of Health Level Seven International (HL7) FHIR (fast health care interoperability resources) resources is proposed to support the revised FAIRification workflow. Discussion Research funding agencies all over the world increasingly demand the application of the FAIR guiding principles to health research output. Existing tools do not fully address the identified needs for health data management. Therefore, researchers may benefit in the coming years from a common framework that supports the proposed FAIRification workflow applied to health datasets. Conclusion Routine health care datasets or data resulting from health research can be FAIRified, shared and reused within the health research community following the proposed FAIRification workflow and implementing technical architecture.
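One concrete versioning hook available when health data are exchanged as HL7 FHIR resources is the resource-level meta element: servers assign a new meta.versionId on every update, and a specific historical version can be retrieved through the history ("vread") endpoint. A minimal sketch in plain Python, with an invented server URL and resource content and no FHIR client library:

    # Sketch: a FHIR resource carries version metadata in its "meta" element.
    # Server URL and identifiers are invented; real servers assign versionId themselves.
    import json

    observation = {
        "resourceType": "Observation",
        "id": "obs-123",
        "meta": {
            "versionId": "2",                       # bumped by the server on each update
            "lastUpdated": "2020-06-01T12:00:00Z",
        },
        "status": "final",
        "code": {"text": "body weight"},
        "valueQuantity": {"value": 71.5, "unit": "kg"},
    }

    # A version-specific ("vread") request addresses one historical version:
    base = "https://fhir.example.org/baseR4"
    vread_url = f"{base}/Observation/{observation['id']}/_history/{observation['meta']['versionId']}"
    print(vread_url)
    print(json.dumps(observation, indent=2))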
12

Lassoued, Yosra, Lotfi Bouzguenda, and Tariq Mahmoud. "Context-Aware Business Process Versions Management". International Journal of e-Collaboration 12, no. 3 (July 2016): 7–33. http://dx.doi.org/10.4018/ijec.2016070102.

Annotation:
This work deals with a very active and promising research area, namely Business Process (BP) flexibility. One possible way to deal with this flexibility is the conjoint use of versioning and contextualization techniques. Versioning permits BP evolution by supporting the alternative use of BP versions. Contextualization ensures the definition of use conditions for BP versions to help the designer choose a version among several. In a previous work, BP flexibility was addressed using only the versioning technique, considering the informational, organizational and process perspectives. In this work, the authors show how they use versioning and contextualization techniques conjointly to address BP flexibility. More precisely, they propose an extension of the VBP2M meta-model (Versioned Business Process Meta Model), introduced in their previous work, adding a contextual perspective that offers two levels of context granularity (local and global). This perspective is illustrated using a well-known case study, the automatic production process of a mineral water bottle. An extension of the VBPQL language (Versioned Business Process Query Language) is also introduced to allow the definition, manipulation and querying of BP versions' contexts. Furthermore, the authors propose an ontology-based method to select the appropriate BP version. Finally, they present their tool, which implements the proposed solution.
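The contextual perspective amounts to attaching use conditions to each process version and selecting the version whose conditions fit the current situation. A schematic Python sketch of that selection step follows; the context attributes are invented, and this is not VBP2M or VBPQL syntax:

    # Schematic version selection by context match (invented attributes, not VBPQL).
    versions = [
        {"version": "v1", "context": {"season": "winter", "demand": "low"}},
        {"version": "v2", "context": {"season": "summer", "demand": "high"}},
        {"version": "v3", "context": {"season": "summer", "demand": "low"}},
    ]

    def select_version(versions, current_context):
        """Return the version whose use conditions best match the current context."""
        def score(v):
            return sum(current_context.get(k) == val for k, val in v["context"].items())
        return max(versions, key=score)

    print(select_version(versions, {"season": "summer", "demand": "high"})["version"])  # v2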
13

Bellini, Pierfrancesco, Ivan Bruno, Paolo Nesi, and Nadia Rauch. "Graph databases methodology and tool supporting index/store versioning". Journal of Visual Languages & Computing 31 (December 2015): 222–29. http://dx.doi.org/10.1016/j.jvlc.2015.10.018.

14

Özkan, Deniz, and Alok Mishra. "Agile Project Management Tools: A Brief Comprative View". Cybernetics and Information Technologies 19, no. 4 (1 November 2019): 17–25. http://dx.doi.org/10.2478/cait-2019-0033.

Annotation:
Abstract: Agile methodologies are becoming popular in software development. Managers are required to understand a project's progress and product quality without development documents, and Agile project management tools are frequently used in the Agile practices of teams and organizations. The use of such tools brings speed and efficiency and affects the quality of the software; the quality of the final product is closely related to project management. Accordingly, the paper provides a brief comparative perspective on popular project management tools for Agile projects. Sixteen popular Agile project management tools are presented, helping Agile developers to plan and manage their tasks in an efficient manner. Taiga, Axosoft, Agielan and Planbox are more appropriate for start-up projects. The most tweeted and most appreciated tools are reported to be Jira, Trello, and VersionOne. SpiraTeam by Inflectra and Pivotal Tracker are other popular commercial Agile tools, providing flexibility to Agile developers and increasing collaboration among team members.
15

Jaziri, Wassim, Najla Sassi, and Dhouha Damak. "Using Temporal Versioning and Integrity Constraints for Updating Geographic Databases and Maintaining Their Consistency". Journal of Database Management 26, no. 1 (January 2015): 30–59. http://dx.doi.org/10.4018/jdm.2015010102.

Annotation:
The use of geographic data has become widespread, mainly within applications related to spatial planning and spatial decision-making. Changing environments require databases that can adapt to changes occurring over time, so supporting the evolution of geographic information is essential. This evolution is expressed in the geographic database as a series of update operations that should maintain its consistency. This paper proposes an approach for updating geographic databases based on update operators and integrity-constraint checking algorithms. Temporal versioning is used to keep track of changes: every version represents the state of the geographic database at a given time, and the integrity-checking algorithms maintain the consistency of the database when it is updated. To implement the approach and assist users in the evolution process, the GeoVersioning tool is developed and tested on a sample geographic database.
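The core mechanism, in which each update yields a timestamped version and integrity constraints are checked before the new version is accepted, can be sketched in a few lines of Python. The parcel record and the single area constraint below are invented; real geographic constraints (topology, spatial reference systems, and so on) are far richer:

    # Sketch: temporal versioning of a record with an integrity check before commit.
    # The "parcel" schema and area constraint are invented for illustration.
    import copy
    import datetime

    def area_is_positive(record):
        return record["area_m2"] > 0

    CONSTRAINTS = [area_is_positive]

    class VersionedRecord:
        def __init__(self, initial):
            self.versions = [(datetime.datetime.now(datetime.timezone.utc), initial)]

        def update(self, **changes):
            candidate = copy.deepcopy(self.versions[-1][1])
            candidate.update(changes)
            if not all(check(candidate) for check in CONSTRAINTS):
                raise ValueError("update rejected: integrity constraint violated")
            self.versions.append(
                (datetime.datetime.now(datetime.timezone.utc), candidate))

    parcel = VersionedRecord({"parcel_id": "P-7", "area_m2": 420.0, "land_use": "residential"})
    parcel.update(land_use="commercial")          # accepted: a new version is appended
    try:
        parcel.update(area_m2=-1.0)               # rejected: existing versions untouched
    except ValueError as err:
        print(err)
    print(len(parcel.versions))                   # 2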
16

Willighagen, Lars G. "Citation.js: a format-independent, modular bibliography tool for the browser and command line". PeerJ Computer Science 5 (12 August 2019): e214. http://dx.doi.org/10.7717/peerj-cs.214.

Annotation:
Background: Given the vast number of standards and formats for bibliographical data, any program working with bibliographies and citations has to be able to interpret such data. This paper describes the development of Citation.js (https://citation.js.org/), a tool to parse and format according to those standards. The program follows modern guidelines for software in general and JavaScript in particular, such as version control, source code analysis, integration testing and semantic versioning. Results: The result is an extensible tool that has already seen adoption in a variety of settings and use cases: as part of a server-side page generator of a publishing platform, as part of a local extensible document generator, and as part of an in-browser converter of extracted references. Use cases range from transforming a list of DOIs or Wikidata identifiers into a BibTeX file on the command line, to displaying RIS references on a webpage with added Altmetric badges, to generating "How to cite this" sections on a blog. The accuracy of conversions is currently 27% for properties and 60% for types on average, and a typical initialization takes 120 ms in browsers and 1 s with Node.js on the command line. Conclusions: Citation.js is a library supporting various formats of bibliographic information in a broad selection of use cases and environments. Given the support for plugins, more formats can be added with relative ease.
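The paper notes that the project follows semantic versioning (MAJOR.MINOR.PATCH, where a MAJOR bump signals breaking changes). Below is a small, library-agnostic Python sketch of what that convention means when comparing release numbers; it ignores pre-release and build tags and is not Citation.js code:

    # Sketch of semantic-versioning comparison (MAJOR.MINOR.PATCH only; no pre-release tags).
    def parse_semver(version: str) -> tuple[int, int, int]:
        major, minor, patch = (int(part) for part in version.split("."))
        return major, minor, patch

    def is_breaking_upgrade(installed: str, candidate: str) -> bool:
        """A MAJOR bump signals incompatible API changes under semantic versioning."""
        return parse_semver(candidate)[0] > parse_semver(installed)[0]

    print(parse_semver("0.5.7") < parse_semver("0.10.1"))   # True: tuples compare numerically
    print(is_breaking_upgrade("1.4.2", "2.0.0"))            # True
    print(is_breaking_upgrade("1.4.2", "1.5.0"))            # False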
17

Coetzee, Simon G., Zachary Ramjan, Huy Q. Dinh, Benjamin P. Berman, and Dennis J. Hazelett. "StateHub-StatePaintR: rapid and reproducible chromatin state evaluation for custom genome annotation". F1000Research 7 (22 February 2018): 214. http://dx.doi.org/10.12688/f1000research.13535.1.

Annotation:
Genome annotation is critical to understand the function of disease variants, especially for clinical applications. To meet this need there are segmentations available from public consortia reflecting varying unsupervised approaches to functional annotation based on epigenetics data, but there remains a need for transparent, reproducible, and easily interpreted genomic maps of the functional biology of chromatin. We introduce a new methodological framework for defining a combinatorial epigenomic model of chromatin state on a web database, StateHub. In addition, we created an annotation tool for bioconductor, StatePaintR, which accesses these models and uses them to rapidly (on the order of seconds) produce chromatin state segmentations in standard genome browser formats. Annotations are fully documented with change history and versioning, authorship information, and original source files. StatePaintR calculates ranks for each state from next-gen sequencing peak statistics, facilitating variant prioritization, enrichment testing, and other types of quantitative analysis. StateHub hosts annotation tracks for major public consortia as a resource, and allows users to submit their own alternative models.
18

Coetzee, Simon G., Zachary Ramjan, Huy Q. Dinh, Benjamin P. Berman, and Dennis J. Hazelett. "StateHub-StatePaintR: rapid and reproducible chromatin state evaluation for custom genome annotation". F1000Research 7 (7 May 2020): 214. http://dx.doi.org/10.12688/f1000research.13535.2.

Annotation:
Genome annotation is critical to understand the function of disease variants, especially for clinical applications. To meet this need there are segmentations available from public consortia reflecting varying unsupervised approaches to functional annotation based on epigenetics data, but there remains a need for transparent, reproducible, and easily interpreted genomic maps of the functional biology of chromatin. We introduce a new methodological framework for defining a combinatorial epigenomic model of chromatin state on a web database, StateHub. In addition, we created an annotation tool for bioconductor, StatePaintR, which accesses these models and uses them to rapidly (on the order of seconds) produce chromatin state segmentations in standard genome browser formats. Annotations are fully documented with change history and versioning, authorship information, and original source files. StatePaintR calculates ranks for each state from next-gen sequencing peak statistics, facilitating variant prioritization, enrichment testing, and other types of quantitative analysis. StateHub hosts annotation tracks for major public consortia as a resource, and allows users to submit their own alternative models.
19

Pistofidis, Petros, Christos Emmanouilidis, Aggelos Papadopoulos, and Pantelis N. Botsaris. "Management of linked knowledge in industrial maintenance". Industrial Management & Data Systems 116, no. 8 (12 September 2016): 1741–58. http://dx.doi.org/10.1108/imds-10-2015-0409.

Annotation:
Purpose Field expertise in industry is often poorly recorded and unexploited. The purpose of this paper is to introduce a methodology and tool that incorporates a knowledge validation loop to leverage upon human-contributed field observations in industrial maintenance management. Starting from a failure mode, effects and criticality analysis (FMECA) model, it defines a collaborative process that links FMECA knowledge with field maintenance practice. Design/methodology/approach A metadata management system is designed to encourage staff involvement in enriching knowledge with field observations. The process supports easy feedback and collaborative annotation and is pilot tested via an industrial case study. Findings Streamlining FMECA validation is welcomed by maintenance staff, empowering them to exert more control over the management, usage and versioning of reference knowledge. Research limitations/implications The methodology for metadata management in industrial maintenance enables staff participation in a collaborative knowledge enrichment process. Metadata management is a pre-cursor and therefore an important step to drive future analytics. Practical implications Industry personnel are more inclined to contribute to organisational knowledge if the process is based on reference knowledge and requires minimal interaction. Social implications Facilitating individual contribution to collective knowledge strengthens the sense that each staff member can have organisational impact. Originality/value The paper introduces a methodology and tool to stimulate human-contributed knowledge in industrial maintenance, strengthening collaborative organisation knowledge flows.
20

Bertrand, D., D. De Cock, V. Stouten, S. Pazmino, A. Moeyersoons, J. Joly, R. Westhovens, and P. Verschueren. "SAT0028 The FLARE-RA questionnaire can predict flares in patients with established rheumatoid arthritis participating in the TapERA trial". Annals of the Rheumatic Diseases 79, Suppl 1 (June 2020): 944.3–944. http://dx.doi.org/10.1136/annrheumdis-2020-eular.3644.

Annotation:
Background: The Flare assessment in Rheumatoid Arthritis (FLARE-RA) questionnaire was developed to identify Rheumatoid Arthritis (RA) flares, but it is unknown if this questionnaire can also predict flares.
Objectives: To identify whether the FLARE-RA questionnaire has a predictive capacity for OMERACT flares in patients with established RA participating in a tapering trial.
Methods: Patients participating in the 1-year open-label pragmatic randomised controlled TapERA (Tapering Etanercept in RA) trial were included in the analysis. Patients had to be in DAS28CRP or ESR remission (≥6 months) and treated with etanercept 50 mg weekly (≥1 year). Participants were randomised to continue etanercept 50 mg weekly or to taper to 50 mg every other week. The FLARE-RA questionnaire was completed every 3 months in the trial. This questionnaire consists of 13 questions on a Likert scale from 1 to 6, reflecting ‘completely untrue’ to ‘completely true’. Validation by Fautrel et al. led to the elimination of 2 questions (‘steroid intake’ and ‘overall worsening of RA’) and rescaling to 0-10. Our outcomes were based on these 3 versions of the questionnaire, namely 13 questions (13q), 11 questions (11q) and rescaled 11 questions (r11q). The FLARE-RA questionnaire can be divided into 2 subscales: the FLARE-RA arthritis subscale (questions regarding morning stiffness, night disturbances, joint swelling, joint pain, analgesics) and the FLARE-RA general symptoms subscale (questions regarding fatigue, functional limitation, irritability, mood disturbances, withdrawal, need for help). The total FLARE-RA score was calculated by taking the average of all the questions per time point. A flare was defined according to the OMERACT definition, namely an increase in DAS28CRP > 1.2 compared to baseline, or an increase in DAS28CRP > 0.6 and current DAS28CRP ≥ 3.2. All the total FLARE-RA scores of the baseline, month 3, 6 and 9 visits were grouped, and the mean ± standard deviation (SD) FLARE-RA score was compared between patients with or without an OMERACT flare on the next study visit using the Mann-Whitney U test. Logistic regressions using the total FLARE-RA score to predict an OMERACT flare 3 months later were carried out for the 13q, 11q and r11q versions and the FLARE-RA subscales. Missing data were imputed using expectation maximisation.
Results: Sixty-six patients (68% female, mean ± SD age of 55 ± 11 years) completed the FLARE-RA questionnaire. This yielded 264 FLARE-RA scores, of which the total mean ± SD FLARE-RA score was 2.1 ± 1.0 and 2.7 ± 1.1 for patients without and with an OMERACT flare on the next study visit, respectively (p<0.01). This was comparable for the 11q and r11q versions (Table 1). For the total FLARE-RA score (13q), the odds ratio of having an OMERACT flare 3 months later was 1.6 (95% confidence interval (CI) 1.2–2.2, p=0.004). This was 1.5 (95% CI 1.1–2.1, p=0.006) for the 11q and 1.2 (95% CI 1.1–1.4, p=0.006) for the r11q version. The odds ratio of having an OMERACT flare on the next visit was 1.5 (95% CI 1.2–2.0, p=0.002) and 1.4 (95% CI 1.0–2.0, p=0.025) for the arthritis and general symptoms subscales, respectively.
Table 1. Comparison of overall total FLARE-RA scores (mean ± SD) between patients with or without an OMERACT flare on the next visit:
13q: no flare 2.1 ± 1.0, flare 2.7 ± 1.1, p=0.002
11q: no flare 2.2 ± 1.1, flare 2.7 ± 1.1, p=0.004
r11q: no flare 2.3 ± 2.1, flare 3.4 ± 2.2, p=0.004
The overall total FLARE-RA score was derived by grouping the total FLARE-RA scores of the baseline, month 3, 6 and 9 visits.
Conclusion: Higher total FLARE-RA questionnaire scores seem to indicate a higher risk of an OMERACT flare 3 months later, regardless of which versions or subscales of the FLARE-RA questionnaire were used. Hence, our findings suggest that the FLARE-RA questionnaire could be used as a predictive tool for flares.
Disclosure of Interests: Delphine Bertrand: None declared. Diederik De Cock: None declared. Veerle Stouten: None declared. Sofia Pazmino: None declared. Anneleen Moeyersoons: None declared. Johan Joly: None declared. Rene Westhovens: Grant/research support from Celltrion Inc, Galapagos, Gilead; Consultant of Celltrion Inc, Galapagos, Gilead; Speakers bureau: Celltrion Inc, Galapagos, Gilead. Patrick Verschueren: Grant/research support from Pfizer (unrestricted chair of early RA research); Speakers bureau: various companies.
21

Rocha Prado, Laura. "A Set of Simple Tools For Assembling, Annotating, Versioning and Publishing Taxonomies". Biodiversity Information Science and Standards 5 (16 September 2021). http://dx.doi.org/10.3897/biss.5.75344.

Annotation:
Biodiversity data publishers rely on virtually assembled taxonomic hierarchies to structure their data, with operational units involving scientific names, nomenclatural acts and taxonomic trees. The main goal for the majority of biodiversity aggregators, databases, and software developed specifically for managing scientific names, biological samples and other occurrences has been to establish a single, unified biological classification, to serve as their structural "taxonomic backbone." Resources to produce and publish biological classifications digitally are thus typically restricted to those generating unified taxonomic backbones, leaving individual researchers and decentralized communities with few options to assemble, visualize, version and disseminate multiple taxonomies online. To aid the creation of a culture of assembling, annotating, versioning, and publishing taxonomies online, and to help users interested in taxonomic classifications that lack digital communities, the development of a set of modular and independent tools is proposed, based on two complementary components: a web application to serve as the taxonomy curator (referred to as the Curator), and a web application to serve as the optional taxonomic database and information provider (referred to as the Aggregator). These tools are being designed and built following modern software development standards, in a modular architecture consisting of front-end clients, databases, and back-end applications, with the provision for a public Application Programming Interface (API) that will make data available for any interested parties and can be potentially integrated into large-scale projects like the Global Biodiversity Information Facility (GBIF), Integrated Digitized Biocollections (iDigBio), Symbiota (Gries et al. 2014), and Plazi (Agosti and Egloff 2009).
Curator tool: The Curator tool will be a publicly accessible front-end web application with which users can assemble, curate, and export taxonomies. The primary focus is to support user-preferred taxonomy generation, with manual inputs and optional annotations of the resulting product. Users can pick between three modes of taxonomy assembly: manual mode with assisted taxon search, automated generation from an online source, and automated generation from a file upload. Taxonomies can be edited and annotated as necessary. Once a user is satisfied with their taxonomy, they can save it in one or all of the available formats for exporting and external usage (common formats include, among others, JSON (JavaScript Object Notation), CSV (comma-separated values), and XML). Logged-in users can also opt to save the taxonomy in the Aggregator database, which will make the taxonomy publicly available. Ideally, all fields in the Curator forms should correspond to terms included in the Darwin Core standard (Wieczorek et al. 2012) or Plazi's TaxonX schema (Agosti and Egloff 2009) (for hierarchies available in published treatments).
Aggregator tool: The Aggregator tool will communicate with the database and will provide users with a number of functionalities, such as: storing and publishing versioned taxonomies generated with the Curator; API endpoints for automation (JSON/XML formats, CSV download); optional unique identifier/DOI generation for published taxonomies; and a search engine with a user-friendly interface as well as an API endpoint for querying the database. The possibility of making taxonomies available as an API endpoint, as well as exporting taxonomies in different formats, will ensure that this tool behaves as a taxonomic source that can be used by virtually any interested party or application. The tools are being modelled as a decentralized community resource that can be used for any or all taxonomic groups and, as such, their scale and impact will be driven by bottom-up community use. The goal is not to provide extensive coverage of all biological organisms, but rather to provide an open digital toolkit and space for biodiversity researchers and projects that lack access to open, structured, online taxonomic publication venues and dedicated tools. Practical examples of usage for these tools include: a user generates multiple taxonomic concepts for organisms they are studying, which can then be queried and analyzed by scripts that make taxonomic alignments to compare different scientific hypotheses throughout time; an institution wants to publish a regional Symbiota portal to manage specimens in a particular collection, so they establish an annotated working taxonomic backbone with the Curator that Symbiota will then be able to ingest before samples can be imported into the portal; and a researcher wants to export a biodiversity portal taxonomy at a given moment and wants to annotate and publish this version in an upcoming paper to establish scientific baselines for proper taxonomic communication.
22

"A Prologue of Git and SVN". International Journal of Engineering and Advanced Technology 9, no. 1 (30 October 2019): 988–90. http://dx.doi.org/10.35940/ijeat.a9451.109119.

Annotation:
Version control (or revision control) software is among the most important tools in software development. In this paper, we describe two version control tools: Git and Apache Subversion. Git is a free and open-source code management and version control system distributed under the GNU General Public License. Apache Subversion, abbreviated as SVN, is a software versioning and revision control system released as open source under the Apache License. The design, functionality, and usage of Git and SVN are discussed in this paper. The goal of this research paper is to highlight the Git and SVN tools and to evaluate and compare five version control tools to ascertain their usage and efficacy.
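For readers new to the tools being compared, the basic Git workflow discussed in the paper (initialise a repository, record snapshots as commits, inspect the history) can be driven from Python through the real git command line. A minimal sketch, with placeholder file name, identity and messages:

    # Sketch: a minimal Git workflow (init, add, commit, log) via the git CLI.
    import pathlib
    import subprocess
    import tempfile

    def git(*args, cwd):
        subprocess.run(["git", *args], cwd=cwd, check=True)

    with tempfile.TemporaryDirectory() as repo:
        git("init", cwd=repo)
        # A throwaway identity so the commits work on a machine without git config.
        git("config", "user.email", "editor@example.org", cwd=repo)
        git("config", "user.name", "Example Editor", cwd=repo)
        pathlib.Path(repo, "notes.txt").write_text("first draft\n")
        git("add", "notes.txt", cwd=repo)
        git("commit", "-m", "Add first draft of notes", cwd=repo)
        pathlib.Path(repo, "notes.txt").write_text("second draft\n")
        git("commit", "-am", "Revise notes", cwd=repo)
        git("log", "--oneline", cwd=repo)   # shows the two commits, newest first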
23

Goddard, Lisa. "Developing the Read/Write Library". Scholarly and Research Communication 7, no. 2/3 (8 November 2016). http://dx.doi.org/10.22230/src.2016v7n2/3a255.

Annotation:
Background: This article considers the use of Fedora-based library digital asset management systems (DAMS) as digital humanities (DH) research platforms.Analysis: The features of DAMS are evaluated to identify the ways in which they can currently meet researcher needs, and to suggest areas where further development is necessary.Conclusion and implications: Fedora-based DAMS hold great promise as the basis of digital humanities research platforms. Mature functionality is available for identity management, file and metadata management, versioning, publishing, social media sharing, discovery, interoperability, and long-term preservation. Further development is necessary in order to incorporate annotation, mark-up, and text analysis tools.
24

Brown, Susan, and John Simpson. "The Changing Culture of Humanities Scholarship: Iteration, Recursion, and Versions in Scholarly Collaboration Environments". Scholarly and Research Communication 5, no. 4 (16 December 2014). http://dx.doi.org/10.22230/src.2014v5n4a191.

Annotation:
The non-linear and iterative nature of scholarly research processes presents complexities with respect to how online collaborative systems manage versions both within interfaces and at the back end. This article maps out a two-part framework for thinking about versions and versioning in the context of contemporary scholarship and data preservation. The first presents four notable qualities of digital textuality that are intensified by the digital turn, and the second considers technical considerations flowing from these characteristics. The authors argue that the management of large humanities data sets and the design of associated interfaces, tools, and infrastructure need to recognize and preserve the dynamic, living nature of digital cultural artifacts and of scholarship on culture.
25

Parland-von Essen, Jessica, Katja Fält, Zubair Maalick, Miika Alonen, and Eduardo Gonzalez. "Supporting FAIR data: categorization of research data as a tool in data management". Informaatiotutkimus 37, no. 4 (31 December 2018). http://dx.doi.org/10.23978/inf.77419.

Annotation:
The demand to implement the FAIR data principles is in many cases difficult for a researcher to meet efficiently due to a lack of tools. We suggest categorizing data in a more extensive and systematic way, focusing on the inherent properties of the data, as a means of enhancing research data services. After discussing different approaches to categorizing data, we propose a tripartite research data categorization based around the inherent aspect of stability. The three research data types are operational data, generic research data and research data publications. Generic research data is validated data and can be cumulative, i.e. data can be added without versioning; if it is dynamic, however, it should be versioned. Generic research data should be separated from immutable dataset publications that are published for reasons of reproducibility of specific research results.
26

Gilbert, Edward, Nico Franz, and Beckett Sterner. "Historical Overview of the Development of the Symbiota Specimen Management Software and Review of the Interoperability Challenges and Opportunities Informing Future Development". Biodiversity Information Science and Standards 4 (30 September 2020). http://dx.doi.org/10.3897/biss.4.59077.

Annotation:
Symbiota (Gries et al. 2014) is an open-source software platform designed to function as a biodiversity Content Management System (CMS) for specimen-based datasets. Primarily in North America though also increasingly on other continents, the Symbiota software platform has risen to prominence in the past ten years as one of the more heavily accessed mid-level aggregation tools for assembling, managing, and distributing datasets associated with biological collections. There are more than 50 public Symbiota portals being managed and promoted by various biodiversity projects and communities. Together, these portals assist in the distribution and mobilization of more than 55 million specimen and 20 million image records associated with hundreds of institutions. The central premise of a standard Symbiota installation is to function as a mini-aggregator capable of integrating multiple occurrence datasets that collectively represent a community-based research data perspective. Datasets are typically limited to geographic and taxonomic scopes that best represent the community of researchers leading the project. Symbiota portals often publish "snapshot records" that originate from external management systems but otherwise align with the portal's community of practice and data focus. Specimen management tools integrated into the Symbiota platform also support the ability to manage occurrence data directly within the portal as “live datasets”. The software has become widely adopted as a data management platform. Approximately 550 specimen datasets consisting of more than 14 million specimen records are being directly managed within a portal instance. The appeal of Symbiota as an occurrence management tool is also exemplified by the fact that 18 of the 30 federally funded Thematic Collection Networks (https://www.idigbio.org/content/thematic-collections-networks) have elected to use Symbiota as their central data management system. Symbiota's well-developed data ingestion tools, coupled with the ability to store import profile definitions, allows data snapshots to be partially coordinated with source data managed within a variety of remote systems such as Specify (https://specifysoftware.org), EMu (https://emu.axiell.com), Integrated Publishing Toolkit (IPT, https://gbif.org/ipt) publishers, as well as other Symbiota instances. As with Global Biodiversity Information Facility (GBIF) and Integrated Digitized Biocollections (iDigBio) publishing models, data snapshots are periodically refreshed, based on transfer protocols compliant with Darwin Core (DwC) data exchange standards. The Symbiota data management tools provide the means for the community of experts running the portal to annotate and augment snapshot datasets with the goal of improving the overall fitness-for-use of the aggregated dataset. Even though a data refresh from the source dataset would effectively replace the data improvement with the original flawed data, the system’s ability to maintain data versioning of all annotations made within the portal allows data improvements to be reapplied. However, inadequate support for bi-directional data flow between the portal and the source collection effectively isolates the annotations within the portal. On one hand, the mini-aggregator model of Symbiota can be viewed as compounding the further fragmentation of occurrence data. 
Rather than conforming to the vision of pushing data from the source, to the global aggregators and ultimately the research community, specimen data are being pushed from source collections to a growing array of mini-aggregators. On the other hand, community portals have the ability to incentivize experts and enthusiasts to publish high-quality, "data-intelligent" biodiversity data products with the potential of channeling data improvements back to the source. This presentation will begin with a historical review of the development of the Symbiota model including major shifts in the evolution of the development goals. We will discuss the benefits and shortcomings of the data model and provide a description of schema modifications that are currently in development. We will also discuss the successes and challenges associated with building data commons directly associated with communities of researchers. We will address the software’s role in mobilizing occurrence data within North America and the efficacy of adhering to the FAIR use principles of making datasets findable, accessible, interoperable, and reusable (Wilkinson et al. 2016). Finally, we will discuss interoperability developments that we hope will improve the flow of data annotations between decentralized networks of data portals and the original data providers at the source.
27

Nowotarski, Stephanie H., Erin L. Davies, Sofia M. C. Robb, Eric J. Ross, Nicolas Matentzoglu, Viraj Doddihal, Mol Mir, Melainia McClain, and Alejandro Sánchez Alvarado. "Planarian Anatomy Ontology: a resource to connect data within and across experimental platforms". Development 148, no. 15 (1 August 2021). http://dx.doi.org/10.1242/dev.196097.

Annotation:
ABSTRACT As the planarian research community expands, the need for an interoperable data organization framework for tool building has become increasingly apparent. Such software would streamline data annotation and enhance cross-platform and cross-species searchability. We created the Planarian Anatomy Ontology (PLANA), an extendable relational framework of defined Schmidtea mediterranea (Smed) anatomical terms used in the field. At publication, PLANA contains over 850 terms describing Smed anatomy from subcellular to system levels across all life cycle stages, in intact animals and regenerating body fragments. Terms from other anatomy ontologies were imported into PLANA to promote interoperability and comparative anatomy studies. To demonstrate the utility of PLANA as a tool for data curation, we created resources for planarian embryogenesis, including a staging series and molecular fate-mapping atlas, and the Planarian Anatomy Gene Expression database, which allows retrieval of a variety of published transcript/gene expression data associated with PLANA terms. As an open-source tool built using FAIR (findable, accessible, interoperable, reproducible) principles, our strategy for continued curation and versioning of PLANA also provides a platform for community-led growth and evolution of this resource.
28

Marcer, Arnald, Agustí Escobar, Anna Díaz, and Francesc Uribe. "Towards a Federated List of Versioned Georeferenced Site Names: A local experience". Biodiversity Information Science and Standards 3 (18 June 2019). http://dx.doi.org/10.3897/biss.3.37079.

Annotation:
The Museu de Ciències Naturals de Barcelona (MCNB) holds a collection of ca. 130,000 digitally registered zoological specimens collected around the world and dating from 1852 to the present. Specimens are shared between non-arthropod invertebrates (32% of the collection), arthropod invertebrates (39%) and vertebrates (29%). The museum recognizes georeferencing as a crucial process for mobilising its collections into digitally accessible information and has provisioned resources in an ongoing georeferencing process for more than 10 years. The aim of this poster is to show how a bottom-up model benefits the georeferencing work. Site names as written by collectors in specimen tags need to be converted into spatial coordinates with precision and uncertainty information. In order to guide this process, the research community has provided a set of georeferencing protocols and recommendations which start with the physical tagged specimen and end with a digital record in a public biodiversity database. In addition, having direct knowledge of the territory where the tagged locality lies and access to the most precise local cartography helps to ensure that a high quality georeferenced digital record can be created. Many localities described in specimen tags carry place names which cannot be found or correctly derived from small scale maps or digital map resources such as Google Maps. Often, it is necessary to have access to more locally detailed cartography and, sometimes, historical cartography. Therefore, we recommend that this is added to existing protocols, and that institutions from different geographical areas to pull together their efforts in order to create a federated list of georeferenced site names. This would be a more efficient strategy of generating better quality gazetteers, streamlining efforts and making more efficient use of the collective resources since many tagged site names are the same for different specimens across multiple collections. The georeferencing process is ultimately dependent on the cartography available to the georeferencer at the moment of converting the tagged collection event into a digital record. A georeference record may be subject to future improvements in future georeferencing revisions. Newly available cartography or methods of reporting location and uncertainty may lead to a revision of any given georeferenced record. Thus, any federated databasing effort to list georeferenced site names should include versioning capabilities. In order to address the need to incorporate expert knowledge into georeferecing efforts, alongside versioning site names, the MCNB has developed a software platform tool. This platform, called Georef, is implemented as a web application with storage, querying, editing and visualizing capabilities for both site names and the cartographic resources used in the georeferencing process. Georef is now also used by other institutions from the Western Mediterranean basin with which MCNB shares data and local knowledge. A public API has been developed for accessing the georeferenced sites and to support a crowd-sourced tool which allows the public to comment and propose edits to location data. Ultimately, this tool improves the quality of the georeferencing and research using these data.
29

Nicholson, Nicholas Charles, Francesco Giusti, Manola Bettio, Raquel Negrao Carvalho, Nadya Dimitrova, Tadeusz Dyba, Manuela Flego, Luciana Neamtiu, Giorgia Randi, and Carmen Martos. "An ontology-based approach for developing a harmonised data-validation tool for European cancer registration". Journal of Biomedical Semantics 12, no. 1 (6 January 2021). http://dx.doi.org/10.1186/s13326-020-00233-x.

Annotation:
Abstract Background Population-based cancer registries constitute an important information source in cancer epidemiology. Studies collating and comparing data across regional and national boundaries have proved important for deploying and evaluating effective cancer-control strategies. A critical aspect in correctly comparing cancer indicators across regional and national boundaries lies in ensuring a good and harmonised level of data quality, which is a primary motivator for a centralised collection of pseudonymised data. The recent introduction of the European Union’s general data-protection regulation (GDPR) imposes stricter conditions on the collection, processing, and sharing of personal data. It also considers pseudonymised data as personal data. The new regulation motivates the need to find solutions that allow a continuation of the smooth processes leading to harmonised European cancer-registry data. One element in this regard would be the availability of a data-validation software tool based on a formalised depiction of the harmonised data-validation rules, allowing an eventual devolution of the data-validation process to the local level. Results A semantic data model was derived from the data-validation rules for harmonising cancer-data variables at European level. The data model was encapsulated in an ontology developed using the Web-Ontology Language (OWL) with the data-model entities forming the main OWL classes. The data-validation rules were added as axioms in the ontology. The reasoning function of the resulting ontology demonstrated its ability to trap registry-coding errors and in some instances to be able to correct errors. Conclusions Describing the European cancer-registry core data set in terms of an OWL ontology affords a tool based on a formalised set of axioms for validating a cancer-registry’s data set according to harmonised, supra-national rules. The fact that the data checks are inherently linked to the data model would lead to less maintenance overheads and also allow automatic versioning synchronisation, important for distributed data-quality checking processes.
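The effect of encoding the harmonised validation rules as ontology axioms is that each registry record is checked against machine-readable, shareable cross-field rules. The sketch below expresses two invented rules in plain Python rather than OWL, purely to make the idea of formalised data-validation checks concrete; it is not the tool described in the paper:

    # Plain-Python stand-in for formalised validation rules (not OWL axioms).
    # The two rules and the record fields are invented for illustration.
    import datetime

    def diagnosis_not_before_birth(record):
        return record["incidence_date"] >= record["birth_date"]

    def sex_code_is_valid(record):
        return record["sex"] in {1, 2, 9}   # hypothetical coding: 1, 2, or 9 = unknown

    RULES = [diagnosis_not_before_birth, sex_code_is_valid]

    record = {
        "birth_date": datetime.date(1960, 3, 14),
        "incidence_date": datetime.date(2018, 7, 2),
        "sex": 2,
    }
    failures = [rule.__name__ for rule in RULES if not rule(record)]
    print("valid" if not failures else f"failed checks: {failures}")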
30

Eibeck, Andreas, Arkadiusz Chadzynski, Mei Qi Lim, Kevin Aditya, Laura Ong, Aravind Devanand, Gourab Karmakar et al. "A Parallel World Framework for scenario analysis in knowledge graphs". Data-Centric Engineering 1 (2020). http://dx.doi.org/10.1017/dce.2020.6.

Annotation:
Abstract: This paper presents the Parallel World Framework as a solution for simulations of complex systems within a time-varying knowledge graph, and its application to the electric grid of Jurong Island in Singapore. The underlying modeling system is based on the Semantic Web Stack. Its linked data layer is described by means of ontologies, which span multiple domains. The framework is designed to allow what-if scenarios to be simulated generically, even for complex, inter-linked, cross-domain applications, as well as to conduct multi-scale optimizations of complex superstructures within the system. Parallel world containers, introduced by the framework, ensure data separation and versioning of structures crossing various domain boundaries. Separation of operations belonging to a particular version of the world is taken care of by a scenario agent. It encapsulates the functionality of operations on data and acts as a parallel world proxy to all of the other agents operating on the knowledge graph. Electric network optimization under a carbon tax is demonstrated as a use case. The framework makes it possible to model and evaluate electrical networks corresponding to given carbon tax values by retrofitting different types of power generators and optimizing the grid accordingly. The use case shows the possibility of using this solution as a tool for CO2 reduction modeling and planning at scale, thanks to its distributed architecture.
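The "parallel world container" idea, scenario-specific data layered over a shared base graph so that what-if changes never touch the original, can be pictured as a copy-on-write overlay. A deliberately simplified Python sketch, with plain dictionaries standing in for the knowledge graph and invented keys and values:

    # Simplified copy-on-write "parallel world" over a base knowledge store.
    # Plain dicts stand in for the knowledge graph; keys and values are invented.
    from collections import ChainMap

    base_world = {
        "generator:G1/fuel": "natural_gas",
        "generator:G1/capacity_mw": 40,
        "policy/carbon_tax_per_tonne": 0,
    }

    # A scenario only records its own overrides; reads fall through to the base.
    scenario_high_tax = ChainMap({"policy/carbon_tax_per_tonne": 50,
                                  "generator:G1/fuel": "biomass"}, base_world)

    print(scenario_high_tax["generator:G1/capacity_mw"])      # 40, inherited from base
    print(scenario_high_tax["policy/carbon_tax_per_tonne"])   # 50, scenario override
    print(base_world["policy/carbon_tax_per_tonne"])          # 0, base world untouched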