Selection of scientific literature on the topic "Versioning tools"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Versioning tools".

Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Versioning tools"

1

Brahmia, Zouhaier, Fabio Grandi, Barbara Oliboni, and Rafik Bouaziz. "Schema Change Operations for Full Support of Schema Versioning in the τXSchema Framework". International Journal of Information Technology and Web Engineering 9, no. 2 (April 2014): 20–46. http://dx.doi.org/10.4018/ijitwe.2014040102.

Abstract:
τXSchema (Currim et al., 2004) is a framework (a language and a suite of tools) for the creation and validation of time-varying XML documents. A τXSchema schema is composed of a conventional XML Schema annotated with physical and logical annotations. All components of a τXSchema schema can evolve over time to reflect changes in the real world. Since many applications need to keep track of both data and schema evolution, schema versioning has long been advocated as the best solution for this. In this paper, we complete the τXSchema framework, which was designed from the start to support schema versioning, with the definition of the operations necessary to exploit this feature and make schema versioning functionality available to end users. Moreover, we propose a new technique for schema versioning in τXSchema that allows complete and safe management of schema changes. It supports versioning of both the conventional schema and the annotations, in an integrated manner. For each component of a τXSchema schema, our approach provides a complete and sound set of change primitives and a set of high-level change operations for maintaining that component, and defines their operational semantics.
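The abstract above describes change primitives that create new schema versions instead of destructively editing the current one. A minimal Python sketch of that idea (the class and primitive names are invented for illustration and are not the τXSchema API):

```python
from copy import deepcopy

# Illustrative sketch: each change primitive yields a new schema version
# rather than mutating the current one, so every prior version stays
# available for validating documents written against it.

class SchemaHistory:
    def __init__(self, initial):
        self.versions = [deepcopy(initial)]  # version 0 = initial schema

    def current(self):
        return self.versions[-1]

    def apply(self, primitive, *args):
        new = deepcopy(self.current())   # never touch old versions
        primitive(new, *args)
        self.versions.append(new)
        return len(self.versions) - 1    # number of the new version

def add_element(schema, name, typ):
    schema[name] = typ

def drop_element(schema, name):
    del schema[name]

history = SchemaHistory({"title": "xs:string"})
v1 = history.apply(add_element, "author", "xs:string")
v2 = history.apply(drop_element, "title")

print(history.versions[0])  # the original schema is still intact
print(history.versions[v2])
```

Because each primitive copies the current version, earlier versions remain queryable, which is the property schema versioning provides over schema evolution.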
2

Eriksson, Helen, and Lars Harrie. "Versioning of 3D City Models for Municipality Applications: Needs, Obstacles and Recommendations". ISPRS International Journal of Geo-Information 10, no. 2 (January 28, 2021): 55. http://dx.doi.org/10.3390/ijgi10020055.

Abstract:
The use of 3D city models is changing from visualization to complex use cases where they act as 3D base maps. This requires links to registers and continuous updating of the city models. Still, most models never change or are recreated instead of updated. This study identifies obstacles to version management of 3D city models and proposes recommendations to overcome them, with a main focus on the municipality perspective, foremost in the planning and building processes. As part of this study, we investigate whether national building registers can control the version management of 3D city models. A case study based on investigations of standards, interviews and a review of tools is presented. The study uses an architectural model divided into four layers: data collection, building theme, city model and application. All layers require changes when implementing a new versioning method: the data collection layer requires restructuring of technical solutions and work processes, storage of the national building register requires restructuring, versioning capabilities must be propagated to the city model layer, and tools at the application layer must handle temporal information better. Strong incentives for including versioning in 3D city models are essential, as substantial investment is required to implement versioning in all the layers. Only capabilities required by applications should be implemented, as the complexity grows with the number of versioning functionalities. One outcome of the study is a recommendation to link 3D city models more closely to building registers. This enables more complex use in, e.g., building permits and 3D cadastres, and authorities can fetch required (versioning) information directly from the city model layer.
3

S. Ellouze, A., A. Jmal, and R. Bouaziz. "Service Oriented Tools for Medical Records Management and Versioning". American Journal of Bioinformatics Research 2, no. 4 (August 9, 2012): 33–39. http://dx.doi.org/10.5923/j.bioinformatics.20120204.01.

4

Devisetty, Upendra Kumar, Kathleen Kennedy, Paul Sarando, Nirav Merchant, and Eric Lyons. "Bringing your tools to CyVerse Discovery Environment using Docker". F1000Research 5 (June 21, 2016): 1442. http://dx.doi.org/10.12688/f1000research.8935.1.

Abstract:
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE, but also helps them share their apps with collaborators and release them for public use.
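The reproducibility argument above rests on pinning every layer of the software stack to an exact version. A hedged sketch of that principle in Python (the image name, tag, and package versions are illustrative, not taken from the paper):

```python
# Illustrative sketch: reproducibility comes from pinning the base image
# and every dependency to an exact version, so the same Dockerfile
# rebuilds the same environment on any compute platform.

def make_dockerfile(base, base_tag, packages):
    """Render a Dockerfile that pins the base image and each package."""
    lines = [f"FROM {base}:{base_tag}"]  # pinned base, never ':latest'
    for name, version in sorted(packages.items()):
        lines.append(f"RUN pip install {name}=={version}")  # pinned deps
    return "\n".join(lines)

dockerfile = make_dockerfile(
    base="python", base_tag="3.11.9-slim",
    packages={"numpy": "1.26.4", "biopython": "1.83"},
)
print(dockerfile)
```

Rebuilding from such a file yields an identical software stack on XSEDE, AWS, or a local machine, which is the property the authors exploit.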
5

Devisetty, Upendra Kumar, Kathleen Kennedy, Paul Sarando, Nirav Merchant, and Eric Lyons. "Bringing your tools to CyVerse Discovery Environment using Docker". F1000Research 5 (November 22, 2016): 1442. http://dx.doi.org/10.12688/f1000research.8935.2.

Abstract:
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse DE, which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE, but also helps them share their apps with collaborators and release them for public use.
6

Devisetty, Upendra Kumar, Kathleen Kennedy, Paul Sarando, Nirav Merchant, and Eric Lyons. "Bringing your tools to CyVerse Discovery Environment using Docker". F1000Research 5 (December 5, 2016): 1442. http://dx.doi.org/10.12688/f1000research.8935.3.

Abstract:
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse DE, which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE, but also helps them share their apps with collaborators and release them for public use.
7

Rashid, Junaid, Waqar Mehmood, and Muhammad Wasif Nisar. "A Survey of Model Comparison Strategies and Techniques in Model Driven Engineering". International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 165. http://dx.doi.org/10.11591/ijset.v1i3.4579.

Abstract:
This survey paper presents the recent state of model comparison as it applies to Model-Driven Engineering. In Model-Driven Engineering, computing the difference between models is an important and challenging task. Model differencing involves a number of steps, starting with identifying and matching the elements of the models. In this paper we discuss how model matching is accomplished, along with the strategies, techniques, and model types involved, and we also discuss future directions. We find that many of the latest model comparison strategies are geared toward enabling metamodel-based and similarity-based matching, and that model versioning is the dominant application of model comparison. Recently, work on comparison for versioning has begun to slow, giving way to other applications. Finally, tools vary widely in the amount of user effort required to perform model comparisons, as some require more effort in exchange for greater generality and expressive power.
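The survey distinguishes matching by persistent identifiers from similarity-based matching. A small illustrative sketch of the two strategies in sequence (the element IDs and the 0.7 threshold are invented assumptions):

```python
from difflib import SequenceMatcher

# Illustrative sketch: match model elements by persistent ID first
# (static identity), then fall back to name similarity for the rest.

def match_elements(left, right, threshold=0.7):
    """Return pairs (left_id, right_id) of matched model elements."""
    matches = []
    unmatched = dict(right)
    for lid in left:
        if lid in unmatched:                      # strategy 1: identity
            matches.append((lid, lid))
            del unmatched[lid]
    for lid, lname in left.items():
        if any(lid == m[0] for m in matches):
            continue                              # already matched by ID
        best = max(
            unmatched.items(),
            key=lambda kv: SequenceMatcher(None, lname, kv[1]).ratio(),
            default=None,
        )
        if best and SequenceMatcher(None, lname, best[1]).ratio() >= threshold:
            matches.append((lid, best[0]))        # strategy 2: similarity
            del unmatched[best[0]]
    return matches

v1 = {"e1": "Customer", "e2": "Order"}
v2 = {"e1": "Customer", "e9": "Orders"}           # 'Order' got a new ID
print(match_elements(v1, v2))
```

Matching is only the first step of differencing; once elements are paired, the remaining unmatched elements on either side become additions and deletions in the computed difference.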
8

RILLING, JUERGEN, RENÉ WITTE, PHILIPP SCHUEGERL, and PHILIPPE CHARLAND. "BEYOND INFORMATION SILOS — AN OMNIPRESENT APPROACH TO SOFTWARE EVOLUTION". International Journal of Semantic Computing 02, no. 04 (December 2008): 431–68. http://dx.doi.org/10.1142/s1793351x08000567.

Abstract:
Nowadays, software development and maintenance are highly distributed processes that involve a multitude of supporting tools and resources. Knowledge relevant for a particular software maintenance task is typically dispersed over a wide range of artifacts in different representational formats and at different abstraction levels, resulting in isolated 'information silos'. An increasing number of task-specific software tools aim to support developers, but this often results in additional challenges, as not every project member can be familiar with every tool and its applicability for a given problem. Furthermore, historical knowledge about successfully performed modifications is lost, since only the result is recorded in versioning systems, but not how a developer arrived at the solution. In this research, we introduce conceptual models for the software domain that go beyond existing program and tool models, by including maintenance processes and their constituents. The models are supported by a pro-active, ambient, knowledge-based environment that integrates users, tasks, tools, and resources, as well as processes and history-specific information. Given this ambient environment, we demonstrate how maintainers can be supported with contextual guidance during typical maintenance tasks through the use of ontology queries and reasoning services.
9

ONOMA, A. K., H. SUGANUMA, M. POONAWALA, S. SUBRAMANIAN, W. T. TSAI, and T. SYOMURA. "AN OBJECT-BASED ENVIRONMENT (OPUSDEI) FOR SOFTWARE DEVELOPMENT AND MAINTENANCE". International Journal on Artificial Intelligence Tools 05, no. 04 (December 1996): 447–71. http://dx.doi.org/10.1142/s0218213096000262.

Abstract:
This paper discusses an object-based software development and maintenance environment, Opusdei, built and used for several years at Hitachi Software Engineering (HSK; since 1994, the University of Minnesota has been involved in the Opusdei project). Industrial software is usually large, has many versions, undergoes frequent changes, and is developed concurrently by multiple programmers. Opusdei was designed to handle the various problems inherent in such industrial environments. In Opusdei, all information needed for development is stored in a uniform representation in a central repository, and the various documentation and views of the software artifacts can be generated automatically using the tool repository. Opusdei's innovative capabilities are: 1) uniform representation of software artifacts; 2) maintenance of inter-relations and traceability among software artifacts; 3) tool coordination and tool integration using tool composition scenarios; 4) automatic documentation and version control. Tool coordination and composition have been discussed in the literature as a possible way to make software development environments more intelligent. Opusdei provides a uniform representation of software artifacts and tools, which is an essential first step in addressing the issues of tool coordination and composition. Opusdei has been operational for several years and has been used in many large software development projects. The productivity gains reported for some of these projects using Opusdei ranged from 50% to 90%.
10

Jennings-Antipov, Laura D., and Timothy S. Gardner. "Digital publishing isn't enough: the case for 'blueprints' in scientific communication". Emerging Topics in Life Sciences 2, no. 6 (December 21, 2018): 755–58. http://dx.doi.org/10.1042/etls20180165.

Abstract:
Since the time of Newton and Galileo, the tools for capturing and communicating science have remained conceptually unchanged — in essence, they consist of observations on paper (or electronic variants), followed by a 'letter' to the community to report your findings. These age-old tools are inadequate for the complexity of today's scientific challenges. If modern software engineering worked like science, programmers would not share open source code; they would take notes on their work and then publish long-form articles about their software. Months or years later, their colleagues would attempt to reproduce the software based on the article. It sounds a bit silly, and yet even this level of prose-based methodological discourse has deteriorated in science communication. Materials and Methods sections of papers are often a vaguely written afterthought, leaving researchers baffled when they try to repeat a published finding. It's time for a fundamental shift in scientific communication and sharing, a shift akin to the advent of computer-aided design and source code versioning. Science needs reusable 'blueprints' for experiments, replete with the experiment designs, material flows, reaction parameters, data, and analytical procedures. Such an approach could establish the foundations for truly open source science, where these scientific blueprints form the digital 'source code' for a supply chain of high-quality innovations and discoveries.

Dissertations on the topic "Versioning tools"

1

Dronamraj, Rakesh. "Tools and Versioning for GUI text in SDP3". Thesis, Linköpings universitet, MDALAB - Human Computer Interfaces, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-60080.

Abstract:
Scania, a heavy-vehicle and engine manufacturer, produces Scania Diagnos Programmer 3 (SDP3) to facilitate the repair process in its workshops. SDP3 is localizable software, which requires separating user interface strings (UI strings) during the development process and later combining them with the localized strings for local user access. The objective of this report is to provide a well-grounded solution for graphical user interface (GUI) development, especially with respect to synchronization of UI strings in SDP3. The migration of SDP3 from the .NET 3.0 framework to the .NET 3.5 framework satisfies modern standards and needs. With regard to the migration of SDP3's localization process, I have attempted to summarize the major .NET 3.5 framework methods that can be used for localization of GUI text in SDP3. Experiments show that the tools used to facilitate the localization process also lack important features. Although the pre-build and post-build processes provide promising solutions for localization, using them along with a proprietary localization tool should result in more features and a better, faster production cycle. However, a proprietary localization tool has to be used with one of the localization methods.
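The central difficulty the thesis describes, keeping UI strings out of the code and merging localized strings in later, can be sketched as follows (the keys and catalogs are invented for illustration; this is not SDP3's mechanism):

```python
# Illustrative sketch: UI strings live behind stable keys; at run time a
# per-locale catalog is consulted, falling back to the source language
# for keys that have not been translated yet.

SOURCE = {"btn.connect": "Connect", "msg.done": "Diagnosis complete"}
LOCALES = {
    "sv": {"btn.connect": "Anslut"},   # partially translated catalog
}

def localize(key, locale):
    catalog = LOCALES.get(locale, {})
    return catalog.get(key, SOURCE[key])   # fall back to the source string

print(localize("btn.connect", "sv"))
print(localize("msg.done", "sv"))          # untranslated key falls back
```

The fallback rule is what makes partially translated catalogs safe to ship, and keeping the keys stable across versions is the synchronization problem the thesis examines.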
2

Poudel, Pavan. "Tools and Techniques for Efficient Transactions". Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1630591700589561.

3

Poudel, Pavan. "Tools and Techniques for Efficient Transactions". Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1630591700589561.

4

Bárteček, Bronislav. "Výběr a implementace systému pro řízení softwarového vývoje". Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2021. http://www.nusl.cz/ntk/nusl-444594.

Abstract:
This diploma thesis deals with the analysis of the current state of a company. Subsequently, based on the obtained data, it designs and implements a software development management system. The thesis describes the theoretical basis of the work and the requirements of the company. When choosing a system, it takes the individual needs of the selected company into account. Part of the thesis is a description of the implementation and deployment of the system in the company, together with a time analysis.
5

Steinert, Bastian. "Built-in recovery support for explorative programming: preserving immediate access to static and dynamic information of intermediate development states". PhD thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7130/.

Abstract:
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid tedious work, the literature recommends keeping away from unexpected recovery demands by following a structured and disciplined approach, which consists of applying various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated-measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
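The core idea of the abstract, recovery without explicit commits because every change is implicitly versioned, can be sketched as follows (the class and method names are invented for illustration and are not CoExist's API):

```python
# Illustrative sketch: every change produces an implicit snapshot, so
# any intermediate development state can be recovered later, even when
# the programmer never made an explicit commit.

class History:
    def __init__(self, source=""):
        self.snapshots = [source]          # one implicit version per change

    def change(self, new_source):
        self.snapshots.append(new_source)  # recorded automatically

    def recover(self, steps_back):
        """Return to an earlier state; the detour itself stays recorded."""
        state = self.snapshots[-1 - steps_back]
        self.snapshots.append(state)       # recovery is just another change
        return state

h = History("def f(): return 1")
h.change("def f(): return 2")
h.change("def f(): retrun 2  # typo broke it")
print(h.recover(1))                        # back to the working version
```

Note that recovery appends rather than truncates, so the abandoned experiment remains inspectable, which matches the thesis's goal of preserving access to intermediate development states.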
6

Colliander Celik, Julius Recep. "Plutt: A tool for creating type-safe and version-safe microfrontends". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280090.

Abstract:
Microfrontend applications are composed of multiple smaller frontend applications, which are integrated at run-time. As with microservices, microfrontends can be updated in production at any time. There are no technological restrictions on releasing API-breaking updates. It is therefore difficult to trust microfrontend applications to perform reliably at run-time and to introduce API-breaking updates without the risk of breaking consumers. This thesis presents Plutt, a tool that provides automatic guarantees for safely consuming microfrontends by ensuring that updates introduced at run-time are compatible. By using Plutt, consumers can be confident that a provided microfrontend will perform the same in production as in development. Likewise, microfrontend providers can release updates without being concerned about how they will affect consumers. Moreover, a comprehensive survey of microfrontends is presented, in which five industry experts are interviewed. Aspects that are not found in the existing literature are discovered, which contributes to a broader knowledge base for future microfrontend research.
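The guarantee described above amounts to a run-time compatibility check between the interface a consumer was built against and the interface the provider currently exposes. An illustrative sketch (the prop tables and the compatibility rule are simplified assumptions, not Plutt's actual model):

```python
# Illustrative sketch: before a consumer mounts a microfrontend, compare
# the interface it was built against with what the provider now offers,
# so an API-breaking update is caught at integration time.

def is_compatible(consumer_expects, provider_offers):
    """Providers may add new props; removing or retyping one breaks."""
    for prop, typ in consumer_expects.items():
        if provider_offers.get(prop) != typ:
            return False
    return True

built_against = {"userId": "string", "onClose": "function"}
deployed_now = {"userId": "string", "onClose": "function", "theme": "string"}
broken_update = {"userId": "number", "onClose": "function"}

print(is_compatible(built_against, deployed_now))   # additions are safe
print(is_compatible(built_against, broken_update))  # retyped prop breaks
```

The asymmetry is the interesting design point: additions on the provider side are invisible to old consumers, while removals and type changes are exactly the API-breaking updates the thesis aims to catch.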
7

Schubert, Chris, Georg Seyerl, and Katharina Sack. "Dynamic Data Citation Service-Subset Tool for Operational Data Management". MDPI, 2019. http://dx.doi.org/10.3390/data4030115.

Abstract:
In Earth observation and the climatological sciences, data and data services grow on a daily basis over a large spatial extent, due to the high coverage rate of satellite sensors and model calculations, but also to continuous meteorological in situ observations. In order to reuse such data, and especially data fragments as well as their data services, in a collaborative and reproducible manner by citing the origin source, data analysts, e.g., researchers or impact modelers, need a way to identify the exact version, precise time information, parameters, and names of the dataset used. A manual process would make the citation of data fragments as a subset of an entire dataset rather complex and imprecise. Data in climate research are in most cases multidimensional, structured grid data that can change partially over time. The citation of such evolving content requires the approach of "dynamic data citation". The applied approach is based on associating queries with persistent identifiers. These queries contain the subsetting parameters, e.g., the spatial coordinates of the desired study area or the time frame with a start and end date, which are automatically included in the metadata of the newly generated subset and thus represent the information about the data history, i.e., the data provenance, which has to be established in data repository ecosystems. The Research Data Alliance Data Citation Working Group (RDA Data Citation WG) summarized the scientific status quo as well as the state of the art of existing citation and data management concepts, and developed a scalable dynamic data citation methodology for evolving data. The Data Centre at the Climate Change Centre Austria (CCCA) has implemented these recommendations and has offered an operational dynamic data citation service for climate scenario data since 2017.
Aware that this topic carries many dependencies on bibliographic citation research, which is still under discussion, the CCCA Dynamic Data Citation service focuses on climate-domain-specific issues such as the characteristics of the data, formats, software environment, and usage behavior. Beyond disseminating the experience gained, the current effort targets the scalability of the implementation, e.g., toward the potential of an Open Data Cube solution.
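The query-plus-persistent-identifier mechanism described in the abstract can be sketched as follows (the data layout, station names, and PID scheme are invented for illustration):

```python
import hashlib

# Illustrative sketch of dynamic data citation: a citation is a
# persistent identifier bound to a versioned subset query; resolving the
# PID re-executes the query against the data as of the cited version.

DATASET = [  # versioned records of an evolving dataset
    {"version": 1, "station": "Wien", "value": 21.3},
    {"version": 1, "station": "Graz", "value": 19.8},
    {"version": 2, "station": "Wien", "value": 21.4},  # later correction
]

QUERY_STORE = {}

def cite_subset(station, as_of_version):
    query = {"station": station, "as_of": as_of_version}
    pid = "pid-" + hashlib.sha1(
        repr(sorted(query.items())).encode()).hexdigest()[:8]
    QUERY_STORE[pid] = query               # the PID resolves to the query
    return pid

def resolve(pid):
    q = QUERY_STORE[pid]
    rows = [r for r in DATASET
            if r["station"] == q["station"] and r["version"] <= q["as_of"]]
    return rows[-1:]                       # latest row visible at that version

pid = cite_subset("Wien", as_of_version=1)
print(resolve(pid))                        # same subset, any time later
```

Because the stored query is pinned to a version, later corrections or appends to the dataset do not change what the citation resolves to, which is the reproducibility property the RDA recommendation targets.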

Book chapters on the topic "Versioning tools"

1

Keidar, Idit, and Dmitri Perelman. "Multi-versioning in Transactional Memory". In Transactional Memory. Foundations, Algorithms, Tools, and Applications, 150–65. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14720-8_7.

2

Bayoudhi, Leila, Najla Sassi, and Wassim Jaziri. "A Survey on Versioning Approaches and Tools". In Advances in Intelligent Systems and Computing, 1155–64. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71187-0_107.

3

Bendix, Lars, Per Nygaard Larsen, Anders Ingemann Nielsen, and Jesper Lai Søndergaard Petersen. "CoEd — A tool for versioning of hierarchical documents". In System Configuration Management, 174–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0053888.

4

Grandi, Fabio. "SVMgr: A Tool for the Management of Schema Versioning". In Lecture Notes in Computer Science, 860–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30464-7_73.

5

Nehéz, K., P. Mileff, and O. Hornyák. "CAD tools for knowledge based part design and assembly versioning". In Solutions for Sustainable Development, 49–55. CRC Press, 2019. http://dx.doi.org/10.1201/9780367824037-7.

6

Eder, Johann, and Karl Wiggisser. "Data Warehouse Maintenance, Evolution and Versioning". In Enterprise Information Systems, 566–83. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61692-852-0.ch301.

Full text of the source
Annotation:
Data Warehouses typically are building blocks of decision support systems in companies and public administration. The data contained in a data warehouse is analyzed by means of OnLine Analytical Processing tools, which provide sophisticated features for aggregating and comparing data. Decision support applications depend on the reliability and accuracy of the contained data. Typically, a data warehouse does not only comprise the current snapshot data but also historical data to enable, for instance, analysis over several years. And, as we live in a changing world, one criterion for the reliability and accuracy of the results of such long period queries is their comparability. Whereas data warehouse systems are well prepared for changes in the transactional data, they are, surprisingly, not able to deal with changes in the master data. Nonetheless, such changes do frequently occur. The crucial point for supporting changes is, first of all, being aware of their existence. Second, once you know that a change took place, it is important to know which change (i.e., knowing about differences between versions and relations between the elements of different versions). For data warehouses this means that changes are identified and represented, validity of data and structures are recorded and this knowledge is used for computing correct results for OLAP queries. This chapter is intended to motivate the need for powerful maintenance mechanisms for data warehouse cubes. It presents some basic terms and definitions for the common understanding and introduces the different aspects of data warehouse maintenance. Furthermore, several approaches addressing the problem are presented and classified by their capabilities.
APA, Harvard, Vancouver, ISO and other citation styles
7

Garijo, Daniel, and María Poveda-Villalón. "Best Practices for Implementing FAIR Vocabularies and Ontologies on the Web". In Applications and Practices in Ontology Design, Extraction, and Reasoning. IOS Press, 2020. http://dx.doi.org/10.3233/ssw200034.

Full text of the source
Annotation:
With the adoption of Semantic Web technologies, an increasing number of vocabularies and ontologies have been developed in different domains, ranging from Biology to Agronomy or Geosciences. However, many of these ontologies are still difficult to find, access and understand by researchers due to a lack of documentation, URI resolving issues, versioning problems, etc. In this chapter we describe guidelines and best practices for creating accessible, understandable and reusable ontologies on the Web, using standard practices and pointing to existing tools and frameworks developed by the Semantic Web community. We illustrate our guidelines with concrete examples, in order to help researchers implement these practices in their future vocabularies.
APA, Harvard, Vancouver, ISO and other citation styles
8

Jaziri, Wassim, Najla Sassi and Dhouha Damak. "Using Temporal Versioning and Integrity Constraints for Updating Geographic Databases and Maintaining Their Consistency". In Geospatial Research, 1137–67. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9845-1.ch053.

Full text of the source
Annotation:
The use of geographic data has become a widespread concern, mainly within applications related to spatial planning and spatial decision-making. Changing environments therefore require databases that can adapt to changes occurring over time, so supporting the evolution of geographic information is essential. The evolution is expressed in the geographic database by a series of update operations that should maintain its consistency. This paper proposes an approach for updating geographic databases, based on update operators and integrity constraint checking algorithms. Temporal versioning is used to keep track of changes: each version represents the state of the geographic database at a given time. The integrity constraint checking algorithms maintain the consistency of the database as it is updated. To implement our approach and assist users in the evolution process, the GeoVersioning tool is developed and tested on a sample geographic database.
APA, Harvard, Vancouver, ISO and other citation styles

Conference papers on the topic "Versioning tools"

1

Ellouze, Afef Samet, Rafik Bouaziz and Ahmed Jmal. "Service Oriented Tools for Medical Records Management and Versioning". In 2010 Second International Conference on Advances in Databases, Knowledge, and Data Applications. IEEE, 2010. http://dx.doi.org/10.1109/dbkda.2010.20.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
2

Elsen, Rickard, Inggriani Liem and Saiful Akbar. "Software versioning quality parameters: Automated assessment tools based on the parameters". In 2016 International Conference on Data and Software Engineering (ICoDSE). IEEE, 2016. http://dx.doi.org/10.1109/icodse.2016.7936139.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
3

"GUIDELINES FOR A DYNAMIC ONTOLOGY - Integrating Tools of Evolution and Versioning in Ontology". In International Conference on Knowledge Management and Information Sharing. SciTePress - Science and Technology Publications, 2011. http://dx.doi.org/10.5220/0003653201730179.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
4

Kim, Guehee, Yoshio Suzuki and Naoya Teshima. "Network Computing Infrastructure to Share Tools and Data in GNEP". In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75304.

Full text of the source
Annotation:
Network computing infrastructure for sharing tools and data was implemented to support international collaboration. In designing the system, we focused on three issues: accessibility, security, and usability. In the implementation, we integrated existing network and web technologies into the infrastructure by introducing the authentication gateway. For the first issue, SSL-VPN (Secure Sockets Layer – Virtual Private Network) technology was adopted to access computing resources beyond firewalls. For the second issue, a PKI (Public Key Infrastructure)-based authentication mechanism was used for access control. Shared-key-based file encryption was also used to protect against information leakage. The introduction of the authentication gateway strengthens the security. To provide high usability, WebDAV (Web-based Distributed Authoring and Versioning) was used to provide users with a function to manipulate distributed files through a windows-like GUI (Graphical User Interface). These functions were integrated into a Grid infrastructure called AEGIS (Atomic Energy Grid InfraStructure). Web applications were developed on the infrastructure for dynamic community creation and information sharing. In this paper, we discuss design issues of the system and report the implementation of a prototype applied to share information for the international project GNEP (Global Nuclear Energy Partnership).
APA, Harvard, Vancouver, ISO and other citation styles
5

Mikami, Hiroaki, Daisuke Sakamoto and Takeo Igarashi. "Micro-Versioning Tool to Support Experimentation in Exploratory Programming". In CHI '17: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3025453.3025597.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
6

Hoppen, Martin, Juergen Rossmann, Michael Schluse, Ralf Waspe and Malte Rast. "Combining 3D Simulation Technology With Object-Oriented Databases: A Database Oriented Approach to Virtual Reality Systems". In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48230.

Full text of the source
Annotation:
Using object-oriented databases as the primary data source in VR applications has a variety of advantages, but requires the development of new techniques concerning data modeling, data handling and data transfer from a Virtual Reality system's point of view. The many advantages are outlined in the first part of this paper. We first introduce versioning and collaboration techniques as our main motivation. These can also be used in the traditional file based approach, but are much more powerful when realized with a database on an object and attribute level. Using an object-oriented approach to data modeling, objects of the real world can be modeled more intuitively by defining appropriate classes with their relevant attributes. Furthermore, databases can function as central communication hubs for consistent multi user interaction. Besides, the use of databases with open interface standards makes it easy to cooperate with other applications such as modeling tools and other data generators. The second part of this paper focuses on our approach to seamlessly integrate such databases in Virtual Reality systems. For this we developed an object-oriented internal graph database and linked it to object-oriented external databases for central storage and collaboration. Object classes defined by XML data schemata allow new data models to be integrated easily into VR applications at run-time. A fully transparent database layer in the simulation system makes it easy to interchange the external database. We present the basic structure of our simulation graph database, as well as the mechanisms which are used to transparently map data and meta-data from the external database to the simulation database. To show the validity and flexibility of our approach, selected applications realized with our simulation system so far are introduced, e.g., applications based on geoinformation databases such as forest inventory systems and city models, applications in the field of distributed control and simulation of assembly lines, and database-driven virtual testbed applications for automatic map generation in planetary landing missions.
APA, Harvard, Vancouver, ISO and other citation styles
7

Mao, Mingrong, and Jiaxiang Zhou. "LvFS: A Lightweight File Versioning Tool for General Binary Files". In 2015 2nd International Conference on Information Science and Control Engineering (ICISCE). IEEE, 2015. http://dx.doi.org/10.1109/icisce.2015.70.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
8

Morgan, Rhys Gareth, Thomas Parenteau, Hemant Priyadarshi, Sachin Vijay Mathakari, Malo Le-Nel, Nicolas Lefebvre, Joydeep Somabathula, Kévin Le Prin and Patrick Jetter. "A Data-Centric Omnichannel Digital Platform for Configuring Subsea Field Developments". In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31150-ms.

Full text of the source
Annotation:
Abstract Subsea field development planning can be a complicated undertaking requiring the coordination and collaboration of multiple engineering and commercial disciplines with competing objectives. Thus, finding the optimal development solution can be challenging. To combat this, a data-centric omnichannel digital platform for configuring subsea field developments has been created. The study workflow orchestrated by the digital platform is detailed along with an overview of the data model, functionality, and deliverables. A case study is presented to demonstrate the value delivered using this digital platform. The digital platform is inherently collaborative as it orchestrates specialist engineering tools and their workflows around the same data for study teams to configure subsea development solutions. The platform is composed of: a web-based graphical user interface that allows discipline and product engineers to collaboratively configure the system, products, planning and costing for an entire subsea field development scenario, leveraging the same base data, i.e., a single source of truth; a proprietary data model covering system, product (e.g., hardware or equipment), activity planning and costing breakdowns; and microservices that directly attach engineering tools and their workflows to the digital platform to automate product design and analysis. A case study is presented to demonstrate the use of the digital platform on a subsea field development prospect and a qualitative comparison with the conventional way of working is made. The case study illustrates the use of a digital hardware configurator (subsea tree system configuration) and the automated planning workflow for an EPCI (Engineering, Procurement, Construction, and Installation) prospect enabled by the digital platform. The results of the case study demonstrate the value and benefits the digital platform delivers. The benefits are underpinned by the automated data transfer, the versioning functionality, software logic, and the common base data used by the microservices. The benefits that have been found when compared with the conventional way of working include: faster validation of alternative development scenarios, meaning that more concepts and sensitivities can be investigated in the same length of time; a reduction in the overall lead time and person hours required to configure and optimize a field development solution; design risk reduction; and efficient and consistent transition of data via virtual handovers. This paper demonstrates a new approach for subsea field development planning using a data-centric omnichannel digital platform called Subsea Studio™ FD, which is shown to deliver benefits over the conventional document-centric way of working. The digital platform brings multiple engineering disciplines together to configure optimal development solutions, accounting for competing objectives. It initiates the digital thread through the project lifecycle and will ultimately culminate in a digital twin during project execution, which can be leveraged throughout the life-of-field to optimize operations.
APA, Harvard, Vancouver, ISO and other citation styles
9

Nesi, Paolo, Pierfrancesco Bellini and Ivan Bruno. "Graph Databases Lifecycle Methodology and Tool to Support Index/Store Versioning". In The 21st International Conference on Distributed Multimedia Systems. KSI Research Inc. and Knowledge Systems Institute Graduate School, 2015. http://dx.doi.org/10.18293/dms2015-016.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
10

Zhou, H., Y. Q. Lu, W. D. Li, S. Lin, J. Y. H. Fuh, Y. S. Wong and Z. M. Qiu. "The Collaboration Abstraction Layer for Distributed CAD Development". In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/cie-48280.

Full text of the source
Annotation:
In order to speed up the development of distributed CAD (DCAD) software applications and offer the end-users a friendly environment for collaborative design, the Collaboration Abstraction Layer (CAL) is proposed. CAL aims to develop a pluggable software module that can be embedded into standalone CAD applications. Through summarizing and abstracting out the common characteristics of distributed CAD software, a set of foundation/helper classes for the important collaborative functionalities are enclosed in CAL, which include a 3D streaming service, a collaborative design management service, a constraint checking/solving service and a file versioning/baseline service. The 3D streaming service incorporates a geometrical simplification algorithm that supports selective refinement on a level-of-detail (LOD) model and a compact data structure represented in an XML format. The collaborative management service effectively schedules and manages a co-design job. The constraint checking/solving service, which is composed of a design task dispatch interface, a collision detection algorithm, and an assembly constraint algorithm, coordinates designing and assembling based on constraints. The CAD file versioning/baseline service manages the history record of the CAD files and the milestones in the collaborative development process. By simulating the real collaborative design process, CAL provides a new collaboration mechanism which is different from most collaboration products in the market. For future development, CAL is built on an open-sourced software toolkit. It is coded to interfaces and kernel libraries so as to provide an immutable API for commonly used collaborative CAD functions. CAL enables rapid development of DCAD software, and minimizes application complexity by packaging the needed technology. Moreover, CAL is intended to be a partner to current CAD software, not a competitor, making it an ideal tool for future distributed CAD system development.
APA, Harvard, Vancouver, ISO and other citation styles