A selection of scholarly literature on the topic "Data management"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Data management".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Data management"

1

Moreno, P., M. Ruiz, and F. J. Gorines. "TBM Process Data Management System". International Journal of Engineering and Technology 7, no. 5 (December 2015): 431–34. http://dx.doi.org/10.7763/ijet.2015.v7.832.

2

Sashi, K., and Antony Selvadoss Thanamani. "Dynamic Replica Management for Data Grid". International Journal of Engineering and Technology 2, no. 4 (2010): 329–33. http://dx.doi.org/10.7763/ijet.2010.v2.142.

3

Lee, Jonghyun Harry, Tyler Hesser, Matthew Farthing, Spicer Bak, and Katherine DeVore. "Scalable Real-Time Data Assimilation with Various Data Types for Accurate Spatiotemporal Nearshore Bathymetry Estimation". Coastal Engineering Proceedings, no. 37 (October 2, 2023): 156. http://dx.doi.org/10.9753/icce.v37.management.156.

Abstract:
Immediate estimation of nearshore bathymetry is crucial for accurate prediction of nearshore wave conditions and coastal flooding events. However, direct bathymetry data collection is expensive and time-consuming, while accurate airborne lidar-based survey is limited by breaking waves and decreased light penetration affected by water turbidity. Several recent efforts have been made to apply interpolation and inverse modeling approaches to indirect remote sensed observations along with sparse direct survey data points. Example indirect observations include video-based observations such as time-series snapshots and time-averaged (Timex) images across the surf zone taken from tower-based platforms and Unmanned Aircraft Systems (UASs), while stationary LiDAR tower and UAS flights with infrared camera capability or imagery-based structure-from-motion (SfM) algorithms have been used to provide beach topographic data. In this work, we present three bathymetry estimation tools for real-time nearshore characterization using different types of information.
4

Meseck, Reed M. "Data management". ACM SIGMOD Record 30, no. 2 (June 2001): 569–70. http://dx.doi.org/10.1145/376284.375745.

5

Thompson, Cheryl Bagley, and Edward A. Panacek. "Data Management". Air Medical Journal 27, no. 4 (July 2008): 156–58. http://dx.doi.org/10.1016/j.amj.2008.05.001.

6

Marchant, David. "Data Management". Museum Management and Curatorship 18, no. 2 (January 1999): 197–201. http://dx.doi.org/10.1080/09647779900801802.

7

Yasmeen, Mrs. "NOSQL Database Engines for Big Data Management". International Journal of Trend in Scientific Research and Development 2, no. 6 (October 31, 2018): 617–22. http://dx.doi.org/10.31142/ijtsrd18608.

8

McDonald, John. "Records Management and Data Management". Records Management Journal 1, no. 1 (January 1989): 4–11. http://dx.doi.org/10.1108/eb027016.

9

Lagoze, Carl, William C. Block, Jeremy Williams, John Abowd, and Lars Vilhuber. "Data Management of Confidential Data". International Journal of Digital Curation 8, no. 1 (June 14, 2013): 265–78. http://dx.doi.org/10.2218/ijdc.v8i1.259.

Abstract:
Social science researchers increasingly make use of data that is confidential because it contains linkages to the identities of people, corporations, etc. The value of this data lies in the ability to join the identifiable entities with external data, such as genome data, geospatial information, and the like. However, the confidentiality of this data is a barrier to its utility and curation, making it difficult to fulfil US federal data management mandates and interfering with basic scholarly practices, such as validation and reuse of existing results. We describe the complexity of the relationships among data that span a public and private divide. We then describe our work on the CED2AR prototype, a first step in providing researchers with a tool that spans this divide and makes it possible for them to search, access and cite such data.
10

Aiken, Peter, Mark Gillenson, Xihui Zhang, and David Rafner. "Data Management and Data Administration". Journal of Database Management 22, no. 3 (July 2011): 24–45. http://dx.doi.org/10.4018/jdm.2011070102.

Abstract:
Data management (DM) has existed in conjunction with software development and the management of the full set of information technology (IT)-related components. However, it has been more than two decades since research into DM as it is practiced has been published. In this paper, the authors compare aspects of DM across a quarter-century timeline, obtaining data using comparable sets of subject matter experts. Using this information to observe the profession’s evolution, the authors have updated the understanding of DM as it is practiced, giving additional insight into DM, including its current responsibilities, reporting structures, and perceptions of success, among other factors. The analysis indicates that successfully investing in DM presents current, real challenges to IT and organizations. Although DM is evolving away from purely operational responsibilities toward higher-level responsibilities, perceptions of success have fallen. This paper details the quarter-century comparison of DM practices, analyzes them, and draws conclusions.

Dissertations on the topic "Data management"

1

Morshedzadeh, Iman. "Data Classification in Product Data Management". Thesis, Högskolan i Skövde, Institutionen för teknik och samhälle, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-14651.

Abstract:
This report is about a product data classification methodology that is usable for the Volvo Cars Engine (VCE) factory's production data and can be implemented in the Teamcenter software. A great deal of data is generated during the life cycle of each product, and companies try to manage these data with product data management software. Data classification is a part of data management that enables the most effective and efficient use of data. Surveys conducted in this project identified the items that affect data classification: the data, attributes, the classification method, the Volvo Cars Engine factory, and Teamcenter as the product data management software. In this report, each of these items is explained separately. Drawing on the knowledge obtained about these items, a suitable hierarchical classification method for the Volvo Cars Engine factory is described. After the classification method is defined, the last part of the report implements it in the software to show that the method is executable.
2

Yang, Ying. "Interactive Data Management and Data Analysis". Thesis, State University of New York at Buffalo, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10288109.

Abstract:

Everyone today has a big data problem. Data is everywhere and in different formats; it may be referred to as data lakes, data streams, or data swamps. To extract knowledge or insights from the data, or to support decision-making, we need to go through a process of collecting, cleaning, managing, and analyzing the data. In this process, data cleaning and data analysis are two of the most important and time-consuming components.

One common challenge in these two components is a lack of interaction. Data cleaning and data analysis are typically done as a batch process, operating on the whole dataset without any feedback. This leads to long, frustrating delays during which users have no idea whether the process is effective. Lacking interaction, the systems for these two components rely on human expert effort to decide which algorithms or parameters to use.

We should teach computers to talk to humans, not the other way around. This dissertation focuses on building systems --- Mimir and CIA --- that help users conduct data cleaning and analysis through interaction. Mimir is a system that allows users to clean big data in a cost- and time-efficient way through interaction, a process I call on-demand ETL. Convergent inference algorithms (CIA) are a family of inference algorithms for probabilistic graphical models (PGMs) that enjoy the benefits of both exact and approximate inference algorithms through interaction.

Mimir provides a general language for users to express different data cleaning needs. It acts as a shim layer that wraps around the database, making it possible for the bulk of the ETL process to remain within a classical deterministic system. Mimir also helps users measure the quality of an analysis result and provides rankings of cleaning tasks to improve result quality in a cost-efficient manner. CIA focuses on providing user interaction throughout the process of inference in PGMs. The goal of CIA is to free users from an upfront commitment to either approximate or exact inference, and to give users more control over time/accuracy trade-offs to direct decision-making and the allocation of computation instances. This dissertation describes the Mimir and CIA frameworks to demonstrate that it is feasible to build efficient interactive data management and data analysis systems.

3

Mathew, Avin D. "Asset management data warehouse data modelling". Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/19310/1/Avin_Mathew_Thesis.pdf.

Abstract:
Data are the lifeblood of an organisation, being employed by virtually all business functions within a firm. Data management, therefore, is a critical process in prolonging the life of a company and determining the success of each of an organisation's business functions. The last decade and a half has seen data warehousing rising in priority within corporate data management as it provides an effective supporting platform for decision support tools. A cross-sectional survey conducted by this research showed that data warehousing is starting to be used within organisations for their engineering asset management; however, industry uptake is slow and has much room for development and improvement. This conclusion is also evidenced by the lack of systematic scholarly research within asset management data warehousing as compared to data warehousing for other business areas. This research is motivated by the lack of dedicated research into asset management data warehousing and attempts to provide original contributions to the area, focussing on data modelling. Integration is a fundamental characteristic of a data warehouse and facilitates the analysis of data from multiple sources. While several integration models exist for asset management, these only cover select areas of asset management. This research presents a novel conceptual data warehousing data model that integrates the numerous asset management data areas. The comprehensive ethnographic modelling methodology involved a diverse set of inputs (including data model patterns, standards, information system data models, and business process models) that described asset management data. Used as an integrated data source, the conceptual data model was verified by more than 20 experts in asset management and validated against four case studies. A large section of asset management data are stored in a relational format due to the maturity and pervasiveness of relational database management systems. Data warehousing offers the alternative approach of structuring data in a dimensional format, which suggests increased data retrieval speeds in addition to reducing analysis complexity for end users. To investigate the benefits of moving asset management data from a relational to a multidimensional format, this research presents an innovative relational vs. multidimensional model evaluation procedure. To undertake an equitable comparison, the compared multidimensional models are derived from an asset management relational model; as such, this research presents an original multidimensional modelling derivation methodology for asset management relational models. Multidimensional models were derived from the relational models in the asset management data exchange standard, MIMOSA OSA-EAI. The multidimensional and relational models were compared through a series of queries. It was discovered that multidimensional schemas reduced the data size and subsequently data insertion time, decreased the complexity of query conceptualisation, and improved query execution performance across a range of query types. To facilitate the quicker uptake of these data warehouse multidimensional models within organisations, an alternate modelling methodology was investigated. This research presents an innovative approach of using a case-based reasoning methodology for data warehouse schema design. Using unique case representation and indexing techniques, the system also uses a business vocabulary repository to augment case searching and adaptation. The system was validated through a case study in which multidimensional schema design speed and accuracy were measured. It was found that the case-based reasoning system provided a marginal benefit, with greater benefits gained when confronted with more difficult scenarios.
4

Mathew, Avin D. "Asset management data warehouse data modelling". Thesis, Queensland University of Technology, 2008. http://eprints.qut.edu.au/19310/.

Abstract:
Data are the lifeblood of an organisation, being employed by virtually all business functions within a firm. Data management, therefore, is a critical process in prolonging the life of a company and determining the success of each of an organisation's business functions. The last decade and a half has seen data warehousing rising in priority within corporate data management as it provides an effective supporting platform for decision support tools. A cross-sectional survey conducted by this research showed that data warehousing is starting to be used within organisations for their engineering asset management; however, industry uptake is slow and has much room for development and improvement. This conclusion is also evidenced by the lack of systematic scholarly research within asset management data warehousing as compared to data warehousing for other business areas. This research is motivated by the lack of dedicated research into asset management data warehousing and attempts to provide original contributions to the area, focussing on data modelling. Integration is a fundamental characteristic of a data warehouse and facilitates the analysis of data from multiple sources. While several integration models exist for asset management, these only cover select areas of asset management. This research presents a novel conceptual data warehousing data model that integrates the numerous asset management data areas. The comprehensive ethnographic modelling methodology involved a diverse set of inputs (including data model patterns, standards, information system data models, and business process models) that described asset management data. Used as an integrated data source, the conceptual data model was verified by more than 20 experts in asset management and validated against four case studies. A large section of asset management data are stored in a relational format due to the maturity and pervasiveness of relational database management systems. Data warehousing offers the alternative approach of structuring data in a dimensional format, which suggests increased data retrieval speeds in addition to reducing analysis complexity for end users. To investigate the benefits of moving asset management data from a relational to a multidimensional format, this research presents an innovative relational vs. multidimensional model evaluation procedure. To undertake an equitable comparison, the compared multidimensional models are derived from an asset management relational model; as such, this research presents an original multidimensional modelling derivation methodology for asset management relational models. Multidimensional models were derived from the relational models in the asset management data exchange standard, MIMOSA OSA-EAI. The multidimensional and relational models were compared through a series of queries. It was discovered that multidimensional schemas reduced the data size and subsequently data insertion time, decreased the complexity of query conceptualisation, and improved query execution performance across a range of query types. To facilitate the quicker uptake of these data warehouse multidimensional models within organisations, an alternate modelling methodology was investigated. This research presents an innovative approach of using a case-based reasoning methodology for data warehouse schema design. Using unique case representation and indexing techniques, the system also uses a business vocabulary repository to augment case searching and adaptation. The system was validated through a case study in which multidimensional schema design speed and accuracy were measured. It was found that the case-based reasoning system provided a marginal benefit, with greater benefits gained when confronted with more difficult scenarios.
5

Sehat, Mahdis, and Flores René Pavez. "Customer Data Management". Thesis, KTH, Industriell ekonomi och organisation (Avd.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-109251.

Abstract:
As business complexity and the number of customers continue to grow, and customers evolve into multinational organisations that operate across borders, many companies face great challenges in the way they manage their customer data. In today's business, a single customer may have a relationship with several entities of an organisation, which means that customer data is collected through different channels. One customer may be described in different ways by each entity, which makes it difficult to obtain a unified view of the customer. In companies where there are several sources of data and the data is distributed to several systems, data environments become heterogeneous. In this state, customer data is often incomplete, inaccurate, and inconsistent throughout the company. This thesis aims to study how organisations with heterogeneous customer data sources implement the Master Data Management (MDM) concept to achieve and maintain high customer data quality. The purpose is to provide recommendations for how to achieve successful customer data management using MDM, based on existing literature related to the topic and an interview-based empirical study. Successful customer data management is more of an organisational issue than a technological one and requires a top-down approach in order to develop a common strategy for an organisation's customer data management. Proper central assessment and maintenance processes that can be adjusted according to the entities' needs must be in place. Responsibility for the maintenance of customer data should be delegated to several levels of an organisation in order to better manage customer data.
6

Scott, Mark. "Research data management". Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/374711/.

Abstract:
Scientists within the materials engineering community produce a wide variety of data, ranging from large 3D volume densitometry files (voxel) generated by microfocus computer tomography (μCT) to simple text files containing results from tensile tests. Increasingly they need to share this data as part of international collaborations. The design of a suitable database schema and the architecture of a flexible system that can cope with the varying information is a continuing problem in the management of heterogeneous data. We discuss the issues with managing such varying data, and present a model flexible enough to meet users’ diverse requirements. Metadata is held using a database and its design allows users to control their own data structures. Data is held in a file store which, in combination with the metadata, gives huge flexibility and means the model is limited only by the file system. Using examples from materials engineering and medicine we illustrate how the model can be applied. We will also discuss how this data model can be used to support an institutional document repository, showing how data can be published in a remote data repository at the same time as a publication is deposited in a document repository. Finally, we present educational material used to introduce the concepts of research data management. Educating students about the challenges and opportunities of data management is a key part of the solution and helps the researchers of the future to start to think about the relevant issues early on in their careers. We have compiled a set of case studies to show the similarities and differences in data between disciplines, and produced documentation for students containing the case studies and an introduction to the data lifecycle and other data management practices. Managing in-use data and metadata is just as important to users as published data. Appropriate education of users and a data staging repository with a flexible and extensible data model supports this without precluding the ability to publish the data at a later date.
7

Tran, Viet-Trung. "Scalable data-management systems for Big Data". PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00920432.

Abstract:
Big Data can be characterized by the three V's: Big Volume refers to the unprecedented growth in the amount of data; Big Velocity refers to the growth in the speed of moving data in and out of management systems; and Big Variety refers to the growth in the number of different data formats. Managing Big Data requires fundamental changes in the architecture of data management systems. Data storage systems should continue to be innovated in order to adapt to the growth of data. They need to be scalable while maintaining high performance for data accesses. This thesis focuses on building scalable data management systems for Big Data. Our first and second contributions address the challenge of providing efficient support for Big Volume of data in data-intensive high performance computing (HPC) environments. In particular, we address the shortcoming of existing approaches in handling atomic, non-contiguous I/O operations in a scalable fashion. We propose and implement a versioning-based mechanism that can be leveraged to offer isolation for non-contiguous I/O without the need to perform expensive synchronizations. In the context of parallel array processing in HPC, we introduce Pyramid, a large-scale, array-oriented storage system. It revisits the physical organization of data in distributed storage systems for scalable performance. Pyramid favors multidimensional-aware data chunking, which closely matches the access patterns generated by applications. Pyramid also favors distributed metadata management and versioning concurrency control to eliminate synchronization under concurrency. Our third contribution addresses Big Volume at the scale of geographically distributed environments. We consider BlobSeer, a distributed versioning-oriented data management service, and we propose BlobSeer-WAN, an extension of BlobSeer optimized for such geographically distributed environments. BlobSeer-WAN takes into account the latency hierarchy by favoring local metadata accesses. BlobSeer-WAN features asynchronous metadata replication and a vector-clock implementation for collision resolution. To cope with the Big Velocity characteristic of Big Data, our last contribution features DStore, an in-memory document-oriented store that scales vertically by leveraging the large memory capacity of multicore machines. DStore demonstrates fast and atomic complex transaction processing for data writes, while maintaining high-throughput read access. DStore follows a single-threaded execution model to execute update transactions sequentially, while relying on versioning concurrency control to enable a large number of simultaneous readers.
8

Schnyder, Martin. "Web 2.0 data management". Zürich: ETH, Eidgenössische Technische Hochschule Zürich, Department of Computer Science, Institute of Information Systems, Global Information Systems Group, 2008. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=403.

9

He, Ying (Surveying & Spatial Information Systems, Faculty of Engineering, UNSW). "Spatial data quality management". University of New South Wales, Surveying & Spatial Information Systems, 2008. http://handle.unsw.edu.au/1959.4/43323.

Abstract:
The applications of geographic information systems (GIS) in various areas have highlighted the importance of data quality. Data quality research has been given a priority by GIS academics for three decades. However, the outcomes of data quality research have not been sufficiently translated into practical applications. Users still need a GIS capable of storing, managing and manipulating data quality information. To fill this gap, this research aims to investigate how we can develop a tool that effectively and efficiently manages data quality information to aid data users to better understand and assess the quality of their GIS outputs. Specifically, this thesis aims: 1. To develop a framework for establishing a systematic linkage between data quality indicators and appropriate uncertainty models; 2. To propose an object-oriented data quality model for organising and documenting data quality information; 3. To create data quality schemas for defining and storing the contents of metadata databases; 4. To develop a new conceptual model of data quality management; 5. To develop and implement a prototype system for enhancing the capability of data quality management in commercial GIS. Based on reviews of error and uncertainty modelling in the literature, a conceptual framework has been developed to establish the systematic linkage between data quality elements and appropriate error and uncertainty models. To overcome the limitations identified in the review and satisfy a series of requirements for representing data quality, a new object-oriented data quality model has been proposed. It enables data quality information to be documented and stored in a multi-level structure and to be integrally linked with spatial data to allow access, processing and graphic visualisation. The conceptual model for data quality management is proposed where a data quality storage model, uncertainty models and visualisation methods are three basic components. This model establishes the processes involved when managing data quality, emphasising on the integration of uncertainty modelling and visualisation techniques. The above studies lay the theoretical foundations for the development of a prototype system with the ability to manage data quality. Object-oriented approach, database technology and programming technology have been integrated to design and implement the prototype system within the ESRI ArcGIS software. The object-oriented approach allows the prototype to be developed in a more flexible and easily maintained manner. The prototype allows users to browse and access data quality information at different levels. Moreover, a set of error and uncertainty models are embedded within the system. With the prototype, data quality elements can be extracted from the database and automatically linked with the appropriate error and uncertainty models, as well as with their implications in the form of simple maps. This function results in proposing a set of different uncertainty models for users to choose for assessing how uncertainty inherent in the data can affect their specific application. It will significantly increase the users' confidence in using data for a particular situation. To demonstrate the enhanced capability of the prototype, the system has been tested against the real data. The implementation has shown that the prototype can efficiently assist data users, especially non-expert users, to better understand data quality and utilise it in a more practical way. 
The methodologies and approaches for managing quality information presented in this thesis should serve as an impetus for supporting further research.
10

Voigt, Hannes. "Flexibility in Data Management". Doctoral thesis, Sächsische Landesbibliothek – Staats- und Universitätsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-136681.

Abstract:
With the ongoing expansion of information technology, new fields of application requiring data management emerge virtually every day. In our knowledge culture, increasing amounts of data and a work force organized in more creativity-oriented ways also radically change traditional fields of application and question established assumptions about data management. For instance, investigative analytics and agile software development move towards a very agile and flexible handling of data. As the primary facilitators of data management, database systems have to reflect and support these developments. However, traditional database management technology, in particular relational database systems, is built on assumptions of relatively stable application domains. The need to model all data up front in a prescriptive database schema earned relational database management systems the reputation among developers of being inflexible, dated, and cumbersome to work with. Nevertheless, relational systems still dominate the database market. They are a proven, standardized, and interoperable technology, well-known in IT departments with a work force of experienced and trained developers and administrators. This thesis aims at resolving the growing contradiction between the popularity and omnipresence of relational systems in companies and their increasingly bad reputation among developers. It adapts relational database technology towards more agility and flexibility. We envision a descriptive schema-comes-second relational database system, which is entity-oriented instead of schema-oriented; descriptive rather than prescriptive. The thesis provides four main contributions: (1) a flexible relational data model, which frees relational data management from having a prescriptive schema; (2) autonomous physical entity domains, which partition self-descriptive data according to their schema properties for better query performance; (3) a freely adjustable storage engine, which allows adapting the physical data layout to properties of the data and of the workload; and (4) a self-managed indexing infrastructure, which autonomously collects and adapts index information under the presence of dynamic workloads and evolving schemas. The flexible relational data model is the thesis' central contribution. It describes the functional appearance of the descriptive schema-comes-second relational database system. The other three contributions improve components in the architecture of database management systems to increase the query performance and manageability of descriptive schema-comes-second relational database systems. We are confident that these four contributions can help pave the way to a more flexible future for relational database management technology.

Books on the topic "Data management"

1

Cooper, Richard, and Jessie Kennedy, eds. Data Management. Data, Data Everywhere. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73390-4.

2

Purba, Sanjiv. Data Management. 3rd ed. Boca Raton: Auerbach Publications, 2021. http://dx.doi.org/10.1201/9780429114878.

3

Gronwald, Klaus-Dieter. Data Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68668-3.

4

Data management and data description. Aldershot, Hants, England: Ashgate, 1992.

5

Mahanti, Rupa. Data Governance and Data Management. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3583-0.

6

Data management and data description. Aldershot, Hants, England: Ashgate, 1992.

7

Rondel, R. K., S. A. Varley, and C. F. Webb, eds. Clinical data management. Chichester [England]: Wiley, 1993.

8

Web data management. New York: Cambridge University Press, 2011.

9

Mamoulis, Nikos. Spatial Data Management. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-01884-8.

10

Golab, Lukasz, and M. Tamer Özsu. Data Stream Management. Cham: Springer International Publishing, 2010. http://dx.doi.org/10.1007/978-3-031-01837-4.


Book chapters on the topic "Data management"

1

Bingham, John. "Data Management". In Data Processing, 157–80. London: Macmillan Education UK, 1989. http://dx.doi.org/10.1007/978-1-349-19938-9_12.

2

Swart, William. "Restaurant Management". In Data Analytics, 51–67. Boca Raton, FL: Auerbach Publications (CRC Press/Taylor & Francis Group), 2019. http://dx.doi.org/10.1201/9781315267555-5.

3

Hallstrom, Elyse. "Inventory Management". In Data Analytics, 91–104. Boca Raton, FL: Auerbach Publications (CRC Press/Taylor & Francis Group), 2019. http://dx.doi.org/10.1201/9781315267555-7.

4

Lee, T. Y., Michael Minor, and Lionel D. Edwards. "Data Management". In Principles and Practice of Pharmaceutical Medicine, 368–78. Oxford, UK: Wiley-Blackwell, 2010. http://dx.doi.org/10.1002/9781444325263.ch29.

5

Andrews, Joshua Kelly, Karen M. Cheek, Matthew P. Jennings, David W. Rogers, and Vincent M. Walden. "Data Management". In Litigation Services Handbook, 1–33. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781119204794.ch14.

6

Alagić, Suad. "Data Management". In Software Engineering: Specification, Implementation, Verification, 113–37. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61518-9_5.

7

Muenchen, Robert A. "Data Management". In Statistics and Computing, 219–373. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-0685-3_10.

8

Bingham, John, and Garth Davies. "Data Management". In Systems Analysis, 212–35. London: Macmillan Education UK, 1992. http://dx.doi.org/10.1007/978-1-349-12833-4_18.

9

Müllner, Marcus. "Data Management". In Erfolgreich wissenschaftlich Arbeiten in der Klinik, 135–38. Vienna: Springer Vienna, 2002. http://dx.doi.org/10.1007/978-3-7091-3755-0_21.

10

Muenchen, Robert A. "Data Management". In Statistics and Computing, 147–224. New York, NY: Springer New York, 2008. http://dx.doi.org/10.1007/978-0-387-09418-2_14.


Conference papers on the topic "Data management"

1

"Data Management". In CLADE 2005: Proceedings of the Challenges of Large Applications in Distributed Environments, 2005. IEEE, 2005. http://dx.doi.org/10.1109/clade.2005.1520894.

2

Meseck, Reed M. "Data management". In the 2001 ACM SIGMOD international conference. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/375663.375745.

3

Grillenberger, Andreas. "Big data and data management". In the tenth annual conference. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2632320.2632325.

4

Dewilde, P., and J. Annevelink. "VLSI Data-Management". In Twelfth European Solid-State Circuits Conference. IEEE, 1986. http://dx.doi.org/10.1109/esscirc.1986.5468402.

5

"Data Management II". In HPDC-14: Proceedings of the 14th IEEE International Symposium on High Performance Distributed Computing, 2005. IEEE, 2005. http://dx.doi.org/10.1109/hpdc.2005.1520951.

6

"Data Management II". In CLADE 2005: Proceedings of the Challenges of Large Applications in Distributed Environments, 2005. IEEE, 2005. http://dx.doi.org/10.1109/clade.2005.1520910.

7

Indrawan, Maria. "Grid data management". In the 12th International Conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1967486.1967491.

8

Drakeley, Brian, Svein Omdal, and Sigurd Moe. "Subsea Data Management". In Offshore Technology Conference. Offshore Technology Conference, 2007. http://dx.doi.org/10.4043/18744-ms.

9

Lu, Hua, and Muhammad Aamir Cheema. "Indoor data management". In 2016 IEEE 32nd International Conference on Data Engineering (ICDE). IEEE, 2016. http://dx.doi.org/10.1109/icde.2016.7498358.

10

Cafarella, Michael J., and Alon Y. Halevy. "Web data management". In the 2011 international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1989323.1989452.


Reports of organizations on the topic "Data management"

1

Augustine, Victor. Data management. ResearchHub Technologies, Inc., April 2022. http://dx.doi.org/10.55277/researchhub.nm21xcav.

2

Rambo, Neil. Research Data Management. New York: Ithaka S+R, October 2015. http://dx.doi.org/10.18665/sr.274643.

3

García-Espinosa, J., and C. Soriano. Data management plan. Scipedia, 2021. http://dx.doi.org/10.23967/prodphd.2021.9.003.

Abstract:
This document presents deliverable D8.1 – the Data Management Plan (DMP) of work package 8 of the prodPhD project. It presents the plan for the management, generation, collection, security, preservation, and sharing of data generated through the prodPhD project. The DMP is a key element for organizing the project's data. It provides an analysis of the data that will be collected, processed, and published by the prodPhD consortium. The project embraces the initiatives of the European Commission to promote open access to research data, aiming to improve and maximize access to and reuse of research data generated by Horizon 2020 projects. In this sense, prodPhD will adhere to the Open Research Data Pilot (ORD Pilot) fostered by the European Commission, and this DMP will be developed following the standards of data storage, access, and management. The plan details what data will be generated through the project, whether and how it will be made accessible for verification and reuse, and how it will be curated and preserved. In this context, the term data applies to the information generated during the different experimental campaigns carried out in the project, and specifically to the data, including associated metadata, used to validate the computational models and the technical solutions developed in the project. This document is the first version of the DMP and may be updated throughout the project if significant changes (new data, changes in consortium policies, changes in consortium composition, etc.) arise.
4

Paglialonga, Lisa, and Carsten Schirnick. Data management plan. OceanNETs, June 2022. http://dx.doi.org/10.3289/oceannets_d8.1.

Abstract:
This is the data management plan for the research project OceanNETs. It compiles OceanNETs research data output and describes the data handling during and after the project's duration, with the aim of making OceanNETs research data FAIR – sustainably available for the scientific community. This data management plan is a living document; it will be continuously developed in close cooperation with the consortium members throughout the project duration.
5

Bishop, Bradley Wade. Data from Data Management Plan Compliance. University of Tennessee, Knoxville Libraries, January 2020. http://dx.doi.org/10.7290/pebuwhcq7l.

6

Fermi Research Alliance. Simons Foundation Data Management. Office of Scientific and Technical Information (OSTI), January 2017. http://dx.doi.org/10.2172/1568825.

7

Novellino, Antonio, George Petihakis, and Joaquin Tintore. DMP: Data Management Plan. EuroSea, May 2020. http://dx.doi.org/10.3289/eurosea_d3.1.

8

McCullouch, Bob. Construction Data Management 2000. West Lafayette, IN: Purdue University, 2000. http://dx.doi.org/10.5703/1288284313164.

9

Paglialonga, Lisa, and Carsten Schirnick. OceanNETs Data Management Plan. OceanNETs, December 2020. http://dx.doi.org/10.3289/oceannets_dmp_v1.

Abstract:
This is the data management plan for the research project OceanNETs. It compiles OceanNETs research data output and describes the data handling during and after the project's duration, with the aim of making OceanNETs research data FAIR – sustainably available for the scientific community. This data management plan is a living document; it will be continuously developed in close cooperation with the consortium members throughout the project duration.
10

Soriano, C., R. Rossi, and Q. Ayoul-Guilmard. D8.1 Data Management Plan. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.019.

Abstract:
The ExaQUte project participates in the Pilot on Open Research Data launched by the European Commission (EC) along with the H2020 program. This pilot is part of the Open Access to Scientific Publications and Research Data program in H2020. The goal of the program is to foster access to research data generated in H2020 projects. The use of a Data Management Plan (DMP) is required for all projects participating in the Open Research Data Pilot, in which they will specify what data will be kept for the longer term. The underpinning idea is that Horizon 2020 beneficiaries have to make their research data findable, accessible, interoperable and re-usable (FAIR), to ensure it is soundly managed.