A selection of scholarly literature on the topic "Flexible Metadata Format"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Flexible Metadata Format".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work is formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the work's metadata.

Journal articles on the topic "Flexible Metadata Format"

1

Ma, Li Ming, Zhi Wu Su, and San Xing Cao. "A Study of Fragmentation and Reorganization Mechanism in Video Production and Distribution Process". Applied Mechanics and Materials 411-414 (September 2013): 974–77. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.974.

Annotation:
Fragmentation is one of the most important trends across all media platforms, with the aim of reorganizing and disseminating information in a more personalized way according to the "Long Tail Demand" theory. This paper proposes a fragment-reorganization scheme that treats sequential video, UGC video, graphics, text, and social media information as equivalent information sources and uses a unified metadata format to describe media resources along multiple dimensions. Virtual segmentation of video allows more flexible control of granularity and, combined with interactive scripts, achieves rich content presentation and interactive behavior.
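
A unified record of this kind is easy to picture as a small tagged structure. The sketch below is illustrative only; all field names are invented here, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class FragmentMetadata:
    """Hypothetical unified record describing one media fragment."""
    source_type: str              # e.g. "sequential_video", "ugc_video", "text"
    source_uri: str               # locator of the parent resource
    start: float | None = None    # virtual segment start in seconds (video only)
    end: float | None = None      # virtual segment end in seconds (video only)
    topics: list[str] = field(default_factory=list)  # descriptive dimensions

# A virtual segment references a time span instead of physically cutting the
# file, so granularity can be changed by editing metadata alone.
clip = FragmentMetadata("sequential_video", "cdn://news/ep42.mp4", 120.0, 145.5, ["sports"])
```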
2

Dunnington, Dewey W., and Ian S. Spooner. "Using a linked table-based structure to encode self-describing multiparameter spatiotemporal data". FACETS 3, no. 1 (October 1, 2018): 326–37. http://dx.doi.org/10.1139/facets-2017-0026.

Annotation:
Multiparameter data with both spatial and temporal components are critical to advancing the state of environmental science. These data and data collected in the future are most useful when compared with each other and analyzed together, which is often inhibited by inconsistent data formats and a lack of structured documentation provided by researchers and (or) data repositories. In this paper we describe a linked table-based structure that encodes multiparameter spatiotemporal data and their documentation that is both flexible (able to store a wide variety of data sets) and usable (can easily be viewed, edited, and converted to plottable formats). The format is a collection of five tables (Data, Locations, Params, Data Sets, and Columns), on which restrictions are placed to ensure data are represented consistently from multiple sources. These tables can be stored in a variety of ways including spreadsheet files, comma-separated value (CSV) files, JavaScript object notation (JSON) files, databases, or objects in a software environment such as R or Python. A toolkit for users of R statistical software was also developed to facilitate converting data to and from the data format. We have used this format to combine data from multiple sources with minimal metadata loss and to effectively archive and communicate the results of spatiotemporal studies. We believe that this format and associated discussion of data and data storage will facilitate increased synergies between past, present, and future data sets in the environmental science community.
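
To make the five-table idea concrete, here is a minimal sketch of loading and joining such tables in Python; the file, column, and key names are illustrative assumptions, not the normative specification from the paper:

```python
import pandas as pd

# The format is five linked tables, each storable as a plain CSV file.
data      = pd.read_csv("data.csv")       # one measured value per row
locations = pd.read_csv("locations.csv")  # the spatial component
params    = pd.read_csv("params.csv")     # parameter definitions and units
datasets  = pd.read_csv("data_sets.csv")  # provenance of each data source
columns   = pd.read_csv("columns.csv")    # documentation of every column

# Because the tables are linked by keys, a plottable "wide" view is a merge away:
plottable = (data.merge(params, on=["dataset", "param"])
                 .merge(locations, on=["dataset", "location"]))
```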
3

Taubert, Jan, Klaus Peter Sieren, Matthew Hindle, Berend Hoekman, Rainer Winnenburg, Stephan Philippi, Chris Rawlings, and Jacob Köhler. "The OXL format for the exchange of integrated datasets". Journal of Integrative Bioinformatics 4, no. 3 (December 1, 2007): 27–40. http://dx.doi.org/10.1515/jib-2007-62.

Annotation:
A prerequisite for systems biology is the integration and analysis of heterogeneous experimental data stored in hundreds of life-science databases and millions of scientific publications. Several standardised formats for the exchange of specific kinds of biological information exist. Such exchange languages facilitate the integration process; however, they are not designed to transport integrated datasets. A format for exchanging integrated datasets needs to i) cover data from a broad range of application domains, ii) be flexible and extensible to combine many different complex data structures, iii) include metadata and semantic definitions, iv) include inferred information, v) identify the original data source for integrated entities and vi) transport large integrated datasets. Unfortunately, none of the exchange formats from the biological domain (e.g. BioPAX, MAGE-ML, PSI-MI, SBML) or the generic approaches (RDF, OWL) fulfil these requirements in a systematic way. We present OXL, a format for the exchange of integrated data sets, and detail how the aforementioned requirements are met within the OXL format. OXL is the native format within the data integration and text mining system ONDEX. Although OXL was developed with the ONDEX system in mind, it also has the potential to be used in several other biological and non-biological applications described in this paper. Availability: The OXL format is an integral part of the ONDEX system, which is freely available under the GPL at http://ondex.sourceforge.net/. Sample files can be found at http://prdownloads.sourceforge.net/ondex/ and the XML Schema at http://ondex.svn.sf.net/viewvc/*checkout*/ondex/trunk/backend/data/xml/ondex.xsd.
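
Since OXL is an XML dialect, reading it needs no special tooling. A minimal sketch follows; the element names reflect ONDEX's concept/relation graph model but are assumptions here; consult the ondex.xsd schema linked above for the authoritative structure:

```python
import xml.etree.ElementTree as ET

# Walk an OXL-style XML export of an integrated dataset (hypothetical tags).
tree = ET.parse("dataset.oxl")
root = tree.getroot()
for concept in root.iter("concept"):
    # Requirement v) above: each integrated entity carries its original source.
    print(concept.get("id"), concept.get("elementOf"))
for relation in root.iter("relation"):
    print(relation.get("fromConcept"), "->", relation.get("toConcept"))
```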
4

Alter, George. "Reflections on the Intermediate Data Structure (IDS)". Historical Life Course Studies 10 (March 31, 2021): 71–75. http://dx.doi.org/10.51964/hlcs9570.

Annotation:
The Intermediate Data Structure (IDS) encourages sharing historical life course data by storing data in a common format. To encompass the complexity of life histories, IDS relies on data structures that are unfamiliar to most social scientists. This article examines four features of IDS that make it flexible and expandable: the Entity-Attribute-Value model, the relational database model, embedded metadata, and the Chronicle file. I also consider IDS from the perspective of current discussions about sharing data across scientific domains. We can find parallels to IDS in other fields that may lead to future innovations.
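
The Entity-Attribute-Value model at the heart of IDS is easy to demonstrate: every observation becomes one (entity, attribute, value) row, so new attributes never require a schema change. A minimal sketch, with table and column names invented for illustration rather than taken from the IDS specification:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE individual_attribute (
                   individual_id INTEGER,
                   attribute     TEXT,
                   value         TEXT,
                   date          TEXT)""")
rows = [(1, "BIRTH_DATE", "1852-03-01", "1852-03-01"),
        (1, "OCCUPATION", "weaver",     "1880-06-15"),
        (1, "RESIDENCE",  "Antwerp",    "1880-06-15")]
con.executemany("INSERT INTO individual_attribute VALUES (?, ?, ?, ?)", rows)

# A life history is reconstructed by selecting every row for one entity:
for row in con.execute("SELECT * FROM individual_attribute WHERE individual_id = 1"):
    print(row)
```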
5

Lazarus, David B., Johan Renaudie, Dorina Lenz, Patrick Diver, and Jens Klump. "Raritas: a program for counting high diversity categorical data with highly unequal abundances". PeerJ 6 (October 9, 2018): e5453. http://dx.doi.org/10.7717/peerj.5453.

Annotation:
Data on the occurrences of many types of difficult-to-identify objects are often still acquired by human observation, for example, in biodiversity and paleontologic research. Existing computer counting programs used to record such data have various limitations, including inflexibility and cost. We describe a new open-source program for this purpose—Raritas. Raritas is written in Python and can be run as a standalone app for recent versions of either MacOS or Windows, or from the command line as easily customized source code. The program explicitly supports a rare-category count mode, which makes it easier to collect quantitative data on rare categories, for example, rare species that are important in biodiversity surveys. Lastly, we describe the file format used by Raritas and propose it as a standard for storing geologic biodiversity data: the ‘Stratigraphic occurrence data’ file format combines extensive sample metadata with a flexible structure for recording occurrence data of species or other categories in a series of samples.
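
The annotation suggests a file that couples a sample-metadata header with an occurrence table. Purely as a sketch of that idea (the '#'-prefixed header convention and tab-separated body are assumptions, not the published layout), a reader might look like this:

```python
def read_sod(path):
    """Read a hypothetical SOD-style file: metadata header, then a table."""
    metadata, table = {}, []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#"):            # metadata line: "# key: value"
                key, _, value = line[1:].partition(":")
                metadata[key.strip()] = value.strip()
            elif line:
                table.append(line.split("\t"))  # occurrence counts per sample
    return metadata, table
```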
6

Sauer, Simon, and Anke Lüdeling. "Flexible multi-layer spoken dialogue corpora". Compilation, transcription, markup and annotation of spoken corpora 21, no. 3 (September 19, 2016): 419–38. http://dx.doi.org/10.1075/ijcl.21.3.06sau.

Annotation:
This paper describes the construction of deeply annotated spoken dialogue corpora. To ensure a maximum of flexibility — in the degree of normalization, the types and formats of annotations, the possibilities for modifying and extending the corpus, or the use for research questions not originally anticipated — we propose a flexible multi-layer standoff architecture. We also take a closer look at the interoperability of tools and formats compatible with such an architecture. Free access to the corpus data through corpus queries, visualizations, and downloads — including documentation, metadata, and the original recordings — enables transparency, verifiability, and reproducibility of every step of interpretation throughout corpus construction and of any research findings obtained from this data.
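
The core of a multi-layer standoff architecture is that the base transcription is never modified; every annotation layer points into it by span, so layers can be added, corrected, or removed independently. A minimal sketch (layer names and tags invented for illustration):

```python
tokens = ["so", "ähm", "das", "is", "gut"]   # base transcription, never edited

layers = {
    "normalization": [(1, 2, ""), (3, 4, "ist")],      # (start, end, normalized form)
    "pos":           [(0, 1, "ADV"), (4, 5, "ADJD")],  # (start, end, POS tag)
}

def spans(layer):
    """Resolve a standoff layer back to the token spans it annotates."""
    return [(" ".join(tokens[s:e]), val) for s, e, val in layers[layer]]

print(spans("normalization"))  # [('ähm', ''), ('is', 'ist')]
```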
7

Bhat, Talapady. "Rule and Root-based Metadata-Ecosystem for Structural Bioinformatics & Facebook". Acta Crystallographica Section A Foundations and Advances 70, a1 (August 5, 2014): C496. http://dx.doi.org/10.1107/s2053273314095035.

Annotation:
Despite widespread efforts to develop flexible formats such as PDB, mmCIF, and CIF to store and exchange data, the lack of best-practice metadata poses major challenges. Readily adoptable methods with demonstrated usability across multiple solutions to create on-demand metadata are critical for the effective archiving and exchange of data in a user-centric fashion. It is important that there exist a metadata ecosystem where the metadata of all structural and biological research evolve synchronously. Previously we described (Chem-BLAST, http://xpdb.nist.gov/chemblast/pdb.pl) a new 'root'-based concept used in language development (Latin & Sanskrit) to simplify the selection or creation of metadata terms for millions of chemical structures from the PDB and PubChem. Subsequently we extended it to text-based data on cell-image data (BMC, doi:10.1186/1471-2105-12-487). Here we describe a further extension of this concept, creating roots and rules to define an ecosystem for composing new metadata or modifying existing metadata with demonstrated interoperability. A major focus of the rules is to ensure that the metadata terms are self-explaining (intuitive), highly reused to describe many experiments, and usable in a federated environment to construct new use cases. We illustrate the use of this concept to compose semantic terminology for a wide range of disciplines, from materials science to biology. Examples of the use of such metadata in demonstrated solutions for describing cell-image data will also be presented. I will present ideas and examples to foster discussion on a metadata architecture that a) is independent of formats, b) is better suited to a federated environment, and c) could readily be used to build components such as Resource Description Framework (RDF) representations and web services for the Semantic Web.
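
The root-and-rule idea can be caricatured in a few lines: terms are composed from a small registered vocabulary of roots instead of being coined ad hoc, which keeps them self-explaining and reusable. The roots and the composition rule below are invented for illustration:

```python
ROOTS = {"cell", "image", "assay", "material", "structure"}

def compose_term(*roots):
    """Compose a metadata term from registered roots only."""
    unknown = set(roots) - ROOTS
    if unknown:
        raise ValueError(f"unregistered roots: {unknown}")
    return "-".join(roots)        # e.g. "cell-image" remains self-explaining

term = compose_term("cell", "image")
```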
8

Wickett, Karen. "A logic-based framework for collection/item metadata relationships". Journal of Documentation 74, no. 6 (October 8, 2018): 1175–89. http://dx.doi.org/10.1108/jd-01-2018-0017.

Annotation:
Purpose: The purpose of this paper is to present a framework for the articulation of relationships between collection-level and item-level metadata as logical inference rules. The framework is intended to allow the systematic generation of relevant propagation rules, to enable the assessment of those rules for particular contexts, and to support the translation of rules into algorithmic processes.
Design/methodology/approach: The framework was developed using first-order predicate logic. Relationships between collection-level and item-level description are expressed as propagation rules: inference rules where the properties of one entity entail conclusions about another entity in virtue of a particular relationship those individuals bear to each other. Propagation rules for reasoning between the collection and item level are grouped together in the framework according to their logical form, as determined by the nature of the propagation action and the attributes involved in the rule.
Findings: The primary findings are the analysis of relationships between collection-level and item-level metadata, and the framework of categories of propagation rules. In order to fully develop the framework, the paper includes an analysis of colloquial metadata records and of the collection-membership relation, which provides a general method for translating metadata records into formal knowledge representation languages.
Originality/value: The method for formalizing metadata records described in the paper represents significant progress in the application of knowledge representation techniques to problems of metadata creation and management, providing a flexible technique for encoding colloquial metadata as a set of statements in first-order logic. The framework of rules for collection/item metadata relationships has a range of potential applications for the enhancement of metadata systems and vocabularies.
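
One propagation rule of the kind the framework classifies can be written out as first-order logic and mirrored in code. The attribute chosen (language) is an illustrative assumption; the rule says that if a collection has an attribute value and an item is a member of that collection, the item inherits it.

```python
# Logical form: forall c, i, v:
#   HasAttr(c, "language", v) AND MemberOf(i, c)  ->  HasAttr(i, "language", v)
def propagate(collection, items, attribute):
    """Copy a collection-level attribute down to items that lack it."""
    value = collection.get(attribute)
    for item in items:
        item.setdefault(attribute, value)
    return items

collection = {"title": "WWI letters", "language": "en"}
items = [{"title": "Letter, 3 May 1915"}, {"title": "Letter, 9 May 1915"}]
propagate(collection, items, "language")   # both items now carry language "en"
```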
9

Heikenfeld, Max, Peter J. Marinescu, Matthew Christensen, Duncan Watson-Parris, Fabian Senf, Susan C. van den Heever, and Philip Stier. "tobac 1.2: towards a flexible framework for tracking and analysis of clouds in diverse datasets". Geoscientific Model Development 12, no. 11 (October 30, 2019): 4551–70. http://dx.doi.org/10.5194/gmd-12-4551-2019.

Annotation:
We introduce tobac (Tracking and Object-Based Analysis of Clouds), a newly developed framework for tracking and analysing individual clouds in different types of datasets, such as cloud-resolving model simulations and geostationary satellite retrievals. The software has been designed to be used flexibly with any two- or three-dimensional time-varying input. The application of high-level data formats, such as Iris cubes or xarray arrays, for input and output allows for convenient use of metadata in the tracking analysis and visualisation. Comprehensive analysis routines are provided to derive properties like cloud lifetimes or statistics of cloud properties, along with tools to visualise the results in a convenient way. The application of tobac is presented in two examples. We first track and analyse scattered deep convective cells based on maximum vertical velocity and the three-dimensional condensate mixing ratio field in cloud-resolving model simulations. We also investigate the performance of the tracking algorithm for different choices of time resolution of the model output. In the second application, we show how the framework can be used to effectively combine information from two different types of datasets by simultaneously tracking convective clouds in model simulations and in geostationary satellite images based on outgoing longwave radiation. The tobac framework provides a flexible new way to include the evolution of the characteristics of individual clouds in a range of important analyses like model intercomparison studies or model assessment based on observational data.
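
The point about high-level data formats is that coordinate and unit metadata travel with the array into the tracking analysis. A minimal sketch of preparing such an input with xarray (file and variable names are assumptions):

```python
import xarray as xr

ds = xr.open_dataset("model_output.nc")   # NetCDF model output or satellite data
w = ds["vertical_velocity"]               # a DataArray keeps dims, coords, attrs
print(w.dims, w.attrs.get("units"))       # metadata stays available downstream
```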
10

Rasmussen, Karsten Boye. "As open as possible and as closed as needed". IASSIST Quarterly 43, no. 3 (September 26, 2019): 1–2. http://dx.doi.org/10.29173/iq965.

Annotation:
Welcome to the third issue of volume 43 of the IASSIST Quarterly (IQ 43:3, 2019). Yes, we are open! Open data is good. Just a click away. Downloadable 24/7 for everybody. An open government would make the decision-makers' data open to the public and the opposition. As an example, communal data on bicycle paths could be open, so more navigation apps would flourish and embed the information in maps that could suggest safer bicycle routes. However, as demonstrated by all three articles in this IQ issue, research data very often include information that requires restrictions on data access. The second paper states that data should be 'as open as possible and as closed as needed'. This phrase originates from a European Union Horizon 2020 project called the Open Research Data Pilot, in 'Guidelines on FAIR Data Management in Horizon 2020' (July 2016). Some data need to be closed and not freely available. So once more it shows that a simple solution of total openness, one-size-fits-all, is not possible. We have to deal with more complicated schemes depending on the content of data. Luckily, experienced people at data institutions are capable of producing adapted solutions. The first article, 'Restricting data's use: A spectrum of concerns in need of flexible approaches', describes how data producers have legitimate needs for restricting data access for users. This understanding is quite important, as some users might have an automatic objection to all restrictions on the use of data. The authors, Dharma Akmon and Susan Jekielek, are at ICPSR at the University of Michigan. ICPSR has been the U.S. research archive since 1962, so they have much practice in long-term storage of digital information. From a short-term perspective you might think that their primary task is to get the data in use, and that they would thus be opposed to any kind of access restrictions. However, both producers and custodians of data are very well aware of their responsibility for determining restrictions and access. The caveat concerns the potential harm through disclosure, often exemplified by personal data of identifiable individuals. The article explains how dissemination options differ in where data are accessed and what is required for access. If you are new to IASSIST, the article also gives an excellent short introduction to ICPSR and how this institution guards itself and its users against the hazards of data sharing. In the second article, 'Managing data in cross-institutional projects', the reader gains insight into how FAIR data usage benefits a cross-institutional project. The starting point for the authors (Zaza Nadja Lee Hansen, Filip Kruse, and Jesper Boserup Thestrup) is the FAIR principles: data should be findable, accessible, interoperable, and re-usable. The authors state that this implies that the data should be as open as possible. However, as expressed in the ICPSR article above, data should at the same time be as closed as needed. Within the EU, the mention of the GDPR (General Data Protection Regulation) will always catch the attention of those with financial responsibility at any institution, because data breaches can now be fined very severely. The authors share their experience with implementing the FAIR principles with data from several cross-institutional projects. The key is to ensure from the beginning that there is agreement on following the specific guidelines, standards, and formats throughout the project.
The issues to agree on are, among other things, storage and sharing of data and metadata, responsibilities for updating data, and deciding which data format to use. The benefits of FAIR data usage are summarized, and the article also describes the cross-institutional projects. The authors work as a senior consultant/project manager at the Danish National Archives, senior advisor at The Royal Danish Library, and communications officer at The Royal Danish Library. The cross-institutional projects mentioned here stretch from Kierkegaard's writings to wind energy. While this issue started by mentioning that ICPSR was founded in 1962, we end with a more recent addition to the archive world, established at Qatar University's Social and Economic Survey Research Institute (SESRI) in 2017. The paper 'Data archiving for dissemination within a Gulf nation' addresses the experience of this new institution in an environment of cultural and political sensitivity. Viewed positively, the benefits keep expanding. It starts with archive staff gaining experience with policies for data selection, restrictions, security, and metadata. The benefits then extend to the broader group of research staff, where awareness and improvements relate to issues such as the design, collection, and documentation of studies. Furthermore, data sharing can be seen as expanding in the Middle East and North Africa region, generating a general improvement in the relevance and credibility of statistics produced in the region. Again, the FAIR principles of findable, accessible, interoperable, and re-usable data are gaining momentum and being adopted by government offices and data collection agencies. In the article, the story of SESRI at Qatar University is told ahead of sections concerning data sharing culture and challenges, as well as issues of staff recruitment, architecture, and workflow. Many of the observations and considerations in the article will be of value to staff at both long-established and newly founded archives. The authors of the paper are Brian W. Mandikiana, senior researcher and lead archivist at the Qatar University archive, and Lois Timms-Ferrara and Marc Maynard, CEO and director of technology at Data Independence (Connecticut, USA). Submissions of papers for the IASSIST Quarterly are always very welcome. We welcome input from IASSIST conferences or other conferences and workshops, from local presentations, or papers written especially for the IQ. When you are preparing such a presentation, give a thought to turning your one-time presentation into a lasting contribution; doing so after the event also gives you the opportunity to improve your work based on feedback. We encourage you to log in or create an author login at https://www.iassistquarterly.com (our Open Journal System application). We permit authors 'deep links' into the IQ as well as deposition of the paper in their local repository. Chairing a conference session with the purpose of aggregating and integrating papers for a special issue of the IQ is also much appreciated, as the information reaches many more people than the limited number of session participants and will be readily available on the IASSIST Quarterly website at https://www.iassistquarterly.com. Authors are very welcome to take a look at the instructions and layout: https://www.iassistquarterly.com/index.php/iassist/about/submissions. Authors can also contact me directly via e-mail: kbr@sam.sdu.dk.
Should you be interested in compiling a special issue for the IQ as guest editor(s), I will also be delighted to hear from you. Karsten Boye Rasmussen - September 2019

Dissertations on the topic "Flexible Metadata Format"

1

Dubaj, Ondrej. "Systém pro správu výsledků testů doplňující nástroj tmt". Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445546.

Annotation:
This diploma thesis deals with the area of software testing, more precisely with the management of test results. The aim of this work is to find, set up, and extend a system that supplies the functionality missing from the TMT tool, which is going to replace the Nitrate tool as Red Hat's test management system. The work begins with a basic introduction to Nitrate, TMT, and other technologies used in Red Hat. It then presents the current state of the test infrastructure and the user requirements collected for a new system for managing test results. Subsequently, the ReportPortal tool is introduced as such a system and its missing functionality is identified. The rest of the work is devoted to setting up the system and implementing the missing functionality, along with the infrastructure needed to import test results into ReportPortal. The work describes how the system was deployed for use and the feedback received from users. The deployed system is evaluated and possible further improvements are discussed.

Book chapters on the topic "Flexible Metadata Format"

1

Matsumoto, Toshiko, Mitsuharu Oba, and Takashi Onoyama. "Sample-Based Collection and Adjustment Algorithm for Metadata Extraction Parameter of Flexible Format Document". In Artificial Intelligence and Soft Computing, 566–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13232-2_69.

2

Mattmann, Chris A., Andrew Hart, Luca Cinquini, Joseph Lazio, Shakeh Khudikyan, Dayton Jones, Robert Preston, et al. "Scalable Data Mining, Archiving, and Big Data Management for the Next Generation Astronomical Telescopes". In Big Data, 2199–225. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9840-6.ch100.

Annotation:
Big data as a paradigm focuses on data volume, on velocity, and on the number and complexity of various data formats and metadata, the information that describes other data types. This is nowhere better seen than in the development of the software to support next-generation astronomical instruments, including the MeerKAT/KAT-7 Square Kilometre Array (SKA) precursor in South Africa, the Low Frequency Array (LOFAR) in Europe, two instruments led in part by the U.S. National Radio Astronomy Observatory (NRAO) with its Expanded Very Large Array (EVLA) in Socorro, NM, and Atacama Large Millimeter Array (ALMA) in Chile, and other instruments such as the Large Synoptic Survey Telescope (LSST) to be built in northern Chile. This chapter highlights the big data challenges in constructing data management systems for these astronomical instruments, specifically the challenges of integrating legacy science codes, handling data movement and triage, building flexible science data portals and user interfaces, allowing for flexible technology deployment scenarios, and automatically and rapidly mitigating the differences in science data formats and metadata models. The authors discuss these challenges and then suggest open source solutions to them based on software from the Apache Software Foundation, including Apache Object-Oriented Data Technology (OODT), Tika, and Solr. The authors have leveraged these solutions to effectively and expeditiously build many precursor and operational software systems to handle data from these astronomical instruments and to prepare for the coming data deluge from those not yet constructed. Their solutions are not specific to the astronomical domain and are already applicable to a number of science domains, including Earth science, planetary science, and biomedicine.
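
Of the Apache components named above, Tika illustrates the format-mitigation point most directly: it detects a file's type and extracts text and metadata through one uniform interface. A minimal sketch using the tika-python bindings (the file name is an assumption, and a Tika server is started behind the scenes on first use):

```python
from tika import parser

parsed = parser.from_file("observation_block.fits")
print(parsed["metadata"])                 # format-specific metadata, one dict shape
print((parsed["content"] or "")[:200])    # extracted text payload, if any
```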