Academic literature on the topic 'IE. Data and metadata structures'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'IE. Data and metadata structures.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "IE. Data and metadata structures"

1

Fong, Joseph, Qing Li, and Shi-Ming Huang. "Universal Data Warehousing Based on a Meta-Data Modeling Approach." International Journal of Cooperative Information Systems 12, no. 03 (September 2003): 325–63. http://dx.doi.org/10.1142/s0218843003000772.

Abstract:
A data warehouse contains a vast amount of data to support the complex queries of various Decision Support Systems (DSSs). It needs to store materialized views of data, which must be available consistently and instantaneously. Using a frame metadata model, this paper presents an architecture for universal data warehousing across different data models. The frame metadata model represents the metadata of a data warehouse; it structures an application domain into classes and integrates the schemas of heterogeneous databases by capturing their semantics. A star schema is derived from user requirements based on the integrated schema and is catalogued in the metadata, which stores the schemas of the relational database (RDB) and the object-oriented database (OODB). Data materialization between RDB and OODB is achieved by unloading the source database into a sequential file and reloading it into the target database, through which an object-relational view can be defined so that users can obtain the same warehouse view in different data models simultaneously. We describe our procedures for building the relational view of the star schema with multidimensional SQL queries, and the object-oriented view of the data warehouse for Online Analytical Processing (OLAP) through method calls derived from the integrated schema. To validate our work, an application prototype system has been developed for a product sales data warehousing domain based on this approach.
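To make the star-schema machinery mentioned above concrete, here is a minimal, self-contained sketch (not the paper's prototype; all table and column names are invented) of a relational star schema queried with a grouped, 'multidimensional' SQL query, using Python's built-in sqlite3:

```python
import sqlite3

# Tiny product-sales star schema: two dimension tables and one fact table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_time    (time_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_sales  (product_id INTEGER REFERENCES dim_product,
                          time_id    INTEGER REFERENCES dim_time,
                          amount     REAL);
""")
cur.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO dim_time VALUES (?,?,?)", [(1, 2003, 1), (2, 2003, 2)])
cur.executemany("INSERT INTO fact_sales VALUES (?,?,?)",
                [(1, 1, 100.0), (1, 2, 150.0), (2, 1, 80.0)])

# A 'multidimensional' query: aggregate the fact table along two dimensions.
for row in cur.execute("""
    SELECT p.category, t.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON f.product_id = p.product_id
    JOIN dim_time t    ON f.time_id    = t.time_id
    GROUP BY p.category, t.year"""):
    print(row)  # ('Hardware', 2003, 330.0)
```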
2

Azram, Nur Adila, et al. "Laboratory Instruments’ Produced Scientific Data Standardization through the Use of Metadata." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 2146–51. http://dx.doi.org/10.17762/turcomat.v12i3.1157.

Abstract:
The amount of scientific data produced by various laboratory instruments is increasing these days. As different laboratory instruments hold different structures and formats of data, the heterogeneity of data structure and format has become a concern in data management and analysis. This paper offers a metadata structure for standardizing the scientific data produced by laboratory instruments so that they attain a common structure and format. The paper explains the methodology and the use of the proposed metadata structure before summarizing the implementation and the related result analysis. The proposed metadata structure extraction shows promising results based on the evaluation and validation conducted.
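As a rough illustration of the standardization idea (the field names below are assumptions for illustration, not the paper's proposed structure), a common metadata record in Python might look like this:

```python
from dataclasses import dataclass, asdict

@dataclass
class InstrumentMetadata:
    instrument: str
    sample_id: str
    timestamp: str        # ISO 8601
    measurement_type: str
    raw_format: str       # original vendor format, kept for provenance

def from_vendor_a(rec: dict) -> InstrumentMetadata:
    # Each instrument gets its own adapter that maps vendor-specific
    # keys onto the one shared structure.
    return InstrumentMetadata(
        instrument=rec["device"],
        sample_id=rec["sid"],
        timestamp=rec["acquired_at"],
        measurement_type=rec["mode"],
        raw_format="vendor-a-csv",
    )

print(asdict(from_vendor_a({"device": "HPLC-01", "sid": "S-42",
                            "acquired_at": "2021-04-10T09:30:00",
                            "mode": "chromatogram"})))
```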
3

Qin, Jian, Jeff Hemsley, and Sarah E. Bratt. "The structural shift and collaboration capacity in GenBank Networks: A longitudinal study." Quantitative Science Studies 3, no. 1 (2022): 174–93. http://dx.doi.org/10.1162/qss_a_00181.

Abstract:
Metadata in scientific data repositories such as GenBank contain links between data submissions and related publications. As a new data source for studying collaboration networks, metadata in data repositories compensate for the limitations of publication-based research on collaboration networks. This paper reports the findings from a GenBank metadata analytics project. We used network science methods to uncover the structures and dynamics of GenBank collaboration networks from 1992 to 2018. The longitudinality and large scale of this data collection allowed us to unravel the evolution history of collaboration networks and identify the trend of flattening network structures over time and the optimal assortative mixing range for enhancing collaboration capacity. By incorporating metadata from the data production stage with the publication stage, we uncovered new characteristics of collaboration networks as well as developed new metrics for assessing the effectiveness of enablers of collaboration—scientific and technical human capital, cyberinfrastructure, and science policy.
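A toy version of the network-construction step (not the authors' pipeline; the submission records are invented) can be sketched with the networkx library: submitters who co-occur on a record become linked, and assortative mixing can then be measured.

```python
from itertools import combinations
import networkx as nx

submissions = [  # each record lists the submitters named in its metadata
    ["Qin", "Hemsley"], ["Qin", "Bratt"], ["Hemsley", "Bratt", "Lee"],
]

G = nx.Graph()
for authors in submissions:
    for a, b in combinations(authors, 2):  # link every co-submitting pair
        G.add_edge(a, b)

print(G.number_of_nodes(), G.number_of_edges())
print(nx.degree_assortativity_coefficient(G))  # assortative mixing by degree
```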
4

Vanags, Mikus, and Rudite Cevere. "Type Safe Metadata Combining." Computer and Information Science 10, no. 2 (April 30, 2017): 97. http://dx.doi.org/10.5539/cis.v10n2p97.

Abstract:
Type safety is an important property of any type system. Modern programming languages support different mechanisms for working in a type-safe manner, e.g., properties, methods, events, attributes (annotations), and other structures. Some programming languages allow access to metadata: type information, type member information, and information about applied attributes. But none of the existing mainstream programming languages that support reflection provides a fully type-safe metadata combining mechanism built into the language. Combining metadata means combining a class member's metadata with data, type metadata, and constraints. Existing solutions provide no, or only a limited, type-safe metadata combining mechanism; they are complex and processed at runtime, which by definition is not built-in, type-safe metadata combining. The problem can be solved by introducing syntax and methods for type-safe metadata combining so that metadata can be processed at compile time in a fully type-safe way. Common metadata combining use cases are data abstraction layer creation and database querying.
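Python cannot do the compile-time combining the paper calls for, but typing.Annotated gives a rough flavor of the idea: a class member's data type travels together with metadata and constraints that tools can inspect. All names below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Annotated, get_args, get_type_hints

@dataclass(frozen=True)
class Column:   # metadata: which database column the member maps to
    name: str

@dataclass(frozen=True)
class MaxLen:   # a constraint carried alongside the type
    limit: int

@dataclass
class Customer:
    name: Annotated[str, Column("customer_name"), MaxLen(50)]
    age: Annotated[int, Column("customer_age")]

# A data-abstraction layer can read the type and its metadata together:
for field, hint in get_type_hints(Customer, include_extras=True).items():
    base_type, *metadata = get_args(hint)
    print(field, base_type, metadata)
```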
5

Foessel, Siegfried, and Heiko Sparenberg. "EN 17650 – The new standard for digital preservation of cinematographic works." Archiving Conference 2021, no. 1 (June 18, 2021): 43–46. http://dx.doi.org/10.2352/issn.2168-3204.2021.1.0.10.

Abstract:
EN 17650 is a proposed new European Standard for the digital preservation of cinematographic works. It allows content to be organized in a systematic way, in a so-called Cinema Preservation Package (CPP). The standard defines methods for storing content in physical and logical structures and describes the relationships and metadata for its components. The CPP uses existing XML schemas, in particular METS, EBUCore, and PREMIS, to store structural, descriptive, technical, and provenance metadata. METS XML files with their core metadata contain the physical and logical structures of the content, hash values and UUIDs to ensure data integrity, and links to external metadata files that enrich the content with additional information. The content itself is stored based on existing public and industry standards, avoiding unnecessary conversion steps. The paper explains the concepts behind the new standard and specifies the usage and combination of existing schemas with newly introduced metadata parameters.
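A loose sketch of two of the mechanisms named above, a UUID for identification and a hash value for integrity inside a METS-like wrapper, using only the Python standard library (element and attribute names are simplified; a real CPP/METS document is far richer):

```python
import hashlib
import uuid
import xml.etree.ElementTree as ET

data = b"...film scan bytes..."  # stand-in for a content file
mets = ET.Element("mets")
f = ET.SubElement(mets, "file", {
    "ID": f"urn:uuid:{uuid.uuid4()}",              # UUID for identification
    "CHECKSUM": hashlib.sha256(data).hexdigest(),  # hash for data integrity
    "CHECKSUMTYPE": "SHA-256",
})
ET.SubElement(f, "FLocat", {"href": "reel1/frame_000001.dpx"})
print(ET.tostring(mets, encoding="unicode"))
```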
6

Canning, Erin, Susan Brown, Sarah Roger, and Kimberley Martin. "The Power to Structure." KULA: Knowledge Creation, Dissemination, and Preservation Studies 6, no. 3 (July 27, 2022): 1–15. http://dx.doi.org/10.18357/kula.169.

Abstract:
Information systems are developed by people with intent—they are designed to help creators and users tell specific stories with data. Within information systems, the often invisible structures of metadata profoundly impact the meaning that can be derived from that data. The Linked Infrastructure for Networked Cultural Scholarship project (LINCS) helps humanities researchers tell stories by using linked open data to convert humanities datasets into organized, interconnected, machine-processable resources. LINCS provides context for online cultural materials, interlinks them, and grounds them in sources to improve web resources for research. This article describes how the LINCS team is using the shared standards of linked data and especially ontologies—typically unseen yet powerful—to bring meaning mindfully to metadata through structure. The LINCS metadata—composed of linked open data about cultural artifacts, people, and processes—and the structures that support them must represent multiple, diverse ways of knowing. They need to enable various means of incorporating contextual data and of telling stories with nuance and context, situated and supported by data structures that reflect and make space for specificities and complexities. As it addresses specificity in each research dataset, LINCS is simultaneously working to balance interoperability, as achieved through a level of generalization, with contextual and domain-specific requirements. The LINCS team’s approach to ontology adoption and use centers on intersectionality, multiplicity, and difference. The question of what meaning the structures being used will bring to the data is as important as what meaning is introduced as a result of linking data together, and the project has built this premise into its decision-making and implementation processes. To convey an understanding of categories and classification as contextually embedded—culturally produced, intersecting, and discursive—the LINCS team frames them not as fixed but as grounds for investigation and starting points for understanding. Metadata structures are as important as vocabularies for producing such meaning.
7

López-Tello, Eva, and Salvador Mandujano. "PAQUETE camtrapR PARA GESTIONAR DATOS DE FOTO-TRAMPEO: APLICACIÓN EN LA RESERVA DE LA BIOSFERA TEHUACÁN-CUICATLÁN." Revista Mexicana de Mastozoología (Nueva Epoca) 1, no. 2 (December 14, 2017): 13. http://dx.doi.org/10.22201/ie.20074484e.2017.1.2.245.

Abstract:
The camera trap is a method that has become popular in the last decade due to technological developments that have made this equipment more accessible. One of the advantages of the method is that we can obtain a lot of information about different species in a short time. However, few programs facilitate the organization and extraction of information from large numbers of images. Recently, the R package camtrapR has been made freely available; it serves to extract the metadata from images, create independent record tables, build presence/absence records for occupancy analysis, and produce spatial graphics. To demonstrate the functionality of this package, we present six examples of how to use the main camtrapR functions. For this purpose, we used a set of images obtained with 10 camera traps at a locality in the Tehuacán-Cuicatlán Biosphere Reserve. camtrapR was applied to the following tasks: organization and management of the photos, classification by species, individual identification, extraction of metadata by species and/or individuals, exploration and visualization of data, and export of data for occupancy analysis. The R code used in this work is freely available online. According to our results, camtrapR is an efficient package that facilitates and reduces the time needed to extract metadata from images; it also makes it possible to obtain independent records without errors of omission or duplication. In addition, it allows the creation of *.csv files that can then be analyzed with other R packages or programs for different purposes. Key words: capture histories, database, metadata, organization, R.
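camtrapR itself is an R package; as a language-neutral illustration of its core first step, reading metadata out of camera-trap images, here is a Python sketch using the Pillow library (the file path is invented):

```python
from PIL import ExifTags, Image

def image_metadata(path: str) -> dict:
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag ids to readable names (e.g. DateTime, Make).
    return {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}

record = image_metadata("camera01/deer_0001.jpg")
print(record.get("DateTime"), record.get("Make"))
```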
8

Hardesty, Juliet L. "Transitioning from XML to RDF: Considerations for an effective move towards Linked Data and the Semantic Web." Information Technology and Libraries 35, no. 1 (April 1, 2016): 51. http://dx.doi.org/10.6017/ital.v35i1.9182.

Abstract:
Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML) and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF). Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
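The shift the article describes, from replicating structures to stating relationships, can be seen in miniature below: the same fact moves from a nested XML element into an explicit subject-predicate-object triple (a sketch using the rdflib library; the record and IRIs are placeholders):

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, URIRef

xml_record = '<record id="b1"><title>Linked Data Basics</title></record>'
elem = ET.fromstring(xml_record)

DCT = Namespace("http://purl.org/dc/terms/")
g = Graph()
book = URIRef("http://example.org/book/" + elem.get("id"))
g.add((book, DCT.title, Literal(elem.findtext("title"))))  # a relationship, not a structure
print(g.serialize(format="turtle"))
```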
9

Tilton, Lauren, Emeline Alexander, Luke Malcynsky, and Hanglin Zhou. "The Role of Metadata in American Studies." Polish Journal for American Studies, Issue 14 (Autumn 2020) (December 1, 2020): 149–63. http://dx.doi.org/10.7311/pjas.14/2/2020.02.

Abstract:
This article argues that metadata can animate rather than stall American Studies inquiry. Data about data can enable and expand the kinds of context, evidence, and interdisciplinary methodological approaches that American Studies can engage with while taking back data from the very power structures that the field aims to reveal, critique, and abolish. As a result, metadata can be a site where the field realizes its intellectual and political commitments. The article draws on a range of digital humanities projects, with a focus on projects created by the authors, that demonstrate the possibilities (and challenges) of metadata for American Studies.
10

Russell, Pamela H., and Debashis Ghosh. "Radtools: R utilities for smooth navigation of medical image data." F1000Research 7 (December 24, 2018): 1976. http://dx.doi.org/10.12688/f1000research.17139.1.

Abstract:
The radiology community has adopted several widely used standards for medical image files, including the popular DICOM (Digital Imaging and Communication in Medicine) and NIfTI (Neuroimaging Informatics Technology Initiative) standards. These file formats include image intensities as well as potentially extensive metadata. The NIfTI standard specifies a particular set of header fields describing the image and minimal information about the scan. DICOM headers can include any of >4,000 available metadata attributes spanning a variety of topics. NIfTI files contain all slices for an image series, while DICOM files capture single slices and image series are typically organized into a directory. Each DICOM file contains metadata for the image series as well as the individual image slice. The programming environment R is popular for data analysis due to its free and open code, active ecosystem of tools and users, and excellent system of contributed packages. Currently, many published radiological image analyses are performed with proprietary software or custom unpublished scripts. However, R is increasing in popularity in this area due to several packages for processing and analysis of image files. While these R packages handle image import and processing, no existing package makes image metadata conveniently accessible. Extracting image metadata, combining across slices, and converting to useful formats can be prohibitively cumbersome, especially for DICOM files. We present radtools, an R package for smooth navigation of medical image data. Radtools makes the problem of extracting image metadata trivially simple, providing simple functions to explore and return information in familiar R data structures. Radtools also facilitates extraction of image data and viewing of image slices. The package is freely available under the MIT license at https://github.com/pamelarussell/radtools and is easily installable from the Comprehensive R Archive Network (https://cran.r-project.org/package=radtools).
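radtools is an R package; for comparison, the same kind of header access in Python is usually done with the separate pydicom library (the file name below is a placeholder):

```python
import pydicom

ds = pydicom.dcmread("slice_001.dcm")
# DICOM headers can hold thousands of attributes; list what this file has.
for elem in ds:
    print(elem.tag, elem.keyword, elem.value)

pixels = ds.pixel_array  # the image intensities as a NumPy array
```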

Dissertations / Theses on the topic "IE. Data and metadata structures"

1

Lima, João Alberto de Oliveira. "Modelo Genérico de Relacionamentos na Organização da Informação Legislativa e Jurídica." Thesis, Repositório Institucional da UnB, 2008. http://eprints.rclis.org/11352/1/tese_Joao_Lima_FINAL.pdf.

Abstract:
Information rarely functions in isolation: it always belongs to a context and enters into relationships with other entities. Legislative and legal information in particular is characterized by a high degree of interrelationship: laws, bills, legal cases, and doctrine are connected in several ways, creating a rich network of information. Efforts to organize information generate artificial models that try to represent the real world, creating systems and schemes of concepts used in the classification and indexing of information resources. This research had the main objective of proposing a Generic Model of Relationship (GMR), based on simple constructs that permit the establishment of relationships between concepts and information units. The conception of the GMR draws on Ingetraut Dahlberg's Theory of Concept and on the CIDOC CRM (ISO 21127:2006), FRBROO, and Topic Maps (ISO 13250:1999) models. The relationships and the characteristics of information units in the legal domain were collected in the project "Coletânea Brasileira de Normas e Julgados de Telecomunicações", using the Action Research methodology. Besides the development of the GMR and its application to the legislative and legal information domain, the research also contributed a system for identifying document versions and a new meaning for the term "information unit".
2

Flamino, Adriana Nascimento. "MARCXML: um padrão de descrição para recursos informacionais em Open Archives." Thesis, Marília : [s.n], 2006. http://eprints.rclis.org/16623/1/FLAMINO_AN_DISSERTACAO.pdf.

Abstract:
Scientific communication is undergoing considerable changes in its process, structure, and philosophy. The open archives and open access initiatives are contributing significantly to the dismantling of the traditional model of scientific communication and to the construction of a new, disaggregated, interoperable model that is fairer and more efficient at disseminating research results and, with them, the knowledge generated by scientific communities. Owing to advances in information and communication technologies, not only are the structure and flow of scientific communication changing considerably, but so are the very concept and medium of scientific documents. This has generated the need for tools to optimize the organization, description, exchange, and retrieval of information, as well as digital preservation, among other processes. For decades the MARC format has allowed institutions to describe and exchange bibliographic and cataloguing records, providing access to the informational content of many collections. However, the exponential growth of information and of document production (above all digital) demands greater flexibility and interoperability among the various information systems available. In this scenario, the XML markup language is one of the current developments intended to facilitate and optimize the management, storage, and transmission of content over the Internet, and it has been adopted by several sectors and areas of knowledge for its ease of handling and operational flexibility. Against this background, an exploratory theoretical study was carried out to assess the suitability of the MARCXML format for building descriptive representations of information resources in open archives, as a rich and flexible metadata standard that enables interoperability among heterogeneous information systems as well as access to information. As a result of this research, MARCXML is considered an appropriate format for describing data in a complex structure. It is concluded that as the complexity of the documents in repositories and open archives increases, a metadata structure such as MARCXML, which supports describing the specificities of informational resources, becomes ever more justified, since this initiative is not, and will not be, restricted to scientific documents, but is expanding to other types of increasingly complex and specific informational resources that also demand descriptions suited to the specificities of the bibliographic entities involved.
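For readers unfamiliar with the format discussed above, here is a minimal MARCXML record assembled with the Python standard library, showing the leader/controlfield/datafield/subfield structure (the field values are invented):

```python
import xml.etree.ElementTree as ET

NS = "http://www.loc.gov/MARC21/slim"
ET.register_namespace("", NS)
rec = ET.Element(f"{{{NS}}}record")
ET.SubElement(rec, f"{{{NS}}}leader").text = "00000nam a2200000 a 4500"
cf = ET.SubElement(rec, f"{{{NS}}}controlfield", tag="001")
cf.text = "000123456"
df = ET.SubElement(rec, f"{{{NS}}}datafield", tag="245", ind1="1", ind2="0")
ET.SubElement(df, f"{{{NS}}}subfield", code="a").text = "MARCXML :"
ET.SubElement(df, f"{{{NS}}}subfield", code="b").text = "um padrão de descrição"
print(ET.tostring(rec, encoding="unicode"))
```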
3

Zurek, Fiona. "Metadatenmanagement in Bibliotheken mit KNIME und Catmandu." Thesis, 2019. http://eprints.rclis.org/39887/1/Bachelorarbeit_Metadatenmanagement_KNIME_Catmandu.pdf.

Abstract:
This thesis deals with metadata management in libraries. It examines to what extent the tools KNIME and Catmandu can support libraries in typical metadata management tasks. Technical work in the field of metadata has become more complex due to the multitude of formats, interfaces, and applications. In order to prepare and use metadata, information about the suitability of different programs is needed. KNIME and Catmandu are both analyzed theoretically and tested practically. Among other things, it is examined how the documentation is designed and which data formats and interfaces are supported. Typical tasks like filtering, analysis, content enhancement, and data enrichment are tested. The work shows that both tools have different strengths and weaknesses. Catmandu's strength is an easier introduction to the program and a variety of options for using library data formats and interfaces. An advantage of KNIME is that, after initial familiarization, many problems can be solved quickly and special features are available for numerous cases.
4

Ballarin, Matteo. "SKOS : un sistema per l'organizzazione della conoscenza." Thesis, 2006. http://eprints.rclis.org/7408/1/774752.pdf.

Abstract:
The development of the Semantic Web involves not only new technologies like web services, search engines, and ontologies, but also different worlds like librarianship and other disciplines that have been working with knowledge organization systems (KOS) for hundreds of years. This thesis focuses its attention on this type of tool: the rapid growth of media and digital content on the Web has increased the need to organize an immense amount of information. Tools for knowledge organization like thesauri, taxonomies, and concept schemes are fundamental to the birth and development of the Semantic Web. The thesis considers and analyzes an emerging technology that is a candidate to soon become a recommended standard of the W3C: SKOS. The standards that govern the construction of thesauri are taken into consideration, existing technologies are analyzed, and some simple examples are modeled. Moreover, other alternative instruments for knowledge organization are analyzed, and some real applications based on the SKOS framework are introduced. Supervisor of the degree thesis: Prof. Renzo Orsini.
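A tiny SKOS concept scheme of the kind the thesis models, expressed with the rdflib library (the namespace and concepts are invented examples):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/scheme/")  # placeholder namespace
g = Graph()
g.add((EX.animals, RDF.type, SKOS.ConceptScheme))
for concept, label in [(EX.mammal, "mammal"), (EX.dog, "dog")]:
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((concept, SKOS.inScheme, EX.animals))
g.add((EX.dog, SKOS.broader, EX.mammal))  # dog is narrower than mammal
print(g.serialize(format="turtle"))
```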
5

Palomino, Norma. "The Bibliographic Concept of Work in Cataloguing and its Issues." Thesis, 2003. http://eprints.rclis.org/9531/1/TesisNorma_Palomino.pdf.

Abstract:
This report explores the IFLA document Functional Requirements for Bibliographic Records (FRBR). It discusses the notion of the work in cataloguing as it has been built up since the 1950s, inasmuch as this notion constitutes the conceptual framework for the proposal. The entity-relationship database modeling (ERDM) approach is also described, insofar as it provides FRBR with the operative elements that make it functional; ERDM also gives FRBR a user-centered approach. In its third chapter, the report tests the FRBR model by applying it to a set of items belonging to the novel Rayuela, by Julio Cortázar, held at the Benson Latin American Collection of the University of Texas at Austin. Finally, some critical issues are raised, along with general conclusions regarding the functionality of the model.
6

Πεπονάκης, Μανόλης. "Σύνθεση FRBR εγγραφών αξιοποιώντας υπάρχουσες βιβλιογραφικές εγγραφές (FRBRization): ομαδοποίηση σχετικών εγγραφών (clustering) και εμφάνισή τους σε on line συστήματα." Thesis, 2010. http://eprints.rclis.org/16674/1/Peponakis_FRBRization_MSc_Thesis.pdf.

Abstract:
This MSc dissertation focuses on two main issues. The first is to review international practices in FRBRization, with special attention to clustering procedures. The second is to study the effectiveness of these procedures on Greek library catalogs. The study begins with a short review of the history of library catalogs, aiming to identify the reasons that led to conceptual models such as FRBR. A brief analysis of FRBR follows, emphasizing the changes it brings to the methods by which library catalogs are structured, along with other changes. In Chapter 3, the study attempts a rather holistic approach to international FRBRization practices in addition to clustering procedures. The fourth chapter discusses the application of international clustering procedures to Greek metadata. To fulfil this part of the research, it was considered necessary to examine the existing metadata of Greek catalogs; this task mainly helps in reporting the problems that appear to affect the clustering procedure overall. The chapter continues with recommendations for modifying and adjusting existing international clustering techniques to match the special needs and features of Greek catalog structures, which would deliver better FRBRization results for Greek metadata. The fifth chapter concludes the study and is divided into two main sections. The first presents the problems and objections regarding FRBRization and the ways it is currently put into practice. The second briefly reviews the hands-on effort of implementing international FRBRization techniques on Greek metadata. The results show that FRBRization of Greek catalogs must, as a prerequisite, take their special features into consideration; but even then, the effectiveness of the FRBRization techniques reported in the international bibliography is not fully proved for Greek catalogs.
7

Rahman, A. I. M. Jakaria. "Social tagging versus Expert created subject headings." Thesis, 2012. http://eprints.rclis.org/25587/1/Rahman_Social%20tagging%20versus%20Expert%20created%20subject%20headings.pdf.

Abstract:
The purpose of the study was to investigate social tagging practice in the context of science books. In addition, it assessed the usefulness of social tags as a supplement to controlled vocabulary for enhancing the use of library resources. More specifically, this study examined to what extent social tags match controlled vocabulary, and whether social tags provide additional perspectives that improve accessibility and information retrieval in a digital environment. In both cases, the social tags were considered with respect to their appropriateness to the specific book. For the successful implementation of social tagging in library systems, there is a need to understand how users assign social tags to library collections, what vocabularies they use, and how far social tags relate to controlled vocabulary. This understanding can help libraries decide how to implement and review social tagging. The study used a combination of qualitative and quantitative research approaches. The LibraryThing website and the Library of Congress Subject Headings (LCSH) were chosen as research sites: social tags were collected from the LibraryThing website, and LCSH served as the controlled vocabulary. Twenty books from the science genre were chosen purposefully; the sample was further restricted to books also available in the Library of Congress catalogue. Ten books were taken from the academic group and the remaining ten from the non-academic group. The study considered only those social tags that occurred at least twice. A coding system was developed to pull together all similar social tags for further analysis. In the coding system, four broad categories were defined: social tags that match exactly with LCSH, social tags that match partially with LCSH, social tags that reflect bibliographic information, and social tags that are user-specific information. The last three categories were further sub-categorized. A clear difference was found between expert-assigned subject terms and social tagging practice for library books. Cataloguers assign relatively few terms per book through a restricted, established vocabulary that follows firm rules, whereas end users enjoy liberty with unlimited terms. More than fifty percent of the social tags matched expert-created subject headings. The frequency of use of social tags that matched LCSH terms was higher than that of non-matching ones. Expert-created subject headings were ranked highly in the social tag lists, and end users frequently assigned social tags representing broader or narrower terms than the cataloguers' assigned subject headings. In addition, social tagging captured aspects that could not be covered by the strict rules for assigning subject headings or by cataloguing rules. Such diverse impressions can serve as access points to the same library collections according to users' interests and opinions. The study revealed that neither controlled vocabulary nor social tagging alone works as a fully satisfactory information retrieval tool. A hybrid catalogue combining both LCSH and social tags would give patrons the best of both worlds in terms of access to materials.
This kind of practice may yield even more significant outcomes for local research or university libraries whose users concentrate on a defined number of disciplines. Adapting users' views through social tags, in addition to controlled vocabulary, may increase the efficiency of the information retrieval process in library OPACs. This study provides both qualitative and quantitative support for the use of social tags in library OPACs. The findings support many of the theories previously proposed in the literature about social tagging and LCSH. The qualitative analysis of social tags disclosed the diverse ways end users look at library resources, beyond the subject descriptors.
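A bare-bones version of the coding step described in the abstract, sorting each social tag into exact match, partial match, or no match against a book's LCSH terms (tags and headings are invented examples):

```python
lcsh = {"science", "physics", "quantum theory"}
tags = ["science", "quantum", "favorites", "physics"]

def categorize(tag: str) -> str:
    t = tag.lower()
    if t in lcsh:
        return "exact match"
    if any(t in h or h in t for h in lcsh):  # substring overlap as 'partial'
        return "partial match"
    return "no match (bibliographic or user-specific)"

for tag in tags:
    print(tag, "->", categorize(tag))
```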
8

Dobrecky, Leticia Paula. "Plan de apertura de datos de la Biblioteca del Ministerio de Agricultura, Ganadería y Pesca de la Argentina." Thesis, 2018. http://eprints.rclis.org/42739/1/TFM_LDobrecky.pdf.

Abstract:
The Ministry of Agriculture, Livestock and Fisheries of Argentina has been part of the Open Government initiative since 2016, when the administration issued Decree 117, the "Data Opening Plan", which instructs the Executive Branch to create an open data plan so that its dependencies share their datasets through the national portal. In this context, the ministry's library can join this trend by identifying and publishing its own datasets reflecting the library's activities and services. This approach can serve as an inspirational model for other government libraries. These actions might trigger more and better engagement with the community, increased visibility, and improved integration in this challenging scenario.
9

Angelozzi, Silvina Marcela, and Sandra Gisela Martín. "Análisis y comparación de metadatos para la descripción de recursos electrónicos en línea." Thesis, 2009. http://eprints.rclis.org/43834/1/Metadatos%202009.pdf.

Abstract:
This work consists of a comparative study of metadata for the description of documents accessible on the Internet. The particularities and difficulties that online electronic resources present for cataloging are analyzed. The metadata are defined and characterized, and the different schemes are described and analyzed. The comparison takes into account characteristics such as origin and purpose, structure and content of the description, complexity, syntax, contribution to international standardization, interoperability, extensibility, flexibility, maintenance, existing documentation, currency, and results obtained to date.
10

Voss, Jakob. "Begriffssysteme - Ein Vergleich verschiedener Arten von Begriffssystemen und Entwurf des integrierenden Thema-Datenmodells." Thesis, 2003. http://eprints.rclis.org/8308/1/begriffssysteme.pdf.

Abstract:
Concept schemes like thesauri, classifications, reference works, concept maps, ontologies, etc. are used in different disciplines to organize and represent knowledge. This work (the result of a student research project) gives an overview of different kinds of concept schemes and their structure. Several data formats are explained and summarized in a new data model for concept schemes in XML.

Books on the topic "IE. Data and metadata structures"

1

Chiarcos, Christian, and Sebastian Hellmann. Linked data in linguistics: Representing and connecting language data and language metadata. Edited by Sebastian Nordhoff. Heidelberg: Springer, 2012.

2

Nordhoff, Sebastian, Sebastian Hellmann, and SpringerLink (Online service), eds. Linked Data in Linguistics: Representing and Connecting Language Data and Language Metadata. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

3

Hay, David C. Data model patterns: Conventions of thought. New York: Dorset House Pub., 1996.

4

Shane, Darrell, Ram Prasad, and United States Defense Advanced Research Projects Agency, eds. IID: An intelligent information dictionary for managing semantic metadata. Santa Monica, CA: Rand, 1991.

5

Jarke, Matthias, Maurizio Lenzerini, Yannis Vassiliou, and Panos Vassiliadis. Fundamentals of data warehouses. 2nd ed. Berlin: Springer, 2003.

6

Maedche, Alexander. Ontology Learning for the Semantic Web. Boston, MA: Springer US, 2002.

7

Popham, Michael, Karen Wikander, Oxford Text Archive, and Arts and Humanities Data Service, eds. Creating and documenting electronic texts. Oxford [England]: Oxbow Books for the Arts and Humanities Data Service, 2000.

8

Chiarcos, Christian, Sebastian Hellmann, and Sebastian Nordhoff. Linked Data in Linguistics: Representing and Connecting Language Data and Language Metadata. Springer Berlin / Heidelberg, 2014.

9

SAS Institute. SAS 9.1 Metadata LIBNAME Engine User's Guide. SAS, 2004.

10

SAS Publishing. SAS 9.1.3 Metadata Libname Engine: User's Guide. SAS Institute, Incorporated, 2004.


Book chapters on the topic "IE. Data and metadata structures"

1

Khalid, Hiba, Esteban Zimanyi, and Robert Wrembel. "Metadata Reconciliation for Improved Data Binding and Integration." In Beyond Databases, Architectures and Structures. Facing the Challenges of Data Proliferation and Growing Variety, 271–82. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99987-6_21.

2

Leite Cavalcanti, Welchy, Elli Moutsompegka, Konstantinos Tserpes, Paweł H. Malinowski, Wiesław M. Ostachowicz, Romain Ecault, Neele Grundmann, et al. "Integrating Extended Non-destructive Testing in the Life Cycle Management of Bonded Products—Some Perspectives." In Adhesive Bonding of Aircraft Composite Structures, 331–50. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-92810-4_6.

Abstract:
In this chapter, we outline some perspectives on embracing the datasets gathered using Extended Non-destructive Testing (ENDT) during manufacturing or repair process steps within the life cycle of bonded products. Ensuring that the ENDT data and metadata are FAIR, i.e. findable, accessible, interoperable and re-usable, will support the relevant stakeholders in exploiting the contained material-related information far beyond a stop/go decision, while a shorter time-to-information will facilitate a prompter time-to-decision in process and product management. Exploiting the value of ENDT (meta)data will contribute to increased performance by integrating all defined, measured, analyzed and controlled aspects of material transformation across process and company boundaries. This will facilitate the optimization of manufacturing and repair operations, boosting their energy efficiency and productivity. In this regard, some aspects that are currently driving activities in the field of pre-process, in-process and post-process quality assessment will be addressed in the following. Furthermore, some requirements will be contemplated for harmonized and conjoint data transfer ranging from a bonded product’s beginning-of-life through its end-of-life, the customization of stand-alone or linked ENDT tools, and the implementation of sensor arrays and networks in joints, devices and structural parts to gather material-related data during a product’s middle-of-life application phase, thereby fostering structural health monitoring (SHM).
3

Qi, Yan, Huiping Cao, K. Selçuk Candan, and Maria Luisa Sapino. "XML Data Integration." In Advanced Applications and Structures in XML Processing, 333–60. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-727-5.ch015.

Abstract:
In XML Data Integration, data/metadata merging and query processing are indispensable. Specifically, merging integrates multiple disparate (heterogeneous and autonomous) input data sources together for further usage, while query processing is one main reason why the data need to be integrated in the first place. Besides, when supported with appropriate user feedback techniques, queries can also provide contexts in which conflicts among the input sources can be interpreted and resolved. The flexibility of XML structure provides opportunities for alleviating some of the difficulties that other less flexible data types face in the presence of uncertainty; yet, this flexibility also introduces new challenges in merging multiple sources and query processing over integrated data. In this chapter, the authors discuss two alternative ways XML data/schema can be integrated: conflict-eliminating (where the result is cleaned from any conflicts that the different sources might have with each other) and conflict-preserving (where the resulting XML data or XML schema captures the alternative interpretations of the data). They also present techniques for query processing over integrated, possibly imprecise, XML data, and cover strategies that can be used for resolving underlying conflicts.
4

Aldeias, Carlos, Gabriel David, and Cristina Ribeiro. "Preservation of Data Warehouses." In Innovations in XML Applications and Metadata Management, 136–59. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2669-0.ch008.

Abstract:
Data warehouses are used in many application domains, and there is no established method for their preservation. A data warehouse can be implemented in multidimensional structures or in relational databases that represent the dimensional model concepts in the relational model. The focus of this work is on describing the dimensional model of a data warehouse and migrating it to an XML model, in order to achieve a long-term preservation format. This chapter presents the definition of the XML structure that extends the SIARD format used for the description and archive of relational databases, enriching it with a layer of metadata for the data warehouse components. Data Warehouse Extensible Markup Language (DWXML) is the XML language proposed to describe the data warehouse. An application that combines the SIARD format and the DWXML metadata layer supports the XML language and helps to acquire the relevant metadata for the warehouse and to build the archival format.
5

Rabinowitz, Adam. "Metadata for the Masses." In Digital Heritage and Archaeology in Practice, 61–84. University Press of Florida, 2022. http://dx.doi.org/10.5744/florida/9780813069319.003.0004.

Abstract:
This chapter seeks to demystify metadata for archaeologists who are responsible for producing and managing digital documentation, but who are not themselves trained in library and information science. It provides a general overview of the role and importance of metadata in contextualizing digital documents and ensuring possibilities for future reuse. Following this overview, common metadata standards such as the Dublin Core are introduced, along with some simple, relatively easy strategies to incorporate metadata into data structures for digital archaeological documentation. After a discussion of Linked Data principles and approaches, the chapter concludes with a brief explanation of ontologies, semantic representations of data, and serialization. Archaeologists producing or using digital data are encouraged to familiarize themselves with these concepts to enhance the discipline’s capacity to produce discoverable, well-described, and reusable digital records.
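As a taste of the Dublin Core markup the chapter introduces, here is a small description of a hypothetical excavation photo, written with the rdflib library (all values are invented):

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
photo = URIRef("http://example.org/records/trench5-photo-0042")
g.add((photo, DCTERMS.title, Literal("Trench 5, east baulk, end of season")))
g.add((photo, DCTERMS.creator, Literal("Excavation team")))
g.add((photo, DCTERMS.date, Literal("2019-07-15")))
g.add((photo, DCTERMS.format, Literal("image/tiff")))
print(g.serialize(format="turtle"))
```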
6

Schwalbach, Jan, and Christian Rauh. "Collecting Large-scale Comparative Text Data on Legislative Debates." In The Politics of Legislative Debates, 91–109. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198849063.003.0006.

Abstract:
Parliamentary speeches present one of the most consistently available sources of information about the political priorities, actor positions, and conflict structures in democratic states. Recent advances of automated text analysis offer more and more tools to tap into this information reservoir in a systematic manner. However, collecting the high-quality text data needed for unleashing the comparative potential of the various text analysis algorithms out there is a costly endeavor and faces various pragmatic hurdles. Against this challenge, this chapter offers three contributions. First, we outline best practice guidelines and useful tools for researchers wishing to collect or to extend existing legislative debate corpora. Second, we present an extended version of the ParlSpeech Corpus. Third, we highlight the difficulties of comparing text-as-data outputs across different parliaments, pointing to varying languages, varying traditions and conventions, and varying metadata availability.
7

Kim, Hak-Lae, John G. Breslin, Stefan Decker, and Hong-Gee Kim. "Representing and Sharing Tagging Data Using the Social Semantic Cloud of Tags." In Social Computing, 1788–96. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-984-7.ch117.

Abstract:
Social tagging has become an essential element for Web 2.0 and the emerging Semantic Web applications. With the rise of Web 2.0, websites that provide content creation and sharing features have become extremely popular. These sites allow users to categorize and browse content using tags (i.e., free-text keyword topics). However, the tagging structures or folksonomies created by users and communities are often interlocked with a particular site and cannot be reused in a different system or by a different client. This chapter presents a model for expressing the structure, features, and relations among tags in different Web 2.0 sites. The model, termed the social semantic cloud of tags (SCOT), allows for the exchange of semantic tag metadata and reuse of tags in various social software applications.
8

Daradkeh, Mohammad Kamel. "Enterprise Data Lake Management in Business Intelligence and Analytics." In Advances in Business Information Systems and Analytics, 92–113. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5781-5.ch005.

Abstract:
The data lake has recently emerged as a scalable architecture for storing, integrating, and analyzing massive data volumes characterized by diverse data types, structures, and sources. While the data lake plays a key role in unifying business intelligence, analytics, and data mining in an enterprise, effective implementation of an enterprise-wide data lake for business intelligence and analytics integration is associated with a variety of practical challenges. In this chapter, concrete analytics projects of a globally industrial enterprise are used to identify existing practical challenges and drive requirements for enterprise data lakes. These requirements are compared with the extant literature on data lake technologies and management to identify research gaps in analytics practice. The comparison shows that there are five major research gaps: 1) unclear data modelling methods, 2) missing data lake reference architecture, 3) incomplete metadata management strategy, 4) incomplete data lake governance strategy, and 5) missing holistic implementation and integration strategy.
9

Löbe, Matthias, Hannes Ulrich, Christoph Beger, Theresa Bender, Christian Bauer, Ulrich Sax, Josef Ingenerf, and Alfred Winter. "Improving Findability of Digital Assets in Research Data Repositories Using the W3C DCAT Vocabulary." In MEDINFO 2021: One World, One Health – Global Partnership for Digital Innovation. IOS Press, 2022. http://dx.doi.org/10.3233/shti220032.

Abstract:
Research data management requires stable, trustworthy repositories to safeguard scientific research results. In this context, rich markup with metadata is crucial for the discoverability and interpretability of the relevant resources. SEEK is a web-based software to manage all important artifacts of a research project, including project structures, involved actors, documents and datasets. SEEK is organized along the ISA model (Investigation – Study – Assay). It offers several machine-readable serializations, including JSON and RDF. In this paper, we extend the power of RDF serialization by leveraging the W3C Data Catalog Vocabulary (DCAT). DCAT was specifically designed to improve interoperability between digital assets on the Web and enables cross-domain markup. By using community-consented gold standard vocabularies and a formal knowledge description language, findability and interoperability according to the FAIR principles are significantly improved.
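A minimal DCAT description of a single dataset, in the spirit of the markup the paper adds to SEEK, sketched with the rdflib library (the IRIs and values are placeholders, not SEEK's actual output):

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
ds = URIRef("http://example.org/dataset/assay-17")
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Assay 17 raw measurements")))
g.add((ds, DCAT.keyword, Literal("proteomics")))

dist = URIRef("http://example.org/dataset/assay-17/csv")
g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCAT.mediaType, Literal("text/csv")))
g.add((ds, DCAT.distribution, dist))
print(g.serialize(format="turtle"))
```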
10

Simitsis, Alkis, Panos Vassiliadis, and Timos Sellis. "Extraction-Transformation-Loading Processes." In Encyclopedia of Database Technologies and Applications, 240–45. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-560-3.ch041.

Abstract:
A data warehouse (DW) is a collection of technologies aimed at enabling the knowledge worker (executive, manager, analyst, etc.) to make better and faster decisions. The architecture of a DW exhibits various layers of data in which data from one layer are derived from data of the lower layer. The operational databases, also called data sources, form the starting layer. They may consist of structured data stored in open database and legacy systems, or even in files. The central layer of the architecture is the global DW. The global DW keeps a historical record of data that result from the transformation, integration, and aggregation of detailed data found in the data sources. An auxiliary area of volatile data, the data staging area (DSA), is employed for the purpose of data transformation, reconciliation, and cleaning. The next layer of data involves client warehouses, which contain highly aggregated data directly derived from the global warehouse. There are various kinds of local warehouses, such as data marts or on-line analytical processing (OLAP) databases, which may use relational database systems or specific multidimensional data structures. The whole environment is described in terms of its components, metadata, and processes in a central metadata repository, located at the DW site.
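Reduced to a toy, the extract-transform-load flow between the layers described above looks like this in Python (all data are invented; a real DSA step would of course do far more reconciliation and cleaning):

```python
source_rows = [  # 'extract': rows pulled from an operational source
    {"store": "A", "amount": "100.0"}, {"store": "A", "amount": "  50"},
    {"store": "B", "amount": "80.0"},  {"store": "B", "amount": None},
]

def transform(rows):
    # Staging-area style cleaning: drop bad records, normalize types.
    for r in rows:
        if r["amount"] is None:
            continue
        yield {"store": r["store"], "amount": float(str(r["amount"]).strip())}

warehouse: dict[str, float] = {}  # 'load': aggregate into the warehouse layer
for row in transform(source_rows):
    warehouse[row["store"]] = warehouse.get(row["store"], 0.0) + row["amount"]
print(warehouse)  # {'A': 150.0, 'B': 80.0}
```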

Conference papers on the topic "IE. Data and metadata structures"

1

Mayernik, Matthew S. "Institutional structures for research data and metadata curation." In Proceedings of the 13th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL '13). New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2467696.2467755.

2

Allen, Robert B., and John Schalow. "Metadata and data structures for the historical newspaper digital library." In Proceedings of the Eighth International Conference on Information and Knowledge Management (CIKM '99). New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/319950.319971.

3

Klymash, Mykhailo, Ivan Demydov, Mykola Beshley, and Orest Kostiv. "Structures Assessment of Data-Centers’ Telecommunication Systems for Metadata Fixation." In 2018 International Conference on Information and Telecommunication Technologies and Radio Electronics (UkrMiCo). IEEE, 2018. http://dx.doi.org/10.1109/ukrmico43733.2018.9047612.

4

Wong, John-Michael, and Bozidar Stojadinovic. "Metadata and network API aspects of a framework for storing and retrieving civil infrastructure monitoring data." In Smart Structures and Materials, edited by Masayoshi Tomizuka. SPIE, 2005. http://dx.doi.org/10.1117/12.599803.

5

Labban, Ramzi Roy, Stephen Hague, Elyar Pourrahimian, and Simaan AbouRizk. "Dynamic, Data-Driven Simulation In Construction Using Advanced Metadata Structures and Bayesian Inference." In 2021 Winter Simulation Conference (WSC). IEEE, 2021. http://dx.doi.org/10.1109/wsc52266.2021.9715346.

6

Hoxha, Ejup, Jinglun Feng, Diar Sanakov, Ardian Gjinofci, and Jizhong Xiao. "Robotic Inspection and Characterization of Subsurface Defects on Concrete Structures Using Impact Sounding." In Structural Health Monitoring 2021. Destech Publications, Inc., 2022. http://dx.doi.org/10.12783/shm2021/36339.

Abstract:
Impact-sounding (IS) and impact-echo (IE) are well-developed non-destructive evaluation (NDE) methods that are widely used for inspections of concrete structures to ensure safety and sustainability. However, it is tedious work to collect IS and IE data along grid lines covering a large target area for the characterization of subsurface defects. On the other hand, data processing is so complicated that it requires domain experts to interpret the results. To address the above problems, we present a novel robotic inspection system named Impact-Rover that automates the data collection process, and we introduce data analytics software that visualizes the inspection results so that ordinary, non-professional people can understand them. The system consists of three modules: 1) a robotic platform with vertical mobility to collect IS and IE data in hard-to-reach locations; 2) a vision-based positioning module that fuses an RGB-D camera, IMU, and wheel encoder to estimate the 6-DOF pose of the robot; and 3) a data analytics software module for processing the IS data to generate defect maps. The Impact-Rover hosts both IE and IS devices on a sliding mechanism and can perform move-stop-sample operations to collect multiple IS and IE measurements at adjustable spacing. The robot takes samples much faster than the manual data collection method because it automatically takes multiple measurements along a straight line and records their locations. This paper focuses on reporting experimental results on IS. We compute features and use unsupervised learning methods to analyze the data. By combining the pose generated by our vision-based localization module with the position of the head of the sliding mechanism, we can generate maps of possible defects. The results on concrete slabs demonstrate that our impact-sounding system can effectively reveal shallow defects.
7

Mason, Robert. "Interoperability Gap Challenges for Learning Object Repositories & Learning Management Systems." In InSITE 2007: Informing Science + IT Education Conference. Informing Science Institute, 2007. http://dx.doi.org/10.28945/3079.

Abstract:
An interoperability gap exists between Learning Management Systems (LMSs) and Learning Object Repositories (LORs). LORs are responsible for the storage and management of Learning Objects and the associated Learning Object Metadata (LOM). LORs adhere to various LOM standards depending upon the requirements established by user groups and LOR administrators. Two common LOM standards found in LORs are CanCore (the Canadian LOM standard) and the Sharable Content Object Reference Model (SCORM) Content Aggregation Model (CAM). In contrast, LMSs are independent computer systems that manage and deliver course content to students via a web interface. This research addresses three important issues related to this problem domain: (a) a lack of metadata standards that define the format in which assessment data should be communicated from LMSs to LORs, (b) a lack of Information Engineering (IE) architectural standards for the transfer of data from LMSs to LORs, and (c) a lack of middleware that facilitates the movement of assessment data from LMSs to LORs. Thus, the three goals of this research are: (a) to make recommendations for extending the CanCore and SCORM CAM LOM standards to facilitate the storage of assessment and summary assessment data; (b) to define the foundation for an IE architectural standard based on an Access Control Policy (ACP) and a Data Validation Policy (DVP), using a reliable consensus of experts obtained with the Delphi technique; and (c) to develop a middleware prototype that transfers learning assessment data from multiple LMSs into the Learning Object Metadata of Learning Objects stored within a CanCore- or SCORM-compliant LOR.
8

Brennan, Daniel S., Julian Gosliga, Elizabeth J. Cross, and Keith Worden. "On Implementing an Irreducible Element Model Schema for Population-Based Structural Health Monitoring." In Structural Health Monitoring 2021. Destech Publications, Inc., 2022. http://dx.doi.org/10.12783/shm2021/36342.

Abstract:
This paper is the second in a series whose aim is to provide an underlying database technology for enabling the user interaction required for Population-Based Structural Health Monitoring (PBSHM). In the first paper in the series, the groundwork was laid for a PBSHM Schema which enabled the storage of channel data via a Time First approach. PBSHM considers grouping similar structures together to gain additional insights from the group, compared to a single entity. Part of the PBSHM process is being able to identify which structures, or substructures, are similar. To enable this, a standardised method of representing each structure must be used; here, an Irreducible Element (IE) model is employed. This paper builds on the groundwork that has been laid in the creation of IE models and defines a standardised format and properties for an IE model to enable graph-matching algorithms to find similar structures. The standardised format has been implemented via an IE-model Schema within the PBSHM Schema.
9

Novák, Václav, Jaroslav Koutský, Rudolf Kubaš, and Šárka Palcrová. "Ekonomická výkonnost zpracovatelského průmyslu v severočeských mikroregionech v kontextu reindustrializace." In XXIV. mezinárodního kolokvia o regionálních vědách. Brno: Masaryk University Press, 2021. http://dx.doi.org/10.5817/cz.muni.p210-9896-2021-18.

Abstract:
The paper focuses on micro-regional structures in Northern Bohemia, for which a tradition of industrial production is typical. In the studied Děčín and Česká Lípa regions this was, in the past, mainly light manufacturing. The micro-regions were defined on the basis of daily commuting data. Company accounting data available in publicly accessible financial statements were used to evaluate economic performance. Of the relative indicators, value-added labor productivity and the average monthly wage were used for the analysis. Surprisingly, a high average economic performance of the manufacturing industry was found in the monitored geographical structures. It was, however, relatively low in the strongest industry, i.e., the automotive industry, which contributed most to the reindustrialisation of the Česká Lípa region; that is, foreign investment did not necessarily play a comprehensively positive role here. The once-typical textile industry in the Děčín region has practically disappeared completely, and the whole region shows significant deindustrialisation tendencies.
10

Müller, Simone, and Dieter Kranzlmüller. "Dynamic Sensor Matching for Parallel Point Cloud Data Acquisition." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita v Plzni, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.3.

Abstract:
Based on depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, latency of recording, and insufficient object reconstructions caused by surface illustration. Additionally external physical effects like lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences can be seen in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple and dynamically arranged cameras. The increased information density leads to more details in surrounding detection and object illustration. During a pre-processing phase the collected data are merged and prepared. Subsequently, a logical analysis part examines and allocates the captured images to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamic moving images become comparable so that a more accurate point cloud can be generated. For evaluation and better comparability we decided to use synthetically generated data sets. Our approach builds the foundation for dynamic and real-time based generation of digital twins with the aid of real sensor data.