Journal articles on the topic "Data Format"

Click this link to see other types of publications on this topic: Data Format.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Check the top 50 scholarly journal articles on the topic "Data Format".

Next to every entry in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.

Browse journal articles from a wide variety of disciplines and compile an appropriate bibliography.

1

John Doyle, D. "Portable data format". Canadian Journal of Anesthesia/Journal canadien d'anesthésie 47, no. 5 (May 2000): 475–76. http://dx.doi.org/10.1007/bf03018984.

Full text source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Gröhl, Janek, Lina Hacker, Ben T. Cox, Kris K. Dreher, Stefan Morscher, Avotra Rakotondrainibe, François Varray, Lawrence C. M. Yip, William C. Vogt and Sarah E. Bohndiek. "The IPASC data format: A consensus data format for photoacoustic imaging". Photoacoustics 26 (June 2022): 100339. http://dx.doi.org/10.1016/j.pacs.2022.100339.
3

Singh, Shashi Pal, Ajai Kumar, Rachna Awasthi, Neetu Yadav and Shikha Jain. "Intelligent Bilingual Data Extraction and Rebuilding Using Data Mining for Big Data". Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 513–18. http://dx.doi.org/10.1166/jctn.2020.8699.

Abstract:
In today's world, data exists in many sources and formats (file formats), with different structures and types, much of it a huge collection of unstructured content on the internet and social media. This gives rise to the categorization of data as unstructured, semi-structured, and structured. Data that exists in an irregular manner without any particular schema is referred to as unstructured data, which is very difficult to process because of its irregularities and ambiguities. We therefore focus on an Intelligent Processing Unit that converts unstructured big data into meaningful information. Intelligent text extraction is a technique that automatically identifies and extracts text from a file format. The system consists of several stages, including pre-processing, key-phrase extraction, and transformation, which extract text and retrieve structured data from unstructured data; combining multiple methods gives better results. We currently work with various file formats, converting each into DOCX in its unstructured form and then obtaining the structured form with the help of intelligent pre-processing. The pre-processing stages turn the unstructured data/corpus into structured, meaningful data: the initial stage removes stop words, unwanted symbols, noisy data, and line spacing; the second stage extracts data from various sources and file types into properly formatted plain text; in the third stage the data is transformed from one format into another so that the user can understand it; and the final step rebuilds the file in its original format, maintaining the file's tags. Large files are divided into small sub-files so that parallel-processing algorithms can be executed for fast processing of larger files and data.
Parallel processing is a very important concept for text extraction: with its help, a big file is broken into small files, improving the result. Data extraction is done bilingually and represents the most relevant information contained in the document. Key-phrase extraction is an important problem in data mining, knowledge retrieval, and natural language processing; keyword extraction has been used to abstract keywords that uniquely identify a document. Rebuilding is an important part of this project: the entire pipeline is applied to a file, and at the end the file is returned in its original format. Although these concepts are widely used, little work has been done on combining many such functionalities in one tool, which motivates the need for a tool that can easily and efficiently convert unstructured files into structured ones.
4

Könnecke, Mark, Frederick A. Akeroyd, Herbert J. Bernstein, Aaron S. Brewster, Stuart I. Campbell, Björn Clausen, Stephen Cottrell et al. "The NeXus data format". Journal of Applied Crystallography 48, no. 1 (January 30, 2015): 301–5. http://dx.doi.org/10.1107/s1600576714027575.

Abstract:
NeXus is an effort by an international group of scientists to define a common data exchange and archival format for neutron, X-ray and muon experiments. NeXus is built on top of the scientific data format HDF5 and adds domain-specific rules for organizing data within HDF5 files, together with a dictionary of well-defined domain-specific field names. The NeXus data format has two purposes. First, it defines a format that can serve as a container for all relevant data associated with a beamline, a very important use case. Second, it defines standards, in the form of application definitions, for the exchange of data between applications. NeXus provides structures for raw experimental data as well as for processed data.
5

Tardy, Randall D., Steve C. Brown, Mo Harmon and Richard W. Bradshaw. "Engineering and Survey-Exchange Standard Engineering Data Format: Standard Engineering Data Format". Transportation Research Record: Journal of the Transportation Research Board 1675, no. 1 (January 1999): 75–83. http://dx.doi.org/10.3141/1675-10.
6

Kralev, Velin, Radoslava Kraleva and Petia Koprinkova-Hristova. "Data modelling and data processing generated by human eye movements". International Journal of Electrical and Computer Engineering (IJECE) 11, no. 5 (October 1, 2021): 4345. http://dx.doi.org/10.11591/ijece.v11i5.pp4345-4352.

Abstract:
Data modeling and data processing are important activities in any scientific research. This research focuses on modeling and processing data generated by a saccadometer. The approach used is based on the relational data model, but the processing and storage of the data is done with client datasets. The experiments were performed with 26 randomly selected files from a total of 264 experimental sessions. The data from each experimental session was stored in three different formats: text, binary, and extensible markup language (XML). The results showed that the text and binary formats were the most compact. Several data-processing actions were analyzed. The two fastest actions were loading data from a binary file and storing data into a binary file; in contrast, the two slowest were storing the data in XML format and loading the data from a text file. One of the more time-consuming operations turned out to be the conversion of data from text format to binary format; moreover, the time required for this action does not scale proportionally with the number of records processed.
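The three storage formats compared in this abstract can be illustrated with a small standard-library sketch (the record layout is hypothetical, not the paper's actual saccadometer schema): the same records are serialized as plain text, as packed binary, and as XML, and the resulting sizes compared.

```python
import struct
import xml.etree.ElementTree as ET

# Hypothetical eye-movement samples: (timestamp_ms, x, y).
records = [(i, 0.1 * i, -0.05 * i) for i in range(1000)]

# Plain text: one line per record, fields separated by spaces.
text_blob = "\n".join(f"{t} {x:.6f} {y:.6f}" for t, x, y in records).encode()

# Binary: each record packed as one int and two doubles (20 bytes).
binary_blob = b"".join(struct.pack("<idd", t, x, y) for t, x, y in records)

# XML: one element per record, values stored as attributes.
root = ET.Element("session")
for t, x, y in records:
    ET.SubElement(root, "sample", t=str(t), x=f"{x:.6f}", y=f"{y:.6f}")
xml_blob = ET.tostring(root)

sizes = {"text": len(text_blob), "binary": len(binary_blob), "xml": len(xml_blob)}
print(sizes)  # XML carries per-record markup, so it is by far the largest.
```

This mirrors the paper's finding qualitatively: the tag overhead of XML dwarfs both the text and the fixed-width binary encodings.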
7

Kissler-Patig, M., Y. Copin, P. Ferruit, A. Pécontal-Rousset and M. M. Roth. "The Euro3D data format: A common FITS data format for integral field spectrographs". Astronomische Nachrichten 325, no. 2 (February 2004): 159–62. http://dx.doi.org/10.1002/asna.200310200.
8

De Grande, Pablo. "El formato Redatam / The Redatam format". Estudios Demográficos y Urbanos 31, no. 3 (September 1, 2016): 811. http://dx.doi.org/10.24201/edu.v31i3.15.

Abstract:
The Redatam statistical package is a software package developed by ECLAC and widely used in countries of the Americas for the dissemination of census statistics. Although it is free to use, it is licensed as proprietary software (not open source) and stores its data in a non-public format. This article introduces research results describing the data structure used by this software, including: a) a preliminary specification of the Redatam format, b) a tool for accessing and exporting its databases, and c) evidence that, contrary to what the technical documentation states, Redatam does not implement compression or encryption of the microdata it stores.
9

Bennett, Brett. "A computer program to convert SEG-2 data to SEG-Y". GEOPHYSICS 55, no. 9 (September 1990): 1272–84. http://dx.doi.org/10.1190/1.1442943.

Abstract:
The recent introduction of the SEG-2 data format to the geophysical community creates compatibility problems with existing seismic data formats. Presented here is a computer program (SEG2SEGY.C) that converts seismic data from SEG-2 format to SEG-Y format. The discussion of the program architecture assumes the reader has a working knowledge of SEG-2, SEG-Y, and the C programming language.
10

Plase, Daiga, Laila Niedrite and Romans Taranovs. "A Comparison of HDFS Compact Data Formats: Avro Versus Parquet". Mokslas - Lietuvos ateitis 9, no. 3 (July 4, 2017): 267–76. http://dx.doi.org/10.3846/mla.2017.1033.

Abstract:
In this paper, file formats like Avro and Parquet are compared with text formats to evaluate the performance of data queries, and different data query patterns are evaluated. Cloudera's open-source Apache Hadoop distribution CDH 5.4 was chosen for the experiments presented in this article. The results show that the compact data formats (Avro and Parquet) take up less storage space than plain-text formats because of their binary encoding and compression advantages. Furthermore, data queries against the column-based Parquet format are faster than against text formats and Avro.
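The row-versus-column distinction behind Avro and Parquet can be sketched in plain Python (a toy illustration of the layouts only, not the actual Avro or Parquet encodings): a columnar layout lets a query touch only the column it needs.

```python
# Toy dataset: each record has three fields.
rows = [{"id": i, "name": f"user{i}", "score": i % 100} for i in range(1000)]

# Row-oriented layout (Avro-like): whole records stored one after another.
row_store = rows

# Column-oriented layout (Parquet-like): one contiguous list per column.
col_store = {
    "id": [r["id"] for r in rows],
    "name": [r["name"] for r in rows],
    "score": [r["score"] for r in rows],
}

# Query: average score. The row store must visit every full record,
# while the column store reads a single contiguous column.
avg_from_rows = sum(r["score"] for r in row_store) / len(row_store)
avg_from_cols = sum(col_store["score"]) / len(col_store["score"])
assert avg_from_rows == avg_from_cols
```

Columnar files add to this locality advantage per-column compression, which is one reason Parquet queries in the paper outperform both text and Avro.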
11

Utama Siahaan, Andysah Putera. "Data Security Techniques Using Square Block Keys in Text Format". International Journal of Research and Review 10, no. 2 (February 11, 2023): 354–58. http://dx.doi.org/10.52403/ijrr.20230244.

Abstract:
The text format is a character format that can be read directly by any software. It is used when sending data to simplify and speed up delivery to message recipients, but it is the least secure message-storage format because it can be read directly by anyone who obtains the message. The author builds a cryptographic application that secures text formats using a character-transposition technique. The transposition is based on a square matrix, which serves as the key in the encryption and decryption process; the matrix used is 4 x 4. The study found that the text format was successfully transformed into ciphertext, avoiding the possibility of data theft. Keywords: square, security, block, encryption
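The square-block transposition described in this abstract can be sketched as follows (a minimal illustration assuming a 4 x 4 block written row by row and read out column by column; the paper's exact key construction may differ):

```python
def encrypt_block(block16: str) -> str:
    """Write 16 characters into a 4 x 4 matrix row by row, read column by column."""
    matrix = [block16[r * 4:(r + 1) * 4] for r in range(4)]
    return "".join(matrix[r][c] for c in range(4) for r in range(4))

def decrypt_block(block16: str) -> str:
    """Transposing a 4 x 4 block twice restores the original order."""
    return encrypt_block(block16)

def encrypt(text: str, pad: str = " ") -> str:
    """Pad to a multiple of 16 characters, then transpose each block."""
    text += pad * (-len(text) % 16)
    return "".join(encrypt_block(text[i:i + 16]) for i in range(0, len(text), 16))

ct = encrypt("MEET ME AT NOON.")
print(ct)
assert decrypt_block(ct) == "MEET ME AT NOON."
```

Because a plain transposition is an involution, decryption here is the same column-wise read; a keyed variant would permute rows and columns according to the key matrix.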
12

Krischer, Lion, James Smith, Wenjie Lei, Matthieu Lefebvre, Youyi Ruan, Elliott Sales de Andrade, Norbert Podhorszki, Ebru Bozdağ and Jeroen Tromp. "An Adaptable Seismic Data Format". Geophysical Journal International 207, no. 2 (August 25, 2016): 1003–11. http://dx.doi.org/10.1093/gji/ggw319.
13

Singer, Stephen W., and Curtis L. Meinert. "Format-independent data collection forms". Controlled Clinical Trials 16, no. 6 (December 1995): 363–76. http://dx.doi.org/10.1016/s0197-2456(95)00016-x.
14

Nakajima, Masayuki. "Digital Image Data Format DDES". Journal of the Institute of Television Engineers of Japan 45, no. 11 (1991): 1433–36. http://dx.doi.org/10.3169/itej1978.45.1433.
15

Ray, Subhasis, Chaitanya Chintaluri, Upinder S. Bhalla and Daniel K. Wójcik. "NSDF: Neuroscience Simulation Data Format". Neuroinformatics 14, no. 2 (November 19, 2015): 147–67. http://dx.doi.org/10.1007/s12021-015-9282-5.
16

Imber, M. "CASE Data Interchange Format standards". Information and Software Technology 33, no. 9 (November 1991): 647–55. http://dx.doi.org/10.1016/0950-5849(91)90038-d.
17

Yu, Cong, and Zhihong Yao. "XML-Based DICOM Data Format". Journal of Digital Imaging 23, no. 2 (January 28, 2009): 192–202. http://dx.doi.org/10.1007/s10278-008-9173-5.
18

Cai, Dong Gen, and Tian Rui Zhou. "Research on CAD Model Data Conversion for RP Technology". Advanced Materials Research 314-316 (August 2011): 2253–58. http://dx.doi.org/10.4028/www.scientific.net/amr.314-316.2253.

Abstract:
Data processing and conversion play an important role in RP processes, in which the choice of data format determines the data-processing procedure and method. In this paper, the formats and features of commonly used interface standards such as STL, IGES, and STEP are introduced, and data-conversion experiments on CAD models are carried out in the Pro/E system, in which the conversion effects of the different data formats are compared and analyzed and the most reasonable conversion format is proposed.
19

Rizzi, Andrea, Giovanni Petrucciani and Marco Peruzzi. "A further reduction in CMS event data for analysis: the NANOAOD format". EPJ Web of Conferences 214 (2019): 06021. http://dx.doi.org/10.1051/epjconf/201921406021.

Abstract:
A new event data format has been designed and prototyped by the CMS collaboration to satisfy the needs of a large fraction of physics analyses (at least 50%) with a per-event size of order 1 kB. This new format is more than a factor of 20 smaller than the MINIAOD format and contains only the top-level information typically used in the last steps of an analysis. The talk reviews the current analysis strategy from the point of view of event formats in CMS (both skims and formats such as RECO, AOD, MINIAOD and NANOAOD) and describes the design guidelines for the new NANOAOD format.
20

Arienzo, Alberto, Bruno Aiazzi, Luciano Alparone and Andrea Garzelli. "Reproducibility of Pansharpening Methods and Quality Indexes versus Data Formats". Remote Sensing 13, no. 21 (October 31, 2021): 4399. http://dx.doi.org/10.3390/rs13214399.

Abstract:
In this work, we investigate whether the performance of pansharpening methods depends on their input data format: in the case of spectral radiance, either the original floating-point format or an integer-packed fixed-point format. It is theoretically proven and experimentally demonstrated that methods based on multiresolution analysis are unaffected by the data format. Conversely, the format is crucial for methods based on component substitution, unless the intensity component is calculated by means of a multivariate linear regression between the upsampled bands and the lowpass-filtered Pan. Another concern related to data formats is whether quality measurements, carried out by means of normalized indexes, depend on the format of the data on which they are calculated. We focus on some of the most widely used with-reference indexes to provide novel insight into their behavior. Both theoretical analyses and computer simulations, carried out on GeoEye-1 and WorldView-2 datasets with the products of nine pansharpening methods, show that performance does not depend on the data format for purely radiometric indexes, while it depends significantly on the data format, floating-point or fixed-point, for a purely spectral index such as the spectral angle mapper. The dependence on the data format is weak for indexes that balance spectral and radiometric similarity, such as the Q2n family of indexes based on hypercomplex algebra.
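The spectral angle mapper (SAM) mentioned in this abstract measures the angle between two spectral vectors. A minimal pure-Python sketch (the spectra below are hypothetical) shows why the index is insensitive to a pure gain, while the additive offsets and rounding involved in fixed-point packing can change it:

```python
import math

def sam(a, b):
    """Spectral angle mapper: angle (radians) between spectral vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

ref = [0.12, 0.34, 0.50, 0.27]   # hypothetical reference spectrum
fus = [0.10, 0.36, 0.48, 0.30]   # hypothetical fused spectrum

angle = sam(ref, fus)
# A pure multiplicative gain (e.g. a unit change) leaves the angle unchanged ...
assert math.isclose(angle, sam([10 * x for x in ref], fus))
# ... but an additive offset, as introduced by some fixed-point packings, does not.
assert not math.isclose(angle, sam([x + 0.2 for x in ref], fus))
```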
21

Moore, Josh, Chris Allan, Sébastien Besson, Jean-Marie Burel, Erin Diel, David Gault, Kevin Kozlowski et al. "OME-NGFF: a next-generation file format for expanding bioimaging data-access strategies". Nature Methods 18, no. 12 (November 29, 2021): 1496–98. http://dx.doi.org/10.1038/s41592-021-01326-w.

Abstract:
The rapid pace of innovation in biological imaging and the diversity of its applications have prevented the establishment of a community-agreed standardized data format. We propose that complementing established open formats such as OME-TIFF and HDF5 with a next-generation file format such as Zarr will satisfy the majority of use cases in bioimaging. Critically, a common metadata format used in all these vessels can deliver truly findable, accessible, interoperable and reusable bioimaging data.
22

Van Horik, René, and Dirk Roorda. "Migration to Intermediate XML for Electronic Data (MIXED): Repository of Durable File Format Conversions". International Journal of Digital Curation 6, no. 2 (July 25, 2011): 245–52. http://dx.doi.org/10.2218/ijdc.v6i2.200.

Abstract:
Data Archiving and Networked Services (DANS), the Dutch scientific data archive for the social sciences and humanities, is engaged in the Migration to Intermediate XML for Electronic Data (MIXED) project to develop open-source software that implements the smart-migration strategy for the long-term archiving of file formats. Smart migration concerns the conversion, upon ingest, of specific kinds of data formats, such as spreadsheets and databases, to an intermediate XML-formatted file. It is assumed that the long-term curation of the XML files is much less problematic than the migration of binary source files, and that the intermediate XML file can be converted efficiently to whatever file formats are common in the future. The features of the intermediate XML files are stored in the so-called Standard Data Formats for Preservation (SDFP) specification. This XML schema can be considered an umbrella, as it contains existing formal descriptions of file formats developed by others. SDFP also contains schemata developed by DANS, for example a schema for file-oriented databases, which can be used for the binary DataPerfect format that was in large-scale use about twenty years ago and for which no existing XML schema could be found. The software developed in the MIXED project has been set up as a generic framework together with a number of plug-ins, and can be considered a repository of durable file format conversions. This paper contains an overview of the results of the MIXED project.
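The smart-migration idea of converting a tabular source into an intermediate XML file can be sketched with the standard library (a toy conversion with a made-up element vocabulary; the actual SDFP schema is far richer):

```python
import csv
import io
import xml.etree.ElementTree as ET

def table_to_xml(csv_text: str) -> bytes:
    """Convert a CSV table into a simple intermediate XML representation."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    root = ET.Element("table")
    for row in reader:
        rec = ET.SubElement(root, "record")
        for name, value in zip(header, row):
            ET.SubElement(rec, "field", name=name).text = value
    return ET.tostring(root, encoding="utf-8")

xml_bytes = table_to_xml("id,name\n1,Ada\n2,Grace\n")
print(xml_bytes.decode())
```

Once the data sits in a self-describing XML vessel like this, future migrations become transformations of the XML rather than reverse engineering of a binary original.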
23

Evgin, Alexander Aleksandrovich, Mikhail Aleksandrovich Solovev and Vartan Andronikovich Padaryan. "Model and declarative specification language of binary data formats". Proceedings of the Institute for System Programming of the RAS 33, no. 6 (2021): 27–50. http://dx.doi.org/10.15514/ispras-2021-33(6)-3.

Abstract:
A number of tasks related to binary data formats include parsing, generation, and conjoint code-and-data analysis. A key element for all of these tasks is a universal data format model. This paper proposes an approach to modeling binary data formats. The described model is expressive enough to specify the most common data formats. The distinctive features of the model are its flexibility in specifying field locations and its ability to describe external fields, which are not resolved into detailed structure during parsing. The implemented infrastructure allows a model to be created and modified through application programming interfaces. An algorithm is proposed for parsing binary data against a model, based on the concept of computability of fields. The paper also presents a domain-specific language for data format specification, indicates the formats specified so far, and discusses potential applications of the model to programmatic analysis of formatted data.
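The idea of a declarative field specification driving a generic parser can be sketched with the `struct` module (the three-field header below is hypothetical, not the paper's actual specification language):

```python
import struct

# Declarative specification: (field name, struct format code), in file order.
HEADER_SPEC = [("magic", "4s"), ("version", "<H"), ("payload_len", "<I")]

def parse(spec, data: bytes) -> dict:
    """Walk the spec, consuming each field from the front of the buffer."""
    fields, offset = {}, 0
    for name, fmt in spec:
        (fields[name],) = struct.unpack_from(fmt, data, offset)
        offset += struct.calcsize(fmt)
    return fields

# A sample buffer matching the spec: magic, version 3, 128-byte payload length.
blob = b"DATA" + struct.pack("<H", 3) + struct.pack("<I", 128)
header = parse(HEADER_SPEC, blob)
assert header == {"magic": b"DATA", "version": 3, "payload_len": 128}
```

The parser never hard-codes offsets: changing the spec changes what is parsed, which is the essence of a declarative format description.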
24

Yuferov, Anatoliy G. "Infological models of the ENDF-format nuclear data". Nuclear Energy and Technology 5, no. 1 (March 20, 2019): 53–59. http://dx.doi.org/10.3897/nucet.5.33984.

Abstract:
Issues involved in the infological modeling of the ENDF-format nuclear data libraries are considered, with the aim of converting ENDF files into a relational database. The transfer to a relational format makes it possible to use standard, readily available tools for nuclear data processing, which simplifies the conversion and operation of this data array. Infological models are described using formulas of the "Entity (List of Attributes)" type. The proposed infological formulas are based on the physical nature of the data and on theoretical relations. This eliminates the need to introduce a special notation to describe the structure and content of the data, which in turn facilitates the use of relational formats in codes and in the solution of nuclear data evaluation problems. The concept of nuclear informatics is formulated on the basis of relational DBMS technologies, as one of the tools for solving the "big data" problem in modern science and technology. The organizational and technological grounds for the transfer of ENDF libraries to a relational format are presented, and the requirements for nuclear data presentation formats supported by relational DBMSs are listed. Peculiarities of the infological model construction, conditioned by the hierarchical nature of nuclear data, are identified. The sequence for saving the ENDF metadata is presented, which can be useful for the verification and validation (testing of structural and syntactic validity and operability) of both the source data and the procedures for conversion to a relational format. Formulas of infological models are presented for the cross-sections file, the secondary neutron energy distributions file, and the nuclear reaction product energy-angle distributions file. A complete array of infological models for ENDF libraries and the generation modules of the respective relational tables are available on a public website.
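Storing evaluated cross sections in a relational table, as proposed above, can be sketched with `sqlite3` (the table layout and values are illustrative, not actual ENDF content):

```python
import sqlite3

# In-memory relational store for (energy, cross-section) points.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE cross_section (
    nuclide TEXT, reaction TEXT, energy_ev REAL, sigma_barn REAL)""")

points = [("U-235", "fission", 0.0253, 585.0),
          ("U-235", "fission", 1.0e6, 1.2),
          ("U-238", "capture", 0.0253, 2.68)]
con.executemany("INSERT INTO cross_section VALUES (?, ?, ?, ?)", points)

# Standard SQL replaces bespoke ENDF record parsing for simple retrieval tasks.
rows = con.execute("""SELECT energy_ev, sigma_barn FROM cross_section
                      WHERE nuclide = 'U-235' AND reaction = 'fission'
                      ORDER BY energy_ev""").fetchall()
print(rows)
```

This is the practical payoff the abstract describes: once the data is relational, filtering, joining, and validating it needs no format-specific code.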
25

Bernhard, Andreas, and Kurniawan Hagi. "How a thief stole your data over network". International Journal of Applied Business and Information Systems 1, no. 2 (January 5, 2018): 13–18. http://dx.doi.org/10.31763/ijabis.v1i2.4.

Abstract:
To date, many companies hold data that is already very large, is growing very quickly, and comes in a plurality of formats: no longer hundreds of megabytes or gigabytes, but countless terabytes. Beyond the diversity of formats, today's data is not just text but may also be video, images, or other formats. With data continuing to grow along this trend, the security aspect of the data is often overlooked or even ignored.
26

Ferguson, J. Scott, and Daniel A. Chayes. "A generic swath‐mapping data format". Marine Geodesy 15, no. 2-3 (January 1992): 129–40. http://dx.doi.org/10.1080/01490419209388049.
27

Ho, Wei-hsin, and Ge-wen Lee. "Geographic Data Exchange Format in Taiwan". Journal of Surveying Engineering 122, no. 3 (August 1996): 114–31. http://dx.doi.org/10.1061/(asce)0733-9453(1996)122:3(114).
28

Nyberg, Karl. "Parsing hierarchical data format (HDF) files". ACM SIGAda Ada Letters 30, no. 2 (August 30, 2010): 19–24. http://dx.doi.org/10.1145/2593988.2593990.
29

Fuchs, Karl. "Uniform seismic data recording format explored". Eos, Transactions American Geophysical Union 74, no. 37 (1993): 421. http://dx.doi.org/10.1029/93eo00472.
30

Koch, Douglas D., Thomas Kohnen, Stephen A. Obstbaum and Emanuel S. Rosen. "Format for reporting refractive surgical data". Journal of Cataract & Refractive Surgery 24, no. 3 (March 1998): 285–87. http://dx.doi.org/10.1016/s0886-3350(98)80305-2.
31

Smith, Jack, Jake Johnson, James Schubert and Randy Widell. "A New Format for Polysomnography Data". Sleep 28, no. 11 (November 2005): 1473. http://dx.doi.org/10.1093/sleep/28.11.1473.
32

Kemp, Bob, and Marco Roessen. "European Data Format Now Supports Video". Sleep 36, no. 7 (July 2013): 1111. http://dx.doi.org/10.5665/sleep.2822.
33

Fu, Y. Q., N. K. A. Bryan, O. A. San and L. B. Hong. "Data Format Transferring for FIB Microfabrication". International Journal of Advanced Manufacturing Technology 16, no. 8 (July 3, 2000): 600–602. http://dx.doi.org/10.1007/s001700070050.
34

Ge, Jiang Hua, Guo An Gao, Ya Ping Wang and Chang Bao Zhou. "The Design of Software Platform of Data Transformation and Integration Based on J2EE Technology". Applied Mechanics and Materials 10-12 (December 2007): 563–67. http://dx.doi.org/10.4028/www.scientific.net/amm.10-12.563.

Abstract:
This paper conceives the concept of whole-life data management in an enterprise-integration environment. Facing many different data storage systems and data types, and different storage formats for the same kind of data, it designs an extensible, function-open system integration platform based on J2EE technology. Through intelligent management of the data storage systems, the platform shields its end users from the differences between storage systems. Through function models deployed onto it that transform data stored in one format into another, the system manages the different storage formats, makes them transparent to users, and realizes data transformation management. Through a universal XML-described format into which the other formats are transformed by the function models, the internal structure of the data is exposed to the outside world, realizing data integration as the basis of business integration.
35

Kemp, Bob, and Jesus Olivan. "European data format 'plus' (EDF+), an EDF alike standard format for the exchange of physiological data". Clinical Neurophysiology 114, no. 9 (September 2003): 1755–61. http://dx.doi.org/10.1016/s1388-2457(03)00123-8.
36

Yu, Q., P. Helmholz, D. Belton and G. West. "Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4 (April 23, 2014): 335–40. http://dx.doi.org/10.5194/isprsarchives-xl-4-335-2014.

Abstract:
The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.
37

Saleh Aloufi, Khalid. "Generating RDF resources from web open data portals". Indonesian Journal of Electrical Engineering and Computer Science 16, no. 3 (December 1, 2019): 1521. http://dx.doi.org/10.11591/ijeecs.v16.i3.pp1521-1529.

Abstract:
Open data are available from various private and public institutions in different resource formats. A great number of open datasets are already published through open data portals, where datasets and resources are mainly presented in tabular or sheet formats. However, such formats present barriers to application development and web standards. One of the recommended web standards for semantic web applications is RDF. Various research efforts have focused on presenting open data in RDF formats, but no framework has transformed tabular open data into RDF while considering the HTML tags and properties of the resources and datasets. Therefore, a methodology is required to generate RDF resources from this type of open data resource. The methodology presented here transforms open data from a tabular format into RDF files for the Saudi Open Data Portal, successfully converting resources in sheet format into RDF resources. Recommendations and future work are given to enhance the development of open data.
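Transforming a tabular resource into RDF, as the methodology above does, can be sketched in pure Python emitting N-Triples (the base URI, column names, and figures below are hypothetical):

```python
def rows_to_ntriples(header, rows, base="http://example.org/dataset/"):
    """Emit one RDF triple per cell: <row-URI> <column-URI> "value" ."""
    triples = []
    for i, row in enumerate(rows):
        subject = f"<{base}row/{i}>"
        for name, value in zip(header, row):
            triples.append(f'{subject} <{base}column/{name}> "{value}" .')
    return "\n".join(triples)

nt = rows_to_ntriples(["city", "population"],
                      [["Madinah", "1500000"], ["Riyadh", "7600000"]])
print(nt)
```

Each spreadsheet cell becomes a subject-predicate-object statement, which is what makes the data queryable with standard semantic-web tooling such as SPARQL.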
38

Raugh, Anne, and J. Steven Hughes. "The Road to an Archival Data Format—Data Structures". Planetary Science Journal 2, no. 5 (September 28, 2021): 204. http://dx.doi.org/10.3847/psj/ac1f22.
39

FUKUDA, Ken. "BioPAX: A Standard Data Format for Pathway Data Exchange". Seibutsu Butsuri 47, no. 3 (2007): 179–84. http://dx.doi.org/10.2142/biophys.47.179.
40

Gagliardi, Dimitri. "Material data matter — Standard data format for engineering materials". Technological Forecasting and Social Change 101 (December 2015): 357–65. http://dx.doi.org/10.1016/j.techfore.2015.09.015.
41

Bogatu, Alex, Norman W. Paton, Alvaro A. A. Fernandes i Martin Koehler. "Towards Automatic Data Format Transformations: Data Wrangling at Scale". Computer Journal 62, nr 7 (1.12.2018): 1044–60. http://dx.doi.org/10.1093/comjnl/bxy118.

Full text source
Abstract:
Data wrangling is the process whereby data are cleaned and integrated for analysis. Data wrangling, even with tool support, is typically a labour-intensive process. One aspect of data wrangling involves carrying out format transformations on attribute values, for example so that names or phone numbers are represented consistently. Recent research has developed techniques for synthesizing format transformation programs from examples of the source and target representations. This is valuable, but still requires a user to provide suitable examples, something that may be challenging in applications in which there are huge datasets or numerous data sources. In this paper, we investigate the automatic discovery of examples that can be used to synthesize format transformation programs. In particular, we propose two approaches to identifying candidate data examples and validating the transformations that are synthesized from them. The approaches are evaluated empirically using datasets from open government data.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
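A toy version of the example-driven synthesis this abstract describes: from a single (source, target) pair we "learn" a field permutation and apply it to new values. Real synthesis systems search a much richer space of string-transformation programs; the function below is a hypothetical illustration only.

```python
# Toy transformation-by-example sketch: infer a permutation of separator-delimited
# fields from one example pair, then apply it to unseen values.
def synthesize_reorder(example_src, example_tgt, sep=", "):
    """Learn the field permutation that maps example_src onto example_tgt."""
    src, tgt = example_src.split(sep), example_tgt.split(sep)
    perm = [src.index(field) for field in tgt]  # target position -> source index
    return lambda value: sep.join(value.split(sep)[i] for i in perm)

# One example pair is enough for this restricted program space.
transform = synthesize_reorder("Doe, John", "John, Doe")
result = transform("Turing, Alan")  # -> "Alan, Turing"
```

The point the abstract makes is that even such a simple synthesizer still needs a good example pair, which motivates the paper's automatic discovery of candidate examples.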
42

Pence, William D., François Ochsenbein, Donald C. Wells, Steven W. Allen, Mark R. Calabretta, Lucio Chiappetti, Daniel Durand et al. "DIVISION XII / COMMISSION 5 / WORKING GROUP FITS DATA FORMAT". Proceedings of the International Astronomical Union 4, T27A (December 2008): 366–68. http://dx.doi.org/10.1017/s1743921308025891.

Full text source
Abstract:
The Working Group FITS (WG-FITS) is the international control authority for the Flexible Image Transport System (FITS) data format. The WG-FITS was formed by a formal resolution of the IAU XX General Assembly in Baltimore (MD, USA), 1988, to maintain the existing FITS standards and to approve future extensions to FITS.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
43

Belov, Vladimir, Alexander N. Kosenkov and Evgeny Nikulchev. "Experimental Characteristics Study of Data Storage Formats for Data Marts Development within Data Lakes". Applied Sciences 11, no. 18 (17.09.2021): 8651. http://dx.doi.org/10.3390/app11188651.

Full text source
Abstract:
One of the most popular methods for building analytical platforms involves the use of the concept of data lakes. A data lake is a storage system in which the data are presented in their original format, making it difficult to conduct analytics or present aggregated data. To solve this issue, data marts are used: environments of stored, highly specialized data focused on the requests of employees of a particular department or line of an organization's work. This article presents a study of big data storage formats in the Apache Hadoop platform when used to build data marts.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
44

Otter, Martin. "Signal Tables: An Extensible Exchange Format for Simulation Data". Electronics 11, no. 18 (6.09.2022): 2811. http://dx.doi.org/10.3390/electronics11182811.

Full text source
Abstract:
This article introduces Signal Tables as a format to exchange data associated with simulations based on dictionaries and multi-dimensional arrays. Typically, simulation results, as well as model parameters, reference signals, table-based input signals, measurement data, look-up tables, etc., can be represented by a Signal Table. Applications can extend the format to add additional data and metadata/attributes, for example, as needed for a credible simulation process. The format follows a logical view based on a few data structures that can be directly mapped to data structures available in programming languages such as Julia, Python, and Matlab. These data structures can be conveniently used for pre- and post-processing in these languages. A Signal Table can be stored on file by mapping the logical view to available textual or binary persistent file formats, for example, JSON, HDF5, BSON, and MessagePack. A subset of a Signal Table can be imported in traditional tables, for example, in Excel, CSV, pandas, or DataFrames.jl, by flattening multi-dimensional arrays and not storing parameters. The format has been developed and evaluated with the Open Source Julia packages SignalTables.jl and Modia.jl.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
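The logical view described in the abstract (dictionaries plus multi-dimensional arrays, mapped to a persistent format such as JSON) can be sketched in Python, one of the languages the abstract names. The table contents and attribute keys below are illustrative assumptions, not the SignalTables.jl schema.

```python
# Sketch of a signal table as nested dictionaries of arrays plus metadata,
# serialized to one of the persistent formats the abstract mentions (JSON).
import json

signal_table = {
    "_attributes": {"experiment": "demo", "date": "2022-09-06"},  # table metadata
    "time":        {"values": [0.0, 0.1, 0.2], "unit": "s"},
    "motor.angle": {"values": [[0.0], [0.5], [1.1]], "unit": "rad"},  # 3x1 array
}

serialized = json.dumps(signal_table)  # persist the logical view as JSON text
restored = json.loads(serialized)      # round-trip back to plain dictionaries
```

Exporting a subset into a traditional table, as the abstract describes, would amount to dropping the metadata entries and flattening the multi-dimensional `values` arrays into columns.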
45

Liao, Lang, Yong Hong Huang and Chang Jiang Zhao. "Data Storm Monitoring System of Large Power Industrial Control Network under Non Uniform Format". Advanced Materials Research 989-994 (July 2014): 2970–74. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.2970.

Full text source
Abstract:
In large power industrial control networks, the large user base makes data traffic heavy, data formats are inconsistent, and network data storms are therefore difficult to monitor. A data storm monitoring and control system for large power industrial control networks under non-uniform formats is designed. A network node information collector gathers the data; the collected results are treated as a mathematical expectation, which is then trained, solving the problem of data flow storm statistics. A format conversion function is added to the system: data storm monitoring results are converted and stored in a text format, and the monitoring program reads and displays them, resolving the difficulties caused by non-unified formats. System tests show that the system can monitor data storms in large power industrial control networks, that the monitoring results are precise, and that formatting is fast. The generated result file is 18.1% of the size of the original file while fully reflecting the characteristics of the network data storm.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
46

ASIABANPOUR, BAHRAM, ALIREZA MOKHTAR, MOHAMMAD HAYASI, ALI KAMRANI and EMAD ABOUEL NASR. "AN OVERVIEW ON FIVE APPROACHES FOR TRANSLATING CAD DATA INTO MANUFACTURING INFORMATION". Journal of Advanced Manufacturing Systems 08, no. 01 (June 2009): 89–114. http://dx.doi.org/10.1142/s0219686709001687.

Full text source
Abstract:
All Rapid Prototyping and CNC material removal processes use information which is extracted from a CAD system. There are several ways to convert CAD data into usable manufacturing information. In this paper, five methods of translating CAD data into a usable manufacturing format are explained. These five methods are data translation from CAD files in STL, DXF, STEP-NC, and IGES formats as well as a platform-dependent area method of manufacturing information in a desirable format. For each method, algorithms and details about the CAD data translation into usable manufacturing and prototyping processes formats are presented. Finally, applications of each approach and its pros and cons are summarized in a table.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
47

Belov, Vladimir, Andrey Tatarintsev and Evgeny Nikulchev. "Choosing a Data Storage Format in the Apache Hadoop System Based on Experimental Evaluation Using Apache Spark". Symmetry 13, no. 2 (26.01.2021): 195. http://dx.doi.org/10.3390/sym13020195.

Full text source
Abstract:
One of the most important tasks of any platform for big data processing is storing the data received. Different systems have different requirements for the storage formats of big data, which raises the problem of choosing the optimal data storage format to solve the current problem. This paper describes the five most popular formats for storing big data, presents an experimental evaluation of these formats and a methodology for choosing the format. The following data storage formats will be considered: avro, CSV, JSON, ORC, parquet. At the first stage, a comparative analysis of the main characteristics of the studied formats was carried out; at the second stage, an experimental evaluation of these formats was prepared and carried out. For the experiment, an experimental stand was deployed with tools for processing big data installed on it. The aim of the experiment was to find out characteristics of data storage formats, such as the volume and processing speed for different operations using the Apache Spark framework. In addition, within the study, an algorithm for choosing the optimal format from the presented alternatives was developed using tropical optimization methods. The result of the study is presented in the form of a technique for obtaining a vector of ratings of data storage formats for the Apache Hadoop system, based on an experimental assessment using Apache Spark.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
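The "volume" characteristic measured in this experiment can be illustrated with a toy standard-library comparison: the same records serialized as CSV and as JSON occupy different amounts of space, because JSON repeats field names in every record. The paper evaluates the real formats, including ORC and parquet, with Apache Spark; this sketch only motivates why the choice of format matters.

```python
# Toy illustration of the storage-volume criterion: serialize identical records
# as CSV (field names written once, in the header) and as JSON (field names
# repeated per record) and compare the resulting sizes in characters.
import csv, io, json

records = [{"id": i, "value": i * 0.5} for i in range(100)]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "value"])
writer.writeheader()
writer.writerows(records)
csv_size = len(buf.getvalue())

json_size = len(json.dumps(records))  # larger: keys repeated in every object
```

Columnar formats such as ORC and parquet go further still, storing each column contiguously and compressed, which is why the experimental comparison in the paper spans all five formats rather than relying on intuition.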
48

Marnat, L., C. Gautier, C. Colin and G. Gesquière. "PY3DTILERS: AN OPEN SOURCE TOOLKIT FOR CREATING AND MANAGING 2D/3D GEOSPATIAL DATA". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4/W3-2022 (14.10.2022): 165–72. http://dx.doi.org/10.5194/isprs-annals-x-4-w3-2022-165-2022.

Full text source
Abstract:
In recent years, the production of 3D geospatial data using formats such as IFC, CityGML and GeoJSON, has increased. Visualizing this data on the web requires solving a variety of problems, such as the massive amount of 3D objects to be visualized at the same time and the creation of geometry suitable for a 3D viewer. Cesium and OGC introduced the 3D Tiles format in 2015 to solve these issues. They have created a specific format optimized for streaming and rendering 3D geospatial content, based on the glTF format developed by Khronos. The recency of the 3D Tiles format implies the need to experiment around this format and to test its interoperability with other geospatial and urban data formats. There is also the will to innovate on the organization of 3D objects in order to offer a better control on the visualization. Therefore, there is a need for an open source tool capable of converting 3D geospatial data into 3D Tiles to visualize them on the web, but also to test and develop new methods of spatial clustering and creating Levels of Detail (LoD) of urban objects. We propose Py3DTilers in this paper, an open source tool to convert and manipulate 3D Tiles from the most common 3D geospatial data models: CityGML, IFC, OBJ, and GeoJSON. With this tool, we ensure that the generated 3D Tiles respect the specification described by the OGC, in order to be used in various viewers. We provide a generic solution for spatially organizing objects and for creating LoDs, while allowing the community to customize these methods to go further in finding efficient solutions for visualizing geospatial objects on the web.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
49

Moldovan, Grigore, and Michael Zabel. "Quantitative Image Format for Electron Microscopy". Microscopy and Microanalysis 26, S2 (30.07.2020): 1176–78. http://dx.doi.org/10.1017/s1431927620017225.

Full text source
Abstract:
Experimental data, simulation, data analysis and visualisation require image file formats that are open source and able to contain and manage quantitative data. Quantification techniques bring the new challenge of managing image calibration parameters and formulas in an open and efficient format, compatible with routine microscopy workflows. A practical approach to quantitative image format is presented and discussed here, relying on open and extensible file formats: Tagged Image File (TIF) and Extensible Metadata Platform (XMP).
Citation styles: APA, Harvard, Vancouver, ISO, etc.
50

Mazzotti, Diego, Bethany Staley, Brendan Keenan, Allan Pack, Richard Schwab and Mary Regina Boland. "399 Using Machine Learning to Inform Extraction of Clinical Data from Sleep Study Reports". Sleep 44, Supplement_2 (1.05.2021): A158—A159. http://dx.doi.org/10.1093/sleep/zsab072.398.

Full text source
Abstract:
Introduction: In-laboratory and home sleep studies are important tools for diagnosing sleep disorders. However, a limited number of measurements is used to inform disease severity, and only specific measures, if any, are stored as structured fields in electronic health records (EHR). We propose a sleep study data extraction approach based on supervised machine learning to facilitate the development of specialized format-specific parsers for large-scale automated sleep data extraction. Methods: Using retrospective data from the Penn Medicine Sleep Center, we identified 64,100 sleep study reports stored in Microsoft Word documents of varying formats, recorded from 2001–2018. A random sample of 200 reports was selected for manual annotation of formats (e.g., layout) and type (e.g., baseline, split-night, home sleep apnea tests). Using text mining tools, we extracted 71 document property features (e.g., section dimensions, paragraph and table elements, regular expression matches). We identified 14 different formats and 7 study types. We used these manual annotations as multiclass outcomes in a random forest classifier to evaluate prediction of sleep study format and type from document property features. Out-of-bag (OOB) error rates and multiclass area under the receiver operating curve (mAUC) were estimated to evaluate training and testing performance of each model. Results: We successfully predicted sleep study format and type using random forest classifiers. Training OOB error rate was 5.6% for study format and 8.1% for study type. When evaluating these models on independent testing data, the mAUC for classification of study format was 0.85 and for study type was 1.00. When applied to the large universe of diagnostic sleep study reports, we successfully extracted hundreds of discrete fields in 38,252 reports representing 33,696 unique patients.
Conclusion: We accurately classified a sample of sleep study reports according to their format and type, using a random forest multiclass classification method. This informed the development and successful deployment of custom data extraction tools for sleep study reports. The ability to leverage these data can improve understanding of sleep disorders in the clinical setting and facilitate implementation of large-scale research studies within the EHR. Support: American Heart Association (20CDA35310360).
Citation styles: APA, Harvard, Vancouver, ISO, etc.
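The document-property features this abstract mentions (element counts, regular-expression matches) can be sketched with the standard library; a multiclass classifier such as the random forest described above would then consume the resulting numeric features. The feature names and the sample report below are hypothetical, not the study's actual 71-feature set.

```python
# Sketch of the feature-extraction step: turn a report's text into numeric
# "document property" features that a multiclass classifier could consume.
import re

def document_features(text):
    """Extract illustrative count-based features from a plain-text report."""
    return {
        "n_paragraphs": text.count("\n\n") + 1,
        "n_table_rows": len(re.findall(r"^\|.*\|$", text, flags=re.M)),
        "n_ahi_mentions": len(re.findall(r"\bAHI\b", text)),  # apnea-hypopnea index
        "n_dates": len(re.findall(r"\b\d{1,2}/\d{1,2}/\d{4}\b", text)),
    }

report = "Sleep study 01/15/2018\n\nAHI was 12.\n\n| stage | min |\n| N2 | 210 |"
features = document_features(report)
```

In the study's pipeline, vectors like this one, computed for each of the 200 annotated reports, serve as the input matrix for the random forest, with the manually annotated format and type labels as the multiclass outcomes.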