Journal articles on the topic 'Standard exchange formats'

Consult the top 50 journal articles for your research on the topic 'Standard exchange formats.'

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Egerton, R. F., D. S. Bright, S. D. Davilla, P. Ingram, E. J. Kirkland, M. Kundmann, C. E. Lyman, P. Rez, E. Steele, and N. J. Zaluzec. "Standard formats for the exchange and storage of image data." Proceedings, annual meeting, Electron Microscopy Society of America 51 (August 1, 1993): 220–21. http://dx.doi.org/10.1017/s0424820100146941.

Abstract:
In microscopy, there is an increasing need for images to be recorded electronically and stored digitally on disk or tape. This image data can be shared by mailing these magnetic media or by electronic transmission along telephone lines (e.g. modem transfer) or special networks, such as Bitnet and Internet. In each case, the format in which the image is stored or transmitted must be known to the recipient in order to correctly recover all the information. Because there are many image formats to choose from, it would undoubtedly save misunderstanding and frustration if a group of individuals with similar interests and needs could agree upon a common format. The MSA Standards Committee has surveyed several formats which could be of particular interest to microscopists, with a view to making a recommendation to our community. Our chief concern has been compatibility with existing software, combined with an adequate representation of the data, compactness of data storage (on disk) and a reasonable rate of data transfer.
2

Bergmann, Frank T., Nicolas Rodriguez, and Nicolas Le Novère. "COMBINE Archive Specification Version 1." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 104–18. http://dx.doi.org/10.1515/jib-2015-261.

Abstract:
Summary: Several standard formats have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. The Open Modeling EXchange format (OMEX) supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, an optional metadata file, and the files describing the model. The manifest is an XML file listing all files included in the archive and their type. The metadata file provides additional information about the archive and its content. Although any format can be used, we recommend an XML serialization of the Resource Description Framework. Together with the other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails.
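Since the abstract spells out the archive layout concretely (a ZIP with a manifest listing each file and its type), a small sketch may help. The Python below assembles such a container with the standard library; the omexManifest/content element names and the identifiers.org format URIs follow our reading of the COMBINE Archive specification and should be verified against it.

```python
import zipfile
import xml.etree.ElementTree as ET

# Assumed namespace and element names from the COMBINE Archive specification;
# verify against the spec before relying on them.
OMEX_NS = "http://identifiers.org/combine.specifications/omex-manifest"

def build_manifest(entries):
    """Build a manifest XML listing every file in the archive and its format."""
    root = ET.Element("omexManifest", xmlns=OMEX_NS)
    for location, fmt in entries:
        ET.SubElement(root, "content", location=location, format=fmt)
    return ET.tostring(root, encoding="unicode")

entries = [
    (".", "http://identifiers.org/combine.specifications/omex"),
    ("./model.xml", "http://identifiers.org/combine.specifications/sbml"),
    ("./metadata.rdf", "http://identifiers.org/combine.specifications/omex-metadata"),
]

with zipfile.ZipFile("experiment.omex", "w") as archive:
    archive.writestr("manifest.xml", build_manifest(entries))
    archive.writestr("model.xml", "<sbml/>")        # placeholder model file
    archive.writestr("metadata.rdf", "<rdf:RDF/>")  # optional RDF metadata
```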
3

Lenivtceva, Iuliia D., and Georgy Kopanitsa. "Evaluating Manual Mappings of Russian Proprietary Formats and Terminologies to FHIR." Methods of Information in Medicine 58, no. 04/05 (November 2019): 151–59. http://dx.doi.org/10.1055/s-0040-1702154.

Abstract:
Background: Evaluating potential data losses from mapping proprietary medical data formats to standards is essential for decision making. The article implements a method to evaluate the preliminary content overlap between proprietary medical formats, including national terminologies, and Fast Healthcare Interoperability Resources (FHIR), an international medical standard. Methods: Three types of mappings were evaluated in the article: the proprietary format matched to FHIR, national terminologies matched to the FHIR mappings, and concepts from national terminologies matched to the Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT). We matched attributes of the formats with FHIR definitions and calculated the content overlap. Results: The article reports the results of a manual mapping between a proprietary medical format and the FHIR standard. The following results were obtained: 81% content overlap for the proprietary format to FHIR mapping, 88% content overlap for the national terminologies to FHIR mapping, and 98.6% concept matching for the national terminologies to SNOMED CT mapping. Twenty tables from the proprietary format and 20 dictionaries were matched with FHIR resources; nine dictionaries were matched with SNOMED CT concepts. Conclusion: Mapping medical formats is a challenge. The obtained overlaps are promising in comparison with previously investigated results. The study showed that standardization of data exchange between proprietary formats and FHIR is possible in Russia, and that national terminologies can be used in FHIR-based information systems.
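The abstract reports overlap percentages without giving the formula; one plausible reading is the share of a format's attributes that found a FHIR counterpart. The sketch below computes that reading on a toy, invented attribute set; the paper's exact metric may weight attributes or count partial matches differently.

```python
def content_overlap(format_attributes, matched_attributes):
    """Share of a proprietary format's attributes with a FHIR counterpart.

    One plausible reading of the 'content overlap' figure; the paper's exact
    formula may differ.
    """
    return 100.0 * len(matched_attributes) / len(format_attributes)

# Hypothetical toy mapping of a proprietary lab-result table to FHIR Observation.
attributes = {"patient_id", "test_code", "value", "unit", "lab_internal_flag"}
matched = {"patient_id", "test_code", "value", "unit"}  # no FHIR match for the flag
print(f"{content_overlap(attributes, matched):.0f}% overlap")  # -> 80% overlap
```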
4

Kim, Youngki, Hanra Lee, Mutahar Safdar, Tahir Abbas Jauhar, and Soonhung Han. "Exchange of parametric assembly models based on neutral assembly constraints." Concurrent Engineering 27, no. 4 (August 20, 2019): 285–94. http://dx.doi.org/10.1177/1063293x19869047.

Abstract:
It is difficult to exchange parametric assembly models using conventional neutral formats such as the standard for the exchange of product model data or the initial graphics exchange specification. These formats support only boundary representation information, which makes parametric re-evaluation impossible once a model has been exchanged. In order to exchange parametric information along with the design intent, a design history-based macro-parametric approach was proposed. This approach, however, supported only the exchange of individual part models. As most products are manufactured as assemblies, in which several components are connected by multiple constraints, it is necessary to exchange assembly model data as well. To overcome the issue of post-exchange editability, a collection of neutral assembly commands was introduced to extend the capabilities of the macro-parametric approach. A set of neutral assembly constraints was defined and a system for exchanging parametric assembly models was implemented. An assembly model consisting of coaxial and incidence constraints was successfully exchanged between two commercial computer-aided design systems: CATIA and NX. It was possible to re-evaluate the assembly model parametrically after the exchange. The method can be further extended to exchange the remaining constraint types in different commercial computer-aided design systems.
5

Azeroual, Otmane, and Nico Herbig. "Mapping and semantic interoperability of the German RCD data model with the Europe-wide accepted CERIF." Information Services & Use 40, no. 1-2 (October 23, 2020): 87–113. http://dx.doi.org/10.3233/isu-200076.

Abstract:
The provision, processing and distribution of research information are increasingly supported by the use of research information systems (RIS) at higher education institutions. National and international exchange formats or standards can support the validation and use of research information and increase its informative value and comparability through consistent semantics. The formats overlap considerably and represent different approaches to modeling. This paper presents the data model of the Research Core Dataset (RCD) and discusses its impact on data quality in RIS. It then compares the RCD with the Europe-wide accepted Common European Research Information Format (CERIF) standard, in order to support an implementation of the RCD that is CERIF-compatible, so that institutions can integrate their research information from internal and external heterogeneous data sources and ultimately provide valuable information with high levels of data quality. Such information is fundamental to decision-making and knowledge generation as well as to the presentation of research.
6

Taylor, P., S. Cox, G. Walker, D. Valentine, and P. Sheahan. "WaterML2.0: development of an open standard for hydrological time-series data exchange." Journal of Hydroinformatics 16, no. 2 (April 8, 2013): 425–46. http://dx.doi.org/10.2166/hydro.2013.174.

Abstract:
The increasing global demand for freshwater is leading nations to improve their terrestrial water monitoring and reporting systems to better understand the availability, and quality, of this valuable resource. A barrier to this is the inability of stakeholders to share information relating to water observations data: traditional hydrological information systems have relied on internal custom data formats to exchange data, leading to issues in data integration and exchange. Organisations are looking to information standards to assist in data exchange, integration and interpretation to lower the costs of use, and re-use, of monitoring data. The WaterML2.0 Standards Working Group (SWG), working within the Open Geospatial Consortium (OGC) and in cooperation with the joint OGC-World Meteorological Organisation (WMO) Hydrology Domain Working Group (HDWG), has developed an open standard for the exchange of water observation data. The focus of the standard is time-series data, commonly used for hydrological applications such as flood forecasting, environmental reporting and hydrological infrastructure, where a lack of standards inhibits efficient re-use and automation. This paper describes the development methodology and principles of WaterML2.0, key parts of its information model, implementation scenarios, evaluation and future work. WaterML2.0 was adopted by the OGC as an official standard in September 2012.
7

Frøystad, Christian, Inger Tøndel, and Martin Jaatun. "Security Incident Information Exchange for Cloud Service Provisioning Chains." Cryptography 2, no. 4 (December 11, 2018): 41. http://dx.doi.org/10.3390/cryptography2040041.

Abstract:
Online services are increasingly becoming a composition of different cloud services, making incident-handling difficult, as Cloud Service Providers (CSPs) with end-user customers need information from other providers about incidents that occur at upstream CSPs to inform their users. In this paper, we argue the need for commonly agreed-upon incident information exchanges between providers to improve accountability of CSPs, and present both such a format and a prototype implementing it. The solution can handle simple incident information natively as well as embed standard representation formats for incident-sharing, such as IODEF and STIX. Preliminary interviews show a desire for such a solution. The discussion considers both technical challenges and non-technical aspects related to improving the situation for incident response in cloud-computing scenarios. Our solution holds the potential of making incident-sharing more efficient.
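As a rough illustration of carrying simple incident data natively while embedding standard payloads such as IODEF or STIX, here is a hypothetical envelope in Python. The field names are invented for illustration and are not the schema the paper defines.

```python
import base64
import json

def wrap_incident(summary, provider, embedded_format=None, embedded_doc=None):
    """Wrap a cloud incident notice, optionally embedding a standard payload.

    The envelope fields here are illustrative only; the paper defines its own
    schema, which we do not reproduce.
    """
    envelope = {"summary": summary, "provider": provider}
    if embedded_doc is not None:
        envelope["payload"] = {
            "format": embedded_format,  # e.g. "STIX-2.1" or "IODEF"
            "data": base64.b64encode(embedded_doc.encode()).decode(),
        }
    return json.dumps(envelope, indent=2)

stix_bundle = '{"type": "bundle", "objects": []}'  # minimal STIX-like document
print(wrap_incident("Storage outage at upstream CSP", "csp-a.example",
                    "STIX-2.1", stix_bundle))
```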
8

Goncharov, M. V., and K. A. Kolosov. "On interoperability of metadata within RNPLS&T’s Single Open Information Archive." Scientific and Technical Libraries, no. 10 (November 12, 2021): 45–62. http://dx.doi.org/10.33186/1027-3689-2021-10-45-62.

Abstract:
The Russian National Public Library for Science and Technology has been developing the Single Open Information Archive (SOIA) to merge all digital full-text resources created or acquired by the Library. The authors examine the issues of interoperability in exchanging metadata between the SOIA, built on library automation software, and open archives using OAI-PMH technology for metadata acquisition. Interoperability in information exchange between different automated library information systems (ALIS) is provided, for example, through the SRU/SRW protocol and metadata scheme, while metadata exchange between OA repositories is provided mainly within the Dublin Core (DC) scheme. ALIS-to-OA metadata transmission with transformation into DC results in information loss and prevents unambiguous reverse transformation. For a long time, DSpace has been the most popular software for open digital repositories. This product enables OAI-PMH metadata acquisition in DC and Qualified DC (QDC) formats, and supports the Object Reuse and Exchange (ORE) standard, which makes it possible to describe aggregated resources. ORE in DSpace allows harvesting not only metadata but also the connected files, and receiving other connected data provided by the importing source. DSpace uses a rather simple ORE format based on Atom XML that binds several files of different functionality with RDF triplets. The OAI-PMH software connector designed for the RNPLS&T SOIA can present metadata in DC, QDC, MARC21, and ORE formats, which supports interoperability in information exchange with OA repositories running DSpace software. Besides metadata, various other data types can be transmitted, e.g. document text or license information. Further development is to expand the format structure to represent associated data, in particular using RDF.
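OAI-PMH harvesting itself is driven by parameterised HTTP requests, which a short sketch can make concrete. The verb and metadataPrefix parameters and the oai_dc (Dublin Core) prefix are standard OAI-PMH; the repository endpoint below is hypothetical.

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Hypothetical repository endpoint; verb and metadataPrefix are standard
# OAI-PMH parameters, and oai_dc (Dublin Core) support is mandatory for
# every OAI-PMH repository.
BASE_URL = "https://repository.example.org/oai"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

with urlopen(f"{BASE_URL}?{urlencode(params)}") as response:
    tree = ET.parse(response)

# Dublin Core elements live in the dc namespace.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(f"{DC}title"):
    print(title.text)
```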
9

Safdar, Mutahar, Tahir Abbas Jauhar, Youngki Kim, Hanra Lee, Chiho Noh, Hyebin Kim, Inhwan Lee, Imgyu Kim, Soonjo Kwon, and Soonhung Han. "Feature-based translation of CAD models with macro-parametric approach: issues of feature mapping, persistent naming, and constraint translation." Journal of Computational Design and Engineering 7, no. 5 (April 9, 2020): 603–14. http://dx.doi.org/10.1093/jcde/qwaa043.

Abstract:
Feature-based translation of computer-aided design (CAD) models allows designers to preserve the modeling history as a series of modeling operations. Modeling operations, or features, contain the information that is required to modify CAD models to create different variants. Conventional formats, including the standard for the exchange of product model data and the initial graphics exchange specification, cannot preserve design intent, so only geometric models can be exchanged. As a result, it is not possible to modify these models after their exchange. The macro-parametric approach (MPA) is a method for exchanging feature-based CAD models among heterogeneous CAD systems. TransCAD, a CAD system for inter-CAD translation, is based on this approach. Translators based on MPA were implemented and tested for exchange between two commercial CAD systems. The issues found during the test rallies are reported and analyzed in this work. MPA can be further extended to the remaining features and constraints for exchange between commercial CAD systems.
10

Halfawy, Mahmoud R., Dana J. Vanier, and Thomas M. Froese. "Standard data models for interoperability of municipal infrastructure asset management systems." Canadian Journal of Civil Engineering 33, no. 12 (December 1, 2006): 1459–69. http://dx.doi.org/10.1139/l05-098.

Abstract:
Efficient management of infrastructure assets depends largely on the ability to efficiently share, exchange, and manage asset life-cycle information. Although software tools are used to support almost every asset management process in municipalities, data exchange is mainly performed using paper-based or neutral file formats based on ad hoc proprietary data models. Interoperability of various asset management systems is crucial to support better management of infrastructure data and to improve the information flow between various work processes. Standard data models can be used to significantly improve the availability and consistency of asset data across different software systems, to integrate data across various disciplines, and to exchange information between various stakeholders. This paper surveys a number of data standards that might be used in implementing interoperable and integrated infrastructure asset management systems. The main requirements for standard data models are outlined, and the importance of interoperability from an asset management perspective is highlighted. The role that spatial data and geographic information systems (GIS) can play in enhancing the efficiency of managing asset life-cycle data is also discussed. An ongoing effort to develop a standard data model for sewer systems is presented, and an example implementation of interoperable GIS and hydraulic modeling software is discussed. Key words: data standards, municipal infrastructure, asset management, data models, interoperability.
11

Samavi, Reza, Mariano Consens, Shahan Khatchadourian, and Thodoros Topaloglou. "Exploring PSI-MI XML Collections Using DescribeX." Journal of Integrative Bioinformatics 4, no. 3 (December 1, 2007): 123–34. http://dx.doi.org/10.1515/jib-2007-70.

Abstract:
Summary: PSI-MI has been endorsed by the protein informatics community as a standard XML data exchange format for protein-protein interaction datasets. While many public databases support the standard, there is a degree of heterogeneity in the way the proposed XML schema is interpreted and instantiated by different data providers. Analysis of schema instantiation in large collections of XML data is a challenging task that is unsupported by existing tools. In this study we use DescribeX, a novel visualization technique for (semi-)structured XML formats, to quantitatively and qualitatively analyze PSI-MI XML collections at the instance level, with the goal of gaining insights about schema usage and studying specific questions such as the adequacy of controlled vocabularies, the detection of common instance patterns, and the evolution of different data collections. Our analysis shows that DescribeX enhances understanding of the instance-level structure of PSI-MI data sources and is a useful tool for standards designers, software developers, and PSI-MI data providers.
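DescribeX's actual summaries are more sophisticated, but the flavour of instance-level structural analysis can be suggested with a few lines of Python that count distinct element paths in an XML file, a crude stand-in rather than the DescribeX algorithm.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def path_counts(xml_file):
    """Count occurrences of each root-to-element label path in one XML file.

    A crude stand-in for instance-level structural summaries: it shows which
    parts of a schema a data provider actually instantiates, and how often.
    """
    counts = Counter()

    def walk(elem, prefix):
        path = f"{prefix}/{elem.tag}"
        counts[path] += 1
        for child in elem:
            walk(child, path)

    walk(ET.parse(xml_file).getroot(), "")
    return counts

# Comparing two providers' files reveals divergent schema usage at a glance:
# for path, n in path_counts("provider_a.xml").most_common(10):
#     print(n, path)
```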
12

Beier, Sebastian, Anne Fiebig, Cyril Pommier, Isuru Liyanage, Matthias Lange, Paul J. Kersey, Stephan Weise, et al. "Recommendations for the formatting of Variant Call Format (VCF) files to make plant genotyping data FAIR." F1000Research 11 (May 19, 2022): 231. http://dx.doi.org/10.12688/f1000research.109080.2.

Abstract:
In this opinion article, we discuss the formatting of files from (plant) genotyping studies, in particular the formatting of metadata in Variant Call Format (VCF) files. The flexibility of the VCF format specification facilitates its use as a generic interchange format across domains but can lead to inconsistency between files in the presentation of metadata. To enable fully autonomous, machine-actionable data flow, generic elements need to be further specified. We strongly support the merits of the FAIR principles and see the need to facilitate them also through technical implementation specifications, which form a basis for the VCF extensions proposed here. We have learned from the existing application of VCF that the definition of relevant metadata using controlled standards and vocabulary, and the consistent use of machine-readable cross-references via resolvable identifiers, are particularly necessary, and we propose encodings for them. VCF is an established standard for the exchange and publication of genotyping data. Other data formats are also used to capture variant data (for example, the HapMap and the gVCF formats), but none currently have the reach of VCF. For the sake of simplicity, we will only discuss VCF and our recommendations for its use, but these recommendations could also be applied to gVCF. However, the part of the VCF standard relating to metadata (as opposed to the actual variant calls) defines a syntactic format but no vocabulary, unique identifier or recommended content. In practice, often only sparse descriptive metadata is included. When descriptive metadata is provided, proprietary metadata fields are frequently added that have not been agreed upon within the community, which may limit long-term and comprehensive interoperability. To address this, we propose recommendations for supplying and encoding metadata, focusing on use cases from the plant sciences. We expect there to be overlap, but also divergence, with the needs of other domains.
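To make the discussion concrete, here is a minimal VCF header written from Python. The ##fileformat and ##INFO line syntax is standard VCF; the study and reference lines carrying resolvable identifiers illustrate the article's recommendation and are not a ratified extension (the DOIs are placeholders).

```python
# Standard VCF meta-information lines use the ##key=value syntax; the study
# and reference lines carrying resolvable identifiers below illustrate the
# article's recommendation and are not part of the ratified VCF specification.
header_lines = [
    "##fileformat=VCFv4.3",
    "##reference=https://doi.org/10.5447/example-assembly",      # placeholder DOI
    '##INFO=<ID=DP,Number=1,Type=Integer,Description="Total read depth">',
    "##study=https://doi.org/10.5447/example-genotyping-study",  # placeholder DOI
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
]
record = "1\t12345\t.\tA\tG\t60\tPASS\tDP=23"

with open("example.vcf", "w") as vcf:
    vcf.write("\n".join(header_lines + [record]) + "\n")
```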
13

Woollett, Benjamin, Daniel Klose, Richard Cammack, Robert W. Janes, and B. A. Wallace. "JCAMP-DX for circular dichroism spectra and metadata (IUPAC Recommendations 2012)." Pure and Applied Chemistry 84, no. 10 (October 3, 2012): 2171–82. http://dx.doi.org/10.1351/pac-rec-12-02-03.

Abstract:
Circular dichroism (CD) spectroscopy is a widely used technique for the characterisation of proteins. A number of CD instruments are currently on the market, and there are more than a dozen synchrotron radiation circular dichroism (SRCD) beamlines in operation worldwide. All produce different output formats and contents. In order for users of CD and SRCD data to be able simply to compare and contrast data and the associated recorded or unrecorded metadata, it is essential to have a common data format. For this reason, the JCAMP-DX-CD format for CD spectroscopy has been developed, based on extensive consultations with users and senior representatives of all the instrument manufacturers and beamlines, and under the auspices of IUPAC, based on the Joint Committee on Atomic and Physical Data Exchange protocols. The availability of a common format is also important for deposition to, and access from, the Protein Circular Dichroism Data Bank, the public repository for CD and SRCD data and metadata. The JCAMP-DX-CD format can be read by standard JCAMP programs such as JSpecView. We have also created a series of parsers, available at the DichroJCAMP web site (http://valispec.cryst.bbk.ac.uk/formatConverter/dichroJCAMPDX-CD.html), which will enable the conversion between instrument and beamline formats and the JCAMP-DX-CD format.
14

Shen, Zhen Ya, Jun Xiao, Ying Wang, and Hong Jian Sui. "Study and Construction of the Rock Engineering Data Exchange and Sharing Framework." Advanced Materials Research 765-767 (September 2013): 1446–50. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.1446.

Abstract:
Data exchange and sharing is a key issue in the current informatization of rock engineering, and its advancement is constrained by various factors, particularly the distributed heterogeneous environment and the lack of uniform data exchange formats. To address this need, this paper presents the Rock Engineering data Exchange and sharing Framework (REEF), which adopts a service-oriented architecture (SOA) and uses the rock engineering markup language (REML) as the standard exchange language. The REEF framework covers the main functions of data exchange and sharing in its domain, can serve as a universal solution that reduces the difficulty of industrial integration in rock engineering, and provides support for standardization and digitization in the field.
15

Hamill, G. P., R. Jenkins, and W. N. Schreiner. "Standard Database Format for the Dissemination and Storage of Diffraction Data - Task Group Progress Report on JCAMP-DX." Advances in X-ray Analysis 33 (1989): 417–22. http://dx.doi.org/10.1154/s0376030800019844.

Abstract:
In planning for PDF-3, the International Centre for Diffraction Data's full-pattern database of raw diffraction data, it is evident that a standard format for the storage and exchange of diffraction data is necessary. An evaluation of the JCAMP-DX protocol by a task group of the International Centre for Diffraction Data has resulted in a set of format codes specific to X-ray diffraction. The proposed structure of the data is divided into four parts: the minimal component set required by the JCAMP-DX definition (name, data, owner, sample identification, data type, etc.), a minimum item set required to define the X-ray diffraction data, an open selection of requested but not required information on the sample, its preparation and the instrument, and finally the data itself in one of several specified formats. All information stored in JCAMP-DX format is in ASCII characters. Therefore, these data are printable, easily read by the user and compatible with almost any computer or media storage device. Codes defining the information are primarily in shortened, but readable, English. The task group is completing the work on this project and will be presenting its proposals to JCAMP.
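The labeled-data-record syntax described here (ASCII ##LABEL=value lines) is easy to demonstrate. In the sketch below, the core labels come from the JCAMP-DX definition, while the diffraction-specific values stand in for the task group's proposed codes, which we do not reproduce.

```python
# JCAMP-DX files are plain ASCII, organised as labeled data records
# (##LABEL=value). The core labels follow the JCAMP-DX definition; the
# diffraction-specific values are placeholders, not the task group's codes.
record = """##TITLE=Example diffraction pattern
##JCAMP-DX=4.24
##DATA TYPE=X-RAY DIFFRACTION
##ORIGIN=Example laboratory
##OWNER=Public domain
##XUNITS=DEGREES TWO THETA
##YUNITS=COUNTS
##XYDATA=(X++(Y..Y))
10.0 102 98 110 95
##END=
"""

def read_ldrs(text):
    """Parse labeled data records into a dict (the data block itself would be
    handled separately)."""
    ldrs = {}
    for line in text.splitlines():
        if line.startswith("##") and "=" in line:
            label, _, value = line[2:].partition("=")
            ldrs[label.strip()] = value.strip()
    return ldrs

print(read_ldrs(record)["DATA TYPE"])  # -> X-RAY DIFFRACTION
```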
16

Makisha, Elena. "RuleML-based mechanism of building information models verification." E3S Web of Conferences 132 (2019): 01014. http://dx.doi.org/10.1051/e3sconf/201913201014.

Abstract:
The issues of automated verification of design results based on information models have been raised recently by scientists and specialists from different countries, including the Russian Federation. Interoperability of expert review based on information models of objects should be ensured by using open formats for the presentation and exchange of data. Each software application that performs information modeling functions, as a rule, stores the results of its work in files of a proprietary format. IFC allows exchanging information about the geometry, attributes and relationships between the elements of information models of capital construction works; that is, it provides for the transfer of all types of information stored in an information model. The MVD (Model View Definition) format is used to specify the subset of the data that is needed to solve a particular problem. RuleML is a family of web rule modeling languages designed for the uniform presentation and exchange of the main types of web rules and logic between different platforms. BCF (BIM Collaboration Format) is an open standard maintained and distributed by buildingSMART that allows various information modeling applications to share information about issues related to IFC models that were previously shared by project participants.
17

Кузин, С.П. "The first results of DORIS RINEX data processing at the INASAN Analysis Center." Научные труды Института астрономии РАН, no. 4 (December 16, 2022): 237–40. http://dx.doi.org/10.51194/inasan.2022.7.4.003.

Abstract:
The paper presents the first results of processing DORIS measurements in the RINEX (Receiver Independent Exchange Format) format at the INASAN Analysis Center; these constitute a new type of DORIS system measurement. Residual radial-velocity errors from processing RINEX-format data are compared with those from the preceding doris2.2 format. For the Jason-2 satellite over the interval 2008.5-2019.7, the RMS errors for RINEX-format and doris2.2-format data are 0.427 mm/s and 0.404 mm/s, respectively. For the Cryosat-2 satellite over the interval 2021.5-2021.8, the RMS errors for the RINEX and doris2.2 formats are 0.508 mm/s and 0.501 mm/s, respectively. The obtained RMS errors are compared with those reported by other DORIS data analysis centers, and the comparison confirms the correctness of the DORIS RINEX data processing methodology and the models used.
18

Barbosa, Margarida Jerónimo, Pieter Pauwels, Victor Ferreira, and Luís Mateus. "Towards increased BIM usage for existing building interventions." Structural Survey 34, no. 2 (May 9, 2016): 168–90. http://dx.doi.org/10.1108/ss-01-2015-0002.

Abstract:
Purpose – Building information modeling (BIM) is most often used for the construction of new buildings. By using BIM in such projects, collaboration among stakeholders in an architecture, engineering and construction project is improved. To further improve collaboration, there is a move toward the production and usage of BIM standards in various countries. These are typically national documents, including guides, protocols, and mandatory regulations, that introduce guidelines about what information should be exchanged at what time, between which partners, and in what formats. If a nation or a construction team agrees on these guidelines, improved collaboration can come about on top of the collaboration benefits induced by the mere usage of BIM. This scenario might also be targeted for interventions in existing buildings. The paper aims to discuss these issues. Design/methodology/approach – In this paper, the authors investigate the general content and usage of existing BIM standards for new constructions, describing specifications for BIM deliverable documents, modeling, and collaboration procedures. The authors suggest to what extent the content of the BIM standards can also be used for interventions in existing buildings. These suggestions rely heavily on literature study, supported by on-site use case experiences. Findings – From this research, the authors conclude that the existing standards give a solid basis for BIM collaboration in existing building interventions, but that they need to be extended in order to be of better use in any intervention project in an existing building. This extension should happen at the data modeling level (other kinds of data formats need to be considered, coming from terrestrial laser scanning and automatic digital photogrammetry tools); at the data exchange level (exchange requirements should include explicit statements about modeling tolerances and levels of (un)certainty); and at the process modeling level (business process models should include information exchange processes from the very start of the building survey (BIM→facility management→BIM or regular audit)). Originality/value – BIM environments are not often used to document existing buildings or interventions in existing buildings. The authors propose to improve the situation by using BIM standards and/or guidelines, and they give an initial overview of components that should be included in such a standard and/or guideline.
19

Nägele, Daniel, and Patricia Vobl. "Ontology Modelling and Standardized Information Exchange with Content Delivery Applications in Technical Communication." SHS Web of Conferences 102 (2021): 02005. http://dx.doi.org/10.1051/shsconf/202110202005.

Abstract:
Ontologies are a technology recently adopted in technical communication (TC) to model information as a multidimensional net. They extend the taxonomy-based modelling of metadata in TC: any kind of relation between multiple classes and instances can be established. These ontologies can take the form of semantic correlation rules (SCR), which represent the connections between the metadata of objects. SCR are used in connection with component content management systems (CCMS), semantic modelling systems (SMS) and content delivery portals (CDP) to deliver the appropriate amount of content to the end user in a more precise manner. In general, ontology tools, CCMS and CDP are not based on the same ecosystem and therefore do not always work together effortlessly. One solution to this problem is exchange formats like the intelligent information Request and Delivery Standard (iiRDS), which enable a standardized information exchange between supported systems. Another solution is compound information systems (CIS) like ONTOLIS, which combine a CCMS, CDP and SMS in one. This paper investigates the effect of SCR in the CDP of a CIS like ONTOLIS and evaluates the use of exchange formats like iiRDS.
20

Baidai, Yannick, Jon Uranga, Maitane Grande, Hilario Murua, Josu Santiago, Iñaki Quincoces, Guillermo Boyra, Blanca Orue, Laurent Floch, and Manuela Capello. "A standard processing framework for the location data of satellite-linked buoys on drifting fish aggregating devices." Aquatic Living Resources 35 (2022): 13. http://dx.doi.org/10.1051/alr/2022013.

Abstract:
Satellite-linked buoys used by tropical tuna purse-seine vessels on drifting fish aggregating devices (DFADs) provide a continuous stream of information on both the ocean characteristics and the presence and size of fish aggregations associated with DFADs, enabling the study of pelagic communities. This unprecedented amount of data is characterized by ocean-scale coverage with high spatial and temporal resolution, but also by different data formats and specifications depending on buoy model and brand, as well as on the type of data exchange agreements in play. Their use for scientific and management purposes is therefore critically dependent on the ability of algorithms to process heterogeneous data formats and resolutions. This paper proposes a unified set of algorithms for processing the buoy location data used by the two major purse-seine fleets operating in the Atlantic and Indian oceans. Three main issues that need to be addressed prior to the exploitation of the data are identified (structural errors, and data records on land and on board vessels), and five specific filtering criteria are proposed to improve the data cleaning process and, hence, data quality. Different filtering procedures are also compared, and their advantages and limitations are discussed.
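As a hedged illustration of such a cleaning pipeline, the Python below filters one buoy track for structural errors, positions on land, and on-board-vessel segments; the drift-speed threshold and the overall logic are invented for illustration and are not the paper's five criteria.

```python
from dataclasses import dataclass

@dataclass
class Position:
    buoy_id: str
    timestamp: float  # seconds since epoch
    lat: float
    lon: float

def is_structurally_valid(p):
    """Reject malformed records (the paper's 'structural errors')."""
    return -90 <= p.lat <= 90 and -180 <= p.lon <= 180

def speed_knots(p1, p2):
    """Very rough flat-earth speed estimate; fine for a coarse filter."""
    hours = (p2.timestamp - p1.timestamp) / 3600 or 1e-9
    deg = ((p2.lat - p1.lat) ** 2 + (p2.lon - p1.lon) ** 2) ** 0.5
    return deg * 60 / hours  # 1 degree is roughly 60 nautical miles

def clean_track(track, on_land, max_drift_knots=3.0):
    """Filter one buoy's track. The speed threshold separating drifting buoys
    from buoys on board a steaming vessel is invented for illustration."""
    track = [p for p in track if is_structurally_valid(p) and not on_land(p)]
    keep = track[:1]
    for p in track[1:]:
        if speed_knots(keep[-1], p) <= max_drift_knots:
            keep.append(p)  # plausible drift; faster jumps suggest a vessel
    return keep
```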
21

Zhang, Hong, Hai Feng Huang, and Gui Ying Zhu. "Key Technologies Research of the Interoperable Model Verification Based on CIM/E." Applied Mechanics and Materials 411-414 (September 2013): 1826–30. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1826.

Abstract:
The trend toward integrated operation of the power grid in China makes the demand for information exchange between EMS (Energy Management System) installations at all levels increasingly urgent, and CIM is the most important base information model involved. In order to resolve the differences in model information between different EMS, this paper analyzes the current standardization of EMS model information from three aspects: the standard CIM model, the model file format, and the model file content. Based on the IEC 61970 interoperability standard, the key technologies of each part of a process-oriented information model calibration and validation process are discussed in detail. Results from practical application show that the calibration method is reasonable and correct.
22

Fischer, Bryan R. "A Step Up." Mechanical Engineering 137, no. 03 (March 1, 2015): 42–45. http://dx.doi.org/10.1115/1.2015-mar-3.

Abstract:
This article presents work done by International Organization for Standardization in providing useful information. The International Organization for Standardization has released a new standard for the exchange of product-model data, ISO 10303-242, which is a new application protocol standard in the STEP family of standards. The standard significantly improves STEP’s capabilities, especially in providing useful information for an enterprise. STEP is commonly used in industry to share 3-D CAD model geometry with organizations using different CAD software with different proprietary data formats. The semantic information of the new standard is computer-interpretable; it can be used in semi-automated and automated systems and is intended to be used by software designed for tolerance analysis, inspection, and manufacturing. The CAx Implementor Forum is an essential part of STEP development. The CAx-IF works in parallel with the AP242 team to develop recommended practices for the implementation of AP242 in CAD and data translation software, to test the data models and best practices, and to provide critical feedback to the AP242 team.
23

Deutsch, Eric W., Juan Pablo Albar, Pierre-Alain Binz, Martin Eisenacher, Andrew R. Jones, Gerhard Mayer, Gilbert S. Omenn, Sandra Orchard, Juan Antonio Vizcaíno, and Henning Hermjakob. "Development of data representation standards by the human proteome organization proteomics standards initiative." Journal of the American Medical Informatics Association 22, no. 3 (February 28, 2015): 495–506. http://dx.doi.org/10.1093/jamia/ocv001.

Abstract:
Objective: To describe the goals of the Proteomics Standards Initiative (PSI) of the Human Proteome Organization, the methods the PSI has employed to create data standards, the resulting output of the PSI, lessons learned from the PSI's evolution, and future directions and synergies for the group. Materials and Methods: The PSI has 5 categories of deliverables that have guided the group. These are minimum information guidelines, data formats, controlled vocabularies, resources and software tools, and dissemination activities. These deliverables are produced via the leadership and working group organization of the initiative, driven by frequent workshops and ongoing communication within the working groups. Official standards are subjected to a rigorous document process that includes several levels of peer review prior to release. Results: We have produced and published minimum information guidelines describing what information should be provided when making data public, either via public repositories or other means. The PSI has produced a series of standard formats covering mass spectrometer input, mass spectrometer output, results of informatics analysis (both qualitative and quantitative analyses), reports of molecular interaction data, and gel electrophoresis analyses. We have produced controlled vocabularies that ensure that concepts are uniformly annotated in the formats, and engaged in extensive software development and dissemination efforts so that the standards can efficiently be used by the community. Conclusion: In its first dozen years of operation, the PSI has produced many standards that have accelerated the field of proteomics by facilitating data exchange and deposition to data repositories. We look to the future to continue developing standards for new proteomics technologies and workflows and mechanisms for integration with other omics data types. Our products facilitate the translation of genomics and proteomics findings to clinical and biological phenotypes. The PSI website can be accessed at http://www.psidev.info.
24

Schönwald, Julian Ralf, Christian Forsteneichner, David Vahrenhorst, and Kristin Paetzold. "Improvement of Collaboration between Testing and Simulation Departments on the Example of a Motorcycle Manufacturer." Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 149–58. http://dx.doi.org/10.1017/dsi.2019.18.

Abstract:
In testing and simulation departments in product development (PD), data types, data structures and data storage are often very different. Exchange of data and information is normally not automated and often not supported by management systems, which can lead to loss of time and information. A literature study, in combination with 20 expert interviews and an analysis of documents, data storage structures and IT systems in the PD department of a motorcycle manufacturer, was performed. Test and simulation processes were classified and standardized, documentation formats were analyzed, and standards in Test Data Management (TDM) and Simulation Data Management (SDM) as well as verification and validation processes were compared. IT support in SDM is better than in TDM. An integration of TDM and SDM could lead to improved collaboration between testing and simulation departments. Options for this integration could be specific ontologies, object-oriented interfaces, a higher-level intermediate application, the use of a common standard, or the integration of one standard into another.
25

Bartels, N., M. Eilers, C. Pütz, and A. Meins-Becker. "IFC-based linking of the risk management process using a building data model." IOP Conference Series: Earth and Environmental Science 1101, no. 9 (November 1, 2022): 092001. http://dx.doi.org/10.1088/1755-1315/1101/9/092001.

Abstract:
A vital element in working with BIM is standardised exchange formats that enable the exchange of information from digital building models between different software solutions and project participants. In this context, the Industry Foundation Classes (IFC) defined in DIN EN ISO 16739 represent a central standard for implementing the open exchange of information. Although approaches for integrating risk management are already available in IFC, they do not sufficiently reflect the needs of the construction industry. In order to increase project quality through risk management and the universal application of the Building Information Modelling (BIM) method, it is essential to map the generally valid information on the risk management process in IFC. The following article thus presents starting points for the further integration of risk management in IFC. The aim is to link all relevant risk information in a digital building model through an analysis and the development of an approach.
26

Estrela, Vania V. "DICOM’s Standardization in Histo-Pathology." Medical Technologies Journal 4, no. 3 (December 7, 2020): 578–79. http://dx.doi.org/10.26415/2572-004x-vol4iss3p578-579.

Abstract:
Background: The Digital Imaging and Communications in Medicine (DICOM) standard helps to represent, store, and exchange healthcare images together with their associated data. DICOM evolves over time and is continuously adapted to match the rigors of new clinical demands and technologies. An uphill battle in this regard is to reconcile new software programs with legacy systems. Methods: This work discusses the essential aspects of the standard and assesses its capabilities and limitations in a multisite, multivendor healthcare system aiming at Whole Slide Imaging (WSI) procedures. Selected relevant DICOM attributes help to develop and organize WSI applications that extract and handle image data, integrated patient records, and metadata. DICOM must also interface with proprietary file formats and clinical metadata from different laboratory information systems. Standard DICOM validation tools that measure the encoding, storing, querying and retrieval of medical data can verify the generated DICOM files over the web. Results: This work investigates the current regulations and recommendations for the use of DICOM with WSI data. They rely mostly on the EU guidelines, which help envision future needs and extensions based on new examination modalities such as the concurrent use of WSI with in-vitro imaging and 3D WSI. Conclusion: A DICOM file format and communication protocol for pathology has been defined. However, adoption by vendors and in the field is pending. DICOM allows efficient access and prompt availability of WSI data as well as associated metadata. By leveraging a wealth of existing infrastructure solutions, the use of DICOM facilitates enterprise integration and data exchange for digital pathology. In the future, the DICOM standard will have to address several issues arising from the way samples are gathered and from new imaging technologies.
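For readers wanting to experiment, attributes of a DICOM file can be inspected with the pydicom library, as sketched below; which attributes a WSI validation workflow must check is an assumption here, so consult the DICOM whole-slide-imaging documentation for the authoritative list.

```python
import pydicom

# Read a DICOM file and inspect a few general attributes. Which attributes a
# WSI validation workflow must check is an assumption in this sketch; the
# DICOM whole-slide-imaging supplement gives the authoritative list.
ds = pydicom.dcmread("slide.dcm")

for keyword in ("SOPClassUID", "Modality", "PatientID", "StudyInstanceUID"):
    value = getattr(ds, keyword, None)
    status = "present" if value is not None else "MISSING"
    print(f"{keyword:20s} {status}: {value}")
```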
27

Artus, Mathias, Mohamed Said Helmy Alabassy, and Christian Koch. "A BIM Based Framework for Damage Segmentation, Modeling, and Visualization Using IFC." Applied Sciences 12, no. 6 (March 8, 2022): 2772. http://dx.doi.org/10.3390/app12062772.

Abstract:
Paper-based data acquisition and manual transfer between incompatible software or data formats during bridge inspections, as currently practiced, are time-consuming, error-prone, and cumbersome, and lead to information loss. A fully digitized workflow using open data formats would reduce data loss, effort, and the costs of future inspections. On the one hand, existing studies have proposed methods to automate data acquisition and visualization for inspections; these studies lack an open standard to make the gathered data available for other processes. On the other hand, several studies discuss data structures for exchanging damage information among different stakeholders, but do not cover the process of automatic data acquisition and transfer. This study focuses on a framework that incorporates automatic damage data acquisition and transfer, and a damage information model for data exchange. This enables inspectors to use damage data for subsequent analyses and simulations. The proposed framework shows the potential for a comprehensive damage information model and related (semi-)automatic data acquisition and processing.
28

Amović, Mladen, Miro Govedarica, Aleksandra Radulović, and Ivana Janković. "Big Data in Smart City: Management Challenges." Applied Sciences 11, no. 10 (May 17, 2021): 4557. http://dx.doi.org/10.3390/app11104557.

Abstract:
Smart cities use digital technologies such as cloud computing, the Internet of Things, and open data in order to overcome the limitations of traditional representation and exchange of geospatial data. This concept ensures a significant increase in the use of data to establish new services that contribute to better sustainable development and to the monitoring of all phenomena that occur in urban areas. The use of modern geoinformation technologies, such as sensors for collecting different geospatial and related data, requires adequate storage options for further data analysis. In this paper, we propose the biG dAta sMart cIty maNagEment SyStem (GAMINESS), which is based on the Apache Spark big data framework. The model of the GAMINESS management system follows the principles of big data modeling, which differ greatly from those of standard databases. This approach provides the ability to store and manage huge amounts of structured, semi-structured, and unstructured data in real time. System performance is raised to a higher level by process parallelization, described through the five V principles of the big data paradigm. Existing solutions based on the five V principles focus only on data visualization, not the data themselves, and are often limited by different storage mechanisms and by the ability to perform complex analyses on large amounts of data with the expected performance. The GAMINESS management system overcomes these disadvantages by converting smart city data to a big data structure without limitations related to data formats or the standards used. The proposed model contains two components: a geospatial component and a sensor component, based on the CityGML and SensorThings standards, respectively. The developed model has the ability to take in data regardless of the standard or data format used, mapping them into the proposed Apache Spark data framework schema. The verification of the proposed model is done within a case study for part of the city of Novi Sad.
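Since the system is built on Apache Spark, a minimal PySpark sketch shows the style of ingestion involved: semi-structured sensor records are read with an inferred schema and aggregated. The file path and column names are hypothetical, and this is not the GAMINESS code itself.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("smart-city-ingest").getOrCreate()

# Spark infers a schema from semi-structured JSON, which is one way a system
# like GAMINESS can absorb sensor feeds without fixing a format up front.
# The file path and column names are hypothetical.
readings = spark.read.json("sensors/*.json")

hourly = (
    readings
    .withColumn("hour", F.date_trunc("hour", F.col("observed_at")))
    .groupBy("sensor_id", "hour")
    .agg(F.avg("value").alias("avg_value"))
)
hourly.show()
```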
29

Beier, Sebastian, Anne Fiebig, Cyril Pommier, Isuru Liyanage, Matthias Lange, Paul J. Kersey, Stephan Weise, et al. "Recommendations for the formatting of Variant Call Format (VCF) files to make plant genotyping data FAIR." F1000Research 11 (February 24, 2022): 231. http://dx.doi.org/10.12688/f1000research.109080.1.

Abstract:
In this opinion article, we discuss the formatting of files from (plant) genotyping studies, in particular the formatting of (meta-) data in Variant Call Format (VCF) files. The flexibility of the VCF format specification facilitates its use as a generic interchange format across domains but can lead to inconsistency between files in the presentation of metadata. To enable fully autonomous machine actionable data flow, generic elements need to be further specified. We strongly support the merits of the FAIR principles and see the need to facilitate them also through technical implementation specifications. VCF files are an established standard for the exchange and publication of genotyping data. Other data formats are also used to capture variant call data (for example, the HapMap format and the gVCF format), but none currently have the reach of VCF. In VCF, only the sites of variation are described, whereas in gVCF, all positions are listed, and confidence values are also provided. For the sake of simplicity, we will only discuss VCF and our recommendations for its use. However, the part of the VCF standard relating to metadata (as opposed to the actual variant calls) defines a syntactic format but no vocabulary, unique identifier or recommended content. In practice, often only sparse (if any) descriptive metadata is included. When descriptive metadata is provided, proprietary metadata fields are frequently added that have not been agreed upon within the community which may limit long-term and comprehensive interoperability. To address this, we propose recommendations for supplying and encoding metadata, focusing on use cases from the plant sciences. We expect there to be overlap, but also divergence, with the needs of other domains.
30

Beamer, Ashley, and Mark Gillick. "ScotlandsPlaces XML: bespoke XML or XML mapping?" Program 44, no. 1 (February 16, 2010): 13–27. http://dx.doi.org/10.1108/00330331011019654.

Abstract:
Purpose – The purpose of this paper is to investigate web services (in the form of parameterised URLs), specifically in the context of the ScotlandsPlaces project. This involves cross‐domain querying, data retrieval and display via the development of a bespoke XML standard rather than existing XML formats and mapping between them. Design/methodology/approach – In looking at the different heritage domain datasets as well as the metadata formats used for storage and data exchange, the ScotlandsPlaces XML format is revealed as the most appropriate for this type of project. The nature of the project itself and the need for dynamic web services are in turn explored. Findings – It was found that, due to the nature of the project, the combination of a bespoke ScotlandsPlaces XML format and a set of matching web services was the best choice in terms of the retrieval of different domain datasets, as well as the desired extensible nature of the project. Research limitations/implications – It may have proven useful to investigate the datasets of more ScotlandsPlaces partners, but as yet only a limited number of first phase partners' datasets could be studied, as the second phase of the project has yet to begin. Originality/value – Rather than an information portal, the ScotlandsPlaces web site aggregates disparate types of record, whether site records, archival or otherwise, into a single web site and makes these records discoverable via geographical searching. Aggregated data are accessed through web service queries (using a bespoke XML format developed specifically for the project for data return) and allow partner organisations to add their datasets regardless of the organisational domain. The service also allows spatially referenced records to be plotted on to a geo‐browser via a KML file, which in turn lets users evaluate the results based on geographical location.
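The parameterised-URL style of web service described here is easy to picture in a few lines of Python; the endpoint, parameter names, and XML element names below are all hypothetical stand-ins, not the real ScotlandsPlaces API.

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Hypothetical endpoint and parameter names, illustrating the parameterised-URL
# style of web service the paper describes; the real ScotlandsPlaces API differs.
BASE_URL = "https://api.example.org/scotlandsplaces/search"
params = {"placename": "Leith", "recordType": "archival", "format": "xml"}

with urlopen(f"{BASE_URL}?{urlencode(params)}") as response:
    root = ET.parse(response).getroot()

for record in root.iter("record"):  # element names assumed for illustration
    print(record.findtext("title"),
          record.findtext("latitude"),
          record.findtext("longitude"))
```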
31

Pastusiak, Tadeusz. "Nautical electronic maps of S-411 standard and their suitability in navigation for assessment of ice cover condition of the Arctic Ocean." Polish Cartographical Review 48, no. 1 (March 1, 2016): 17–28. http://dx.doi.org/10.1515/pcr-2016-0002.

Abstract:
Research on the ice cover of waterways, rivers, lakes, seas and oceans by satellite remote sensing methods began at the end of the twentieth century. There were many data sources in diverse file formats, and a comparative assessment of their usefulness had not yet been carried out. In the research process, a synthetic indicator of the quality of data sources was developed, combining map resolution, publication format, time delay and functionality for the user. It reflects the usefulness of maps well and allows them to be compared. Qualitative differences in map content have relatively little impact on the overall assessment of the data sources. Map resolution is generally acceptable. Timeliness has the greatest impact on the usefulness of map content for planning a vessel's current voyage in ice. The highest quality among all studied sources is offered by the regional maps in GIF format issued by the NWS/NOAA, the general maps of the Arctic Ocean in NetCDF format issued by the OSI SAF, and the general maps of the Arctic Ocean in GRIB-2 format issued by the NCEP/NOAA. Among them are maps containing information on the quality of the presented parameter. The leaders among maps containing all three basic characteristics of ice cover (ice concentration, ice thickness and ice floe size) are vector maps in GML format, the new standard of electronic vector maps for the navigation of ships in ice. Publishing ice cover maps in the standard S-411 electronic map format for navigation of vessels in ice, adopted by the International Hydrographic Organization, is advisable where the launch of commercial navigation on lagoons, rivers and canals is planned. Wide availability and exchange of information on the state of ice cover on rivers, lakes, estuaries and bays that are used exclusively for water sports, ice sports and ice fishing is possible using handheld mobile phones, smartphones and tablets.
32

Lockett, Helen, Peter Bartholomew, and Julian Gallop. "The Management of Product Data in an Integrated Aircraft Analysis Environment." Journal of Computing and Information Science in Engineering 4, no. 4 (December 1, 2004): 359–64. http://dx.doi.org/10.1115/1.1806448.

Abstract:
Today's industry spends billions of dollars each year on generating engineering data as part of the product development process. This data is stored in the diverse proprietary formats of software vendors. These change rapidly and, particularly in aerospace projects, the lifecycle of the application software is so short that the data will be inaccessible over much of the life of the associated product. This paper describes the work conducted in an EU collaborative research program to develop a standards-based approach to managing product data. The project used the EXPRESS language, as defined by Part 11 of the ISO 10303 STEP standard, for data modeling [1]. A principal focus of the project was the capture and exchange of product and analysis data in a multi-disciplinary optimization environment. Two separate approaches were investigated: first, using a specialized EXPRESS schema that was developed specifically to meet the project requirements and, second, a generic approach based upon the STEP PDM Schema.
APA, Harvard, Vancouver, ISO, and other styles
33

Beil, Christof, Roland Ruhdorfer, Theresa Coduro, and Thomas H. Kolbe. "Detailed Streetspace Modelling for Multiple Applications: Discussions on the Proposed CityGML 3.0 Transportation Model." ISPRS International Journal of Geo-Information 9, no. 10 (October 13, 2020): 603. http://dx.doi.org/10.3390/ijgi9100603.

Full text
Abstract:
In the context of smart cities and digital twins, three-dimensional semantic city models are increasingly used for the analysis of large urban areas. While the representation of buildings, terrain, and vegetation has become standard for most city models, detailed spatio-semantic representations of streetspace have played a minor role so far. This is now changing, (1) because of data availability, and (2) because recent and emerging applications require detailed data about the streetspace. The upcoming version 3.0 of the international standard CityGML provides a substantially updated data model for the transportation infrastructure, including the representation of the streetspace. However, a number of other standards and data formats already deal with the representation and exchange of streetspace data. Thus, based on an extensive literature review of potential applications as well as discussions and collaborations with relevant stakeholders, seven key modelling aspects of detailed streetspace models are identified. This allows a structured discussion of the representational capabilities of the proposed CityGML 3.0 Transportation Model with respect to these aspects and in comparison with the other standards. Subsequently, it is shown that CityGML 3.0 meets most of these aspects and that streetspace models can be derived from various data sources and for different cities. Models generated compliant with the CityGML standard are immediately usable for a number of applications. This is demonstrated for several applications, such as land use management, solar potential analyses, and traffic and pedestrian simulations.
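To give a feel for how such streetspace content can be consumed, here is a minimal Python sketch that counts transportation features in a CityGML file. The tag list and the file name are illustrative assumptions, and the namespace-agnostic matching is a deliberate simplification so the same loop works on CityGML 2.0 or 3.0 datasets.

```python
# Minimal sketch: enumerate transportation features in a CityGML file by
# local element name, ignoring namespaces. "streetspace.gml" is a placeholder.
import xml.etree.ElementTree as ET
from collections import Counter

TRANSPORT_TAGS = {"Road", "Track", "Square", "TrafficArea", "AuxiliaryTrafficArea",
                  "TrafficSpace", "AuxiliaryTrafficSpace"}  # 3.0 adds *Space classes

counts = Counter()
for _, elem in ET.iterparse("streetspace.gml", events=("end",)):
    local = elem.tag.rsplit("}", 1)[-1]   # strip the XML namespace prefix
    if local in TRANSPORT_TAGS:
        counts[local] += 1
    elem.clear()                          # keep memory flat on large city models

print(dict(counts))
```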
APA, Harvard, Vancouver, ISO, and other styles
34

Yamada, Issaku, Matthew P. Campbell, Nathan Edwards, Leyla Jael Castro, Frederique Lisacek, Julien Mariethoz, Tamiko Ono, Rene Ranzinger, Daisuke Shinmachi, and Kiyoko F. Aoki-Kinoshita. "The glycoconjugate ontology (GlycoCoO) for standardizing the annotation of glycoconjugate data and its application." Glycobiology 31, no. 7 (February 23, 2021): 741–50. http://dx.doi.org/10.1093/glycob/cwab013.

Full text
Abstract:
Abstract Recent years have seen great advances in the development of glycoproteomics protocols and methods, resulting in a sustained increase in the reporting of proteins, their attached glycans, and glycosylation sites. However, only very few of these reports find their way into databases or data repositories. One of the major reasons is the absence of a digital standard for representing glycoproteins and their challenging glycan annotations. Depending on the experimental method, such a standard must be able to represent glycans as complete structures or as compositions, store not just single glycans but also glycoforms at a specific glycosylation site, deal with partially missing site information if no site mapping was performed, and store abundances or ratios of glycans within a glycoform at a specific site. To support the above, we have developed the GlycoConjugate Ontology (GlycoCoO) as a standard semantic framework to describe and represent glycoproteomics data. GlycoCoO can be used to represent glycoproteomics data in triplestores and can serve as a basis for data exchange formats. The ontology, database providers, and supporting documentation are available online (https://github.com/glycoinfo/GlycoCoO).
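As a rough illustration of the triplestore representation the ontology targets, the following Python sketch builds RDF triples for a hypothetical glycosylation-site annotation with rdflib. The namespace and all class/property names under `GCO` are placeholders, not actual GlycoCoO terms.

```python
# Sketch: one glycosylation-site annotation as RDF triples via rdflib.
# The names under `GCO` are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, RDF

GCO = Namespace("http://example.org/glycocoo#")    # placeholder namespace
UP = Namespace("http://purl.uniprot.org/uniprot/")

g = Graph()
site = GCO["P01857_site297"]
g.add((site, RDF.type, GCO.GlycosylationSite))         # hypothetical class
g.add((site, GCO.onProtein, UP["P01857"]))             # hypothetical property
g.add((site, GCO.sitePosition, Literal(297)))
g.add((site, GCO.hasGlycanComposition, Literal("HexNAc4Hex5NeuAc2")))

print(g.serialize(format="turtle"))
```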
APA, Harvard, Vancouver, ISO, and other styles
35

Greulich, Leonard, Stefan Hegselmann, and Martin Dugas. "An Open-Source, Standard-Compliant, and Mobile Electronic Data Capture System for Medical Research (OpenEDC): Design and Evaluation Study." JMIR Medical Informatics 9, no. 11 (November 19, 2021): e29176. http://dx.doi.org/10.2196/29176.

Full text
Abstract:
Background Medical research and machine learning for health care depend on high-quality data. Electronic data capture (EDC) systems have been widely adopted for metadata-driven digital data collection. However, many systems use proprietary and incompatible formats that inhibit clinical data exchange and metadata reuse. In addition, the configuration and financial requirements of typical EDC systems frequently prevent small-scale studies from benefiting from their inherent advantages. Objective The aim of this study is to develop and publish an open-source EDC system that addresses these issues. We aim to design a system that is applicable to a wide range of research projects. Methods We conducted a literature-based requirements analysis to identify the academic and regulatory demands for digital data collection. After designing and implementing OpenEDC, we performed a usability evaluation to obtain feedback from users. Results We identified 20 frequently stated requirements for EDC. According to the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 25010 norm, we categorized the requirements into functional suitability, availability, compatibility, usability, and security. We developed OpenEDC based on the regulatory-compliant Clinical Data Interchange Standards Consortium Operational Data Model (CDISC ODM) standard. Mobile device support enables the collection of patient-reported outcomes. OpenEDC is publicly available and released under the MIT open-source license. Conclusions Adopting an established standard without modifications supports metadata reuse and clinical data exchange, but it limits item layouts. OpenEDC is a stand-alone web app that can be used without setup or configuration. This should foster compatibility between medical research and open science. OpenEDC is targeted at observational and translational research studies by clinicians.
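To illustrate what a metadata-driven CDISC ODM study definition looks like to a consumer, here is a minimal Python sketch listing the item definitions (data fields) in an ODM 1.3 file. The file name is a placeholder; the namespace is the standard ODM 1.3 one.

```python
# Minimal sketch: list the ItemDef elements of a CDISC ODM 1.3 metadata file.
# "study.xml" is a placeholder path.
import xml.etree.ElementTree as ET

ODM = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}  # ODM 1.3 namespace

root = ET.parse("study.xml").getroot()
for item in root.iterfind(".//odm:ItemDef", ODM):
    print(item.get("OID"), item.get("Name"), item.get("DataType"))
```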
APA, Harvard, Vancouver, ISO, and other styles
36

Gesquière, Gilles, and Alexis Manin. "3D Visualization of Urban Data Based on CityGML with WebGL." International Journal of 3-D Information Modeling 1, no. 3 (July 2012): 1–15. http://dx.doi.org/10.4018/ij3dim.2012070101.

Full text
Abstract:
Due to advances in computer graphics and improved network speeds, it is now possible to navigate 3D virtual worlds in real time. Until now, the technologies employed required installing standalone applications or browser plugins. The arrival of HTML5 brings new solutions for visualizing 3D data in a browser with WebGL. Several globe projects have proven that such technologies can be employed. Unfortunately, demonstrations are often based on proprietary formats for exchanging or storing data. In this work, we propose to use CityGML, a standard provided by the Open Geospatial Consortium. CityGML files are imported into our Environment Editor. With several tools that we present in this paper, the data are processed and stored. A client-server application is also presented that permits the visualization of geometry and semantics in a browser.
APA, Harvard, Vancouver, ISO, and other styles
37

HOOBI, Mays M. "SURVEY: EFFICIENT HYBRID ALGORITHMS OF CRYPTOGRAPHY." MINAR International Journal of Applied Sciences and Technology 2, no. 4 (December 1, 2020): 1–16. http://dx.doi.org/10.47832/2717-8234.4-2.1.

Full text
Abstract:
Day after day, digital data sizes increase rapidly over the Internet, and it is important that the data are not accessed by unauthorized users. Attackers attempt to access the sensitive parts of the data, so unauthorized access must be prevented and secure data exchange guaranteed. A variety of cryptographic approaches, based on private and public keys, have been used to convert users' secret data into secure ciphertext formats. Researchers have worked on efficient and secure data transmission and presented a variety of cryptographic approaches. For efficient and secure transmission of data over networks, hybrid encryption approaches are necessary. In this article, various encryption methods are reviewed, such as Rijndael, Number Theory Research Unit, Data Encryption Standard, Triple Data Encryption Standard, Elliptic Curve Cryptography, Rivest–Shamir–Adleman, Optimal Asymmetric Encryption Padding, Diffie-Hellman, HiSea, Improved Caesar, Digital Signature, and Advanced Encryption Standard. Keywords: Brute Force Attack, Cryptography, Digital Data, Hybrid Encryption, Search Space.
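The hybrid pattern the survey reviews can be made concrete in a few lines: a fast symmetric cipher protects the payload while an asymmetric cipher wraps the symmetric key. The following Python sketch, using the cryptography package, shows one such AES-GCM/RSA-OAEP combination; it illustrates the general technique, not any specific scheme from the article.

```python
# Minimal hybrid-encryption sketch: AES-GCM encrypts the payload, RSA-OAEP
# wraps the AES key. Uses the `cryptography` package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the message with a fresh AES key, then wrap that key with RSA.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"secret payload", None)
wrapped_key = private_key.public_key().encrypt(aes_key, oaep)

# Receiver: unwrap the AES key with the RSA private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"secret payload"
```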
APA, Harvard, Vancouver, ISO, and other styles
38

Aliyu, Mansur, Onyia Franklin Nonso, Anas Abdullahi, Umar Sani, and Zahriya L. Hassan. "Secure document and image transmission through an encrypted network system." Dutse Journal of Pure and Applied Sciences 8, no. 3b (October 14, 2022): 1–14. http://dx.doi.org/10.4314/dujopas.v8i3b.1.

Full text
Abstract:
Data communication and networking as a field gives high priority to data and network security. Internet security issues have become critical in information sharing, and robust techniques are needed to protect data shared over unsecured network channels. Cryptography is a technique for securing plaintext messages, for example through encrypted key exchange and authentication, but it reveals when and where communication is taking place, while steganography hides the existence of the data to be transmitted. There is thus a need for a tool that combines these two techniques in a single data transmission. This paper develops an application that combines the features of cryptography and steganography for secure communication. The application is a cross-platform tool that can effectively hide information in many different carrier file formats, such as image, audio, or digital video files. The study adopts the Advanced Encryption Standard (AES) and the Least Significant Bit (LSB) algorithm to arrive at a technique that is more robust in securing confidential documents and images. The developed app accepts different file formats when hiding plaintext messages and images. The app is recommended to government security agencies, financial institutions, e-commerce websites and apps, educational bodies, and individuals sending personal data online.
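The steganographic half of such a tool can be illustrated independently of the encryption step. The Python sketch below hides a byte string in the least significant bits of a grayscale PNG; the file names are placeholders, and in a combined scheme like the paper's the payload would first be AES-encrypted.

```python
# Minimal LSB steganography sketch: embed/extract a byte string in the least
# significant bit of each pixel. PNG is used because it is lossless.
import numpy as np
from PIL import Image

def embed(cover_path: str, payload: bytes, out_path: str) -> None:
    pixels = np.array(Image.open(cover_path).convert("L"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path)

def extract(stego_path: str, n_bytes: int) -> bytes:
    flat = np.array(Image.open(stego_path).convert("L"), dtype=np.uint8).flatten()
    return np.packbits(flat[: n_bytes * 8] & 1).tobytes()

embed("cover.png", b"meet at dawn", "stego.png")
print(extract("stego.png", 12))
```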
APA, Harvard, Vancouver, ISO, and other styles
39

Palamarchuk, Yu O., S. V. Ivanov, and I. G. Ruban. "The digitizing algorithm for precipitation in the atmosphere on the base of radar measurements." Ukrainian hydrometeorological journal, no. 18 (October 29, 2017): 40–47. http://dx.doi.org/10.31481/uhmj.18.2016.05.

Full text
Abstract:
There is an increasing demand for automated, high-quality, very-short-range forecasts and nowcasts of precipitation on small scales and at high update frequencies. Current prediction systems use different methods of determining precipitation, such as area tracking, individual cell tracking, and numerical models. All approaches are based on radar measurements. World-leading manufacturers of meteorological radars and the attendant visualization software are introduced in the paper. Advantages of numerical modelling over inertial schemes designed around statistical characteristics of convective processes are outlined. Accordingly, radar data assimilation systems, a necessary part of numerical models, are being intensively developed, and the use of digital formats for processing radar measurements in numerical algorithms has become important. The focus of this work is the development of a unified code for digital processing of radar signals at the preprocessing, filtration, assimilation, and numerical integration steps. The proposed code also includes thinning, screening, or superobbing radar data before passing them to the assimilation procedures. The information model manages radar data flows in metadata and binary array forms and conforms to the official second-generation European standard exchange format for weather radar datasets from different manufacturers. Results of radar measurement processing are presented for both a single radar and an overlapping radar network.
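The second-generation European exchange format referred to here is presumably the HDF5-based OPERA Data Information Model (ODIM). As an illustration, the following Python sketch walks the assumed ODIM group layout of a radar volume with h5py; the file name is a placeholder and the attribute names should be checked against the ODIM specification.

```python
# Minimal sketch: inspect an ODIM_HDF5-style radar volume with h5py.
# Assumed layout: /what, /where, /datasetN/dataM groups. "radar.h5" is a placeholder.
import h5py

with h5py.File("radar.h5", "r") as f:
    what = dict(f["what"].attrs)           # source, date, time, object type
    print("source:", what.get("source"), "object:", what.get("object"))
    for name, group in f.items():
        if name.startswith("dataset") and "data1/what" in group:
            q = group["data1/what"].attrs.get("quantity")
            print(name, "quantity:", q, "shape:", group["data1/data"].shape)
```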
APA, Harvard, Vancouver, ISO, and other styles
40

Mai, Hao, and Pascal Audet. "QuakeLabeler: A Fast Seismic Data Set Creation and Annotation Toolbox for AI Applications." Seismological Research Letters 93, no. 2A (February 2, 2022): 997–1010. http://dx.doi.org/10.1785/0220210290.

Full text
Abstract:
Abstract The production and preparation of data sets are essential steps in machine learning (ML) applications. With the increasing volume and scale of available ML techniques in seismology, annotating seismograms or seismic features has become time consuming and tedious for many researchers. Furthermore, most methods train and validate on unique data subsets, which hampers independent performance evaluation and comparison. To address this problem, we have developed the software QuakeLabeler, an open-source Python package to customize, build, and manage earthquake training data sets, including processing and visualization. QuakeLabeler has tight pipeline functions, which include retrieving seismograms from multiple online data centers, querying online human-reviewed catalogs, signal processing, annotating (labeling), and analyzing data distribution. In addition, relevant statistical graphics and human-readable output files can be generated. Various file export formats are supported, such as Seismic Analysis Code (*.sac), mini Standard for Exchange of Earthquake Data (*.mseed), NumPy (*.npz), MATLAB (*.mat), and the Hierarchical Data Format version 5 (*.hdf5). This toolbox is packaged with an interactive command-line interface. Three alternative running modes (beginner, advanced, and benchmark) are implemented, intended to offer specific data set solutions for different types of applications, that is, quick-start recipes for simple ML solutions, advanced design for customized project training, and benchmark bulletins for model comparison.
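As an illustration of the kind of format round-trip the toolbox automates, the Python sketch below reads waveforms from a miniSEED file with ObsPy and stores them as NumPy arrays for an ML pipeline. It uses none of the QuakeLabeler API, and the file name is a placeholder.

```python
# Minimal sketch: miniSEED -> NumPy (.npz) conversion with ObsPy.
# "event.mseed" is a placeholder file.
import numpy as np
from obspy import read

stream = read("event.mseed")               # one Trace per channel
arrays, meta = {}, {}
for tr in stream:
    key = tr.id                            # e.g. "IU.ANMO.00.BHZ"
    arrays[key] = tr.data.astype(np.float32)
    meta[key] = (tr.stats.sampling_rate, str(tr.stats.starttime))

np.savez("event.npz", **arrays)
print(meta)
```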
APA, Harvard, Vancouver, ISO, and other styles
41

McGinley, Tim Pat, Thomas Vestergaard, Cheol-Ho Jeong, and Finnur Pind. "An OpenBIM workflow to support collaboration between Acoustic Engineers and Architects." Journal of Physics: Conference Series 2069, no. 1 (November 1, 2021): 012164. http://dx.doi.org/10.1088/1742-6596/2069/1/012164.

Full text
Abstract:
Abstract Architects require the insight of acoustic engineers to understand how to improve and/or optimize the acoustic performance of their buildings. Normally this is supported by the architect providing digital models of the design to the acoustic engineer for analysis in the acoustician's disciplinary software, for instance Odeon. This current workflow suffers from the following challenges: (1) architects typically require feedback on architectural disciplinary models that have too much geometric information, unnecessarily complicating the acoustic analysis process; (2) the acoustician then has to waste time simplifying that geometry; and (3) this extra work wastes money which could otherwise be spent on faster design iterations supported by frequent feedback between architects and acousticians early in the design process. This paper focuses on the architect/acoustician workflow; however, similar challenges can be found in other disciplines. OpenBIM workflows provide opportunities to increase the standardization of processes and interfaces between disciplines by reducing the reliance on proprietary discipline-specific file formats and tools. This paper lays the foundation for an OpenBIM workflow that enables the acoustic engineer to provide near real-time feedback on the acoustic performance of the architectural design. The proposed workflow investigates the use of the international standard IFC as a design format rather than simply an exchange format. The workflow is presented here with the intention that it will be further explored and developed by other researchers, architects and acousticians.
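To suggest what the acoustician's side of such an OpenBIM workflow could look like, here is a minimal Python sketch that pulls rooms and their bounding elements straight from a shared IFC model with ifcopenshell. The file name is a placeholder, and this is only an illustration of consuming IFC, not the paper's workflow.

```python
# Minimal sketch: read rooms (IfcSpace) and their enclosing elements from an
# IFC model with ifcopenshell. "design.ifc" is a placeholder path.
import ifcopenshell

model = ifcopenshell.open("design.ifc")
for space in model.by_type("IfcSpace"):
    print(space.GlobalId, space.Name, space.LongName)

# Space boundaries link each room to the walls and slabs that enclose it;
# these are the surfaces whose absorption drives a room-acoustic simulation.
for rel in model.by_type("IfcRelSpaceBoundary"):
    if rel.RelatedBuildingElement is not None:
        print(rel.RelatingSpace.Name, "->", rel.RelatedBuildingElement.is_a())
```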
APA, Harvard, Vancouver, ISO, and other styles
42

Verhoturov, Alexey A., and Vyacheslav A. Melkiy. "GEOINFORMATION SUPPORT FOR FORECASTING FLOOD ZONES IN THE SOUTH OF SAKHALIN." Vestnik SSUGT (Siberian State University of Geosystems and Technologies) 26, no. 2 (2021): 115–26. http://dx.doi.org/10.33764/2411-1759-2021-26-2-115-126.

Full text
Abstract:
Modern hydrometeorological monitoring systems, for the most part, make wide use of web and GIS technologies. The territorial fragmentation of the divisions of the World Meteorological Organization (WMO), Roshydromet, the Russian Academy of Sciences, and other services and departments interested in obtaining data requires the creation of a unified information environment for the exchange of heterogeneous information. The formation of a geospatial data field has become possible with the availability of high-performance industrial design platforms that support standard data exchange formats suitable for the system being designed. The purpose of the study is to develop requirements for the geoinformation support necessary for flood forecasting. Methods: GIS mapping, interpretation, and analysis of Earth remote sensing data. When developing the system for hydrological monitoring of rivers in Southern Sakhalin, we used the experience of operating similar observational networks in the services of several European countries, as well as the geographically distributed GIS created by Roshydromet. Drawing on the extensive experience of predecessors, requirements for the geoinformation support necessary for predicting flood zones in the rivers of Southern Sakhalin have been developed. The initial data for creating a correct flood model are satellite images, large-scale topographic maps, digital terrain models, data from long-term hydrometeorological observations, and engineering surveys.
APA, Harvard, Vancouver, ISO, and other styles
43

Nunn, Ceri, Yosio Nakamura, Sharon Kedar, and Mark P. Panning. "A New Archive of Apollo’s Lunar Seismic Data." Planetary Science Journal 3, no. 9 (September 1, 2022): 219. http://dx.doi.org/10.3847/psj/ac87af.

Full text
Abstract:
Abstract The Apollo astronauts deployed seismic experiments on the nearside of the Moon between 1969 and 1972. Five stations collected passive seismic data. Apollo 11 operated for around 20 days, and stations 12, 14, 15, and 16 operated nearly continuously from their installation until 1977. Seismic data were collected and digitized on the Moon and transmitted to Earth. The data were recorded on magnetic reel-to-reel tapes, with timestamps representing the signal reception time on Earth. The taped data have been widely used for many applications and have previously been shared in various formats. The data have slightly varying sampling rates, due to random fluctuations of the data sampler and also its sensitivity to the significant temperature variations on the Moon’s surface. Additionally, there were timing errors. Previously shared versions of the Apollo data were affected by these problems. We have reimported the passive data to SEED (Standard for the Exchange of Earthquake Data) format, and we make these data available via Incorporated Research Institutions for Seismology and the Planetary Data System. We have cleaned the timestamp series to reduce incorrectly recorded timestamps. The archive includes five tracks: three components of the mid-period seismometers, one short-period component, and a time track containing the timestamps. The seismic data are provided unprocessed in their raw format, and we provide instrument response files. We hope that the new archive will make it easier for a new generation of seismologists to use these data to learn more about the structure of the Moon.
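For readers who want to try the new archive, the following Python sketch requests an hour of data from the IRIS FDSN service with ObsPy. The network, station, and channel codes shown ("XA", "S12", "MHZ") are assumptions based on the paper's description of the archive and should be verified against the published metadata before use.

```python
# Minimal sketch: fetch a slice of the re-archived Apollo seismic data via the
# IRIS FDSN web service with ObsPy. Codes below are assumed, not authoritative.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("1971-02-07T00:00:00")
st = client.get_waveforms("XA", "S12", "*", "MHZ", t0, t0 + 3600)
st.plot()
```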
APA, Harvard, Vancouver, ISO, and other styles
44

Alcina, Amparo. "Terminology standards and exchange formats." Tradumàtica: tecnologies de la traducció, no. 13 (December 31, 2015): 571. http://dx.doi.org/10.5565/rev/tradumatica.94.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ciccone, Angelo, Vittoria Ciotta, and Domenico Asprone. "INTEGRATION OF STRUCTURAL INFORMATION WITHIN A BIM-BASED ENVIRONMENT FOR SEISMIC STRUCTURAL E-PERMITS." JOURNAL OF CIVIL ENGINEERING AND MANAGEMENT 29, no. 2 (February 10, 2023): 171–93. http://dx.doi.org/10.3846/jcem.2023.18460.

Full text
Abstract:
The assessment of the structural safety of buildings, with the related outcomes and other structural information, is typically reported in unstructured sets of documents (tables, drawings, reports, etc.). This happens even if Building Information Modelling (BIM) workflows, platforms, and standards are adopted. Generally, the BIM database provides input data for the structural design, but most of the data produced by structural designers, according to the structural codes, do not fully integrate into the BIM database along with other context-related information. These data are not easily recorded, especially in openBIM standard file formats such as Industry Foundation Classes (IFC). In the context of digital procedures for permit applications pertaining to seismic structural engineering, the authors propose an openBIM approach for the integration of structural information to support the activities of building authorities' bodies (BABs). The proposed framework has led to the development of an Information Delivery Manual (IDM) and a Model View Definition (MVD), considering the IFC schema, for the integration and exchange of information within a BIM-based environment. Subsequently, the authors implemented the proposed IDM/MVD solution in a case study that provided an effective workflow for innovative future delivery of the necessary information to building authorities to obtain seismic authorization permits.
APA, Harvard, Vancouver, ISO, and other styles
46

May, Gokan, Sangje Cho, AmirHossein Majidirad, and Dimitris Kiritsis. "A Semantic Model in the Context of Maintenance: A Predictive Maintenance Case Study." Applied Sciences 12, no. 12 (June 15, 2022): 6065. http://dx.doi.org/10.3390/app12126065.

Full text
Abstract:
Advanced technologies in modern industry collect massive volumes of data from a plethora of sources, such as processes, machines, components, and documents. This also applies to predictive maintenance. To provide access to these data in a standard and structured way, researchers and practitioners need to design and develop a semantic model of maintenance entities to build a reference ontology for maintenance. To date, there have been numerous studies combining the domain of predictive maintenance and ontology engineering. However, such earlier works, which focused on semantic interoperability to exchange data with standardized meanings, did not fully leverage the opportunities provided by data federation to elaborate these semantic technologies further. Therefore, in this paper, we fill this research gap by addressing interoperability in smart manufacturing and the issue of federating different data formats effectively by using semantic technologies in the context of maintenance. Furthermore, we introduce a semantic model in the form of an ontology for mapping relevant data. The proposed solution is validated and verified using an industrial implementation.
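A small example may help make the semantic approach concrete: the Python sketch below stores maintenance facts as RDF triples with rdflib and retrieves them with SPARQL. All term names under `EX` are illustrative placeholders, not the paper's ontology.

```python
# Sketch: maintenance facts as RDF triples, queried with SPARQL via rdflib.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/maintenance#")  # placeholder namespace
g = Graph()
g.add((EX.pump1, RDF.type, EX.Machine))
g.add((EX.pump1, EX.hasSensorReading, Literal(87.5)))  # e.g. bearing temperature

results = g.query("""
    PREFIX ex: <http://example.org/maintenance#>
    SELECT ?machine ?reading
    WHERE { ?machine a ex:Machine ; ex:hasSensorReading ?reading . }
""")
for machine, reading in results:
    print(machine, reading)
```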
APA, Harvard, Vancouver, ISO, and other styles
47

Strickland, P. R., and B. McMahon. "Crystallographic publishing in the electronic age." Acta Crystallographica Section A Foundations of Crystallography 64, no. 1 (December 21, 2007): 38–51. http://dx.doi.org/10.1107/s0108767307045801.

Full text
Abstract:
The journals of the International Union of Crystallography have grown in size and number over the past 60 years to match developments in scientific practice and technique. High quality of publication has always been at the forefront of editorial policy and ways in which this has been achieved are described. In particular, the development of standard exchange and archive formats for crystallographic data has allowed the editorial office to conduct automated analyses of structural data supporting articles submitted for publication and these analyses assist the scientific editors in careful and critical peer review. The new information technologies of the Internet age have allowed the IUCr journals to flourish and to provide a wide range of powerful services to authors, editors and readers alike. The integration of literature and supporting structural data is of particular importance. The new technologies have also brought fresh economic and cultural challenges, and offer completely new opportunities to disseminate the results of scientific research. The journals continue to respond to these challenges and take advantage of new opportunities in innovative ways.
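The automated analyses described here are made possible by the regularity of the crystallographic archive format (CIF). As a toy illustration, the Python sketch below extracts simple key-value data items from a CIF file; real CIF parsing (loops, multi-line values) requires a full parser, and the file name is a placeholder.

```python
# Minimal sketch: read simple "_tag value" data items from a CIF file.
# "structure.cif" is a placeholder path; loops and multi-line values are skipped.
def cif_items(path: str) -> dict:
    items = {}
    for line in open(path, encoding="utf-8"):
        line = line.strip()
        if line.startswith("_"):
            parts = line.split(None, 1)
            if len(parts) == 2:           # tag and value on the same line
                items[parts[0]] = parts[1].strip("'\"")
    return items

cell = cif_items("structure.cif")
print(cell.get("_cell_length_a"), cell.get("_cell_length_b"), cell.get("_cell_length_c"))
```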
APA, Harvard, Vancouver, ISO, and other styles
48

Bhattacharjee, Mitali, Rajesh Raju, Aneesha Radhakrishnan, Vishalakshi Nanjappa, Babylakshmi Muthusamy, Kamlendra Singh, Dheebika Kuppusamy, et al. "A Bioinformatics Resource for TWEAK-Fn14 Signaling Pathway." Journal of Signal Transduction 2012 (May 9, 2012): 1–10. http://dx.doi.org/10.1155/2012/376470.

Full text
Abstract:
TNF-related weak inducer of apoptosis (TWEAK) is a new member of the TNF superfamily. It signals through TNFRSF12A, commonly known as Fn14. The TWEAK-Fn14 interaction regulates cellular activities including proliferation, migration, differentiation, apoptosis, angiogenesis, tissue remodeling and inflammation. Although TWEAK has been reported to be associated with autoimmune diseases, cancers, stroke, and kidney-related disorders, the downstream molecular events of TWEAK-Fn14 signaling are not yet available in any signaling pathway repository. In this paper, we manually compiled from the literature the downstream reactions stimulated by TWEAK-Fn14 interactions, in particular those reported in human systems. Our manual curation of the TWEAK-Fn14 pathway has resulted in the cataloging of 46 proteins involved in various biochemical reactions and 28 genes whose expression is induced by TWEAK-Fn14 signaling. We have made the data available in various standard exchange formats from NetPath, a repository for signaling pathways. We believe that this composite molecular interaction pathway will enable the identification of new signaling components in the TWEAK signaling pathway. This in turn may lead to the identification of potential therapeutic targets in TWEAK-associated disorders.
APA, Harvard, Vancouver, ISO, and other styles
49

Schreiber, Falk, Gary D. Bader, Martin Golebiewski, Michael Hucka, Benjamin Kormeier, Nicolas Le Novère, Chris Myers, et al. "Specifications of Standards in Systems and Synthetic Biology." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 1–3. http://dx.doi.org/10.1515/jib-2015-258.

Full text
Abstract:
Summary Standards shape our everyday life. From nuts and bolts to electronic devices and technological processes, standardised products and processes are all around us. Standards have technological and economic benefits, such as making information exchange, production, and services more efficient. However, novel, innovative areas often either lack proper standards, or documents about standards in these areas are not available from a centralised platform or formal body (such as the International Standardisation Organisation). Systems and synthetic biology is a relatively novel area, and it is only in the last decade that the standardisation of data, information, and models related to systems and synthetic biology has become a community-wide effort. Several open standards have been established and are under continuous development as a community initiative. COMBINE, the ‘COmputational Modeling in BIology’ NEtwork [1], has been established as an umbrella initiative to coordinate and promote the development of the various community standards and formats for computational models. There are two yearly meetings: HARMONY (Hackathons on Resources for Modeling in Biology), hackathon-type meetings with a focus on developing support for the standards, and COMBINE forums, workshop-style events with oral presentations, discussions, posters, and breakout sessions for further developing the standards. For more information see http://co.mbine.org/. So far, the different standards have been published and made accessible through the standards’ web pages or preprint services. The aim of this special issue is to provide a single, easily accessible and citable platform for the publication of standards in systems and synthetic biology. This special issue is intended to serve as a central access point to standards and related initiatives in systems and synthetic biology; it will be published annually to provide an opportunity for standard development groups to communicate updated specifications.
APA, Harvard, Vancouver, ISO, and other styles
50

Palm, Julia, Frank A. Meineke, Jens Przybilla, and Thomas Peschel. "“fhircrackr”: An R Package Unlocking Fast Healthcare Interoperability Resources for Statistical Analysis." Applied Clinical Informatics 14, no. 01 (January 2023): 54–64. http://dx.doi.org/10.1055/s-0042-1760436.

Full text
Abstract:
Abstract Background The growing interest in the secondary use of electronic health record (EHR) data has increased the number of new data integration and data sharing infrastructures. The present work has been developed in the context of the German Medical Informatics Initiative, where 29 university hospitals agreed to the usage of the Health Level Seven Fast Healthcare Interoperability Resources (FHIR) standard for their newly established data integration centers. This standard is optimized to describe and exchange medical data but less suitable for standard statistical analysis which mostly requires tabular data formats. Objectives The objective of this work is to establish a tool that makes FHIR data accessible for standard statistical analysis by providing means to retrieve and transform data from a FHIR server. The tool should be implemented in a programming environment known to most data analysts and offer functions with variable degrees of flexibility and automation catering to users with different levels of FHIR expertise. Methods We propose the fhircrackr framework, which allows downloading and flattening FHIR resources for data analysis. The framework supports different download and authentication protocols and gives the user full control over the data that is extracted from the FHIR resources and transformed into tables. We implemented it using the programming language R [1] and published it under the GPL-3 open source license. Results The framework was successfully applied to both publicly available test data and real-world data from several ongoing studies. While the processing of larger real-world data sets puts a considerable burden on computation time and memory consumption, those challenges can be attenuated with a number of suitable measures like parallelization and temporary storage mechanisms. Conclusion The fhircrackr R package provides an open source solution within an environment that is familiar to most data scientists and helps overcome the practical challenges that still hamper the usage of EHR data for research.
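fhircrackr itself is an R package; as a language-neutral illustration of the flattening step it performs, the Python sketch below fetches a FHIR Bundle over REST and turns each nested Patient resource into one flat row. The endpoint is a public test server used only as an example and may change.

```python
# Minimal sketch of FHIR "flattening": fetch a Bundle and tabulate Patients.
# The base URL is a public HAPI test server, used here only for illustration.
import requests

base = "https://hapi.fhir.org/baseR4"
bundle = requests.get(f"{base}/Patient", params={"_count": 20}).json()

rows = []
for entry in bundle.get("entry", []):
    p = entry["resource"]
    name = (p.get("name") or [{}])[0]     # first HumanName, if any
    rows.append({
        "id": p.get("id"),
        "family": name.get("family"),
        "given": " ".join(name.get("given", [])),
        "gender": p.get("gender"),
        "birthDate": p.get("birthDate"),
    })

for row in rows:
    print(row)
```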
APA, Harvard, Vancouver, ISO, and other styles
