Journal articles on the topic 'Data storage representation'

Consult the top 50 journal articles for your research on the topic 'Data storage representation.'


1

Gelbard, Roy, and Israel Spiegler. "Representation and Storage of Motion Data." Journal of Database Management 13, no. 3 (July 2002): 46–63. http://dx.doi.org/10.4018/jdm.2002070104.

2

Gutsche, Oliver, and Igor Mandrichenko. "Striped Data Analysis Framework." EPJ Web of Conferences 245 (2020): 06042. http://dx.doi.org/10.1051/epjconf/202024506042.

Abstract:
A columnar data representation is known to be an efficient way to store data, especially when analysis often touches only a small fragment of the available data structures. A representation like Apache Parquet goes a step beyond plain columnar storage by also splitting data horizontally, which allows easy parallelization of data analysis. Building on the general idea of columnar data storage, working on the [LDRD Project], we have developed a striped data representation which, we believe, is better suited to the needs of High Energy Physics data analysis. A traditional columnar approach allows for efficient analysis of complex structures; while keeping all the benefits of columnar data representations, the striped mechanism goes further by enabling easy parallelization of computations without requiring special hardware. We present an implementation and some performance characteristics of this data representation mechanism using either a distributed NoSQL database or a local file system, unified under the same API and data representation model. The representation is efficient and at the same time simple enough to allow a common data model and APIs for a wide range of underlying storage mechanisms, such as distributed NoSQL databases and local file systems. Striped storage adopts NumPy arrays as its basic data representation format, which makes it easy and efficient to use in Python applications. The Striped Data Server is a web service that hides the server implementation details from the end user, easily exposes data to WAN users, and can utilize well-known, mature data caching solutions to further increase data access efficiency. We consider the Striped Data Server a candidate core for an enterprise-scale data analysis platform for High Energy Physics and similar areas of data processing.
We have been testing this architecture with a 2 TB dataset from a CMS dark matter search and plan to expand it to multiple 100 TB or even PB scale. We present the striped format, the Striped Data Server architecture, and performance test results.
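The striped idea described in the abstract above can be illustrated with a small sketch: store each attribute as a flat array and split it horizontally into fixed-size stripes that can be processed independently. The data values, stripe size, and function names here are hypothetical illustrations, not the authors' actual implementation.

```python
import numpy as np

def to_stripes(column, stripe_size):
    """Split a columnar array horizontally into fixed-size stripes."""
    return [column[i:i + stripe_size] for i in range(0, len(column), stripe_size)]

# Hypothetical per-event quantity stored column-wise (one array per attribute).
pt = np.array([12.5, 48.1, 33.0, 7.2, 91.4, 22.8])

stripes = to_stripes(pt, stripe_size=2)

# Each stripe can be processed independently, which is what makes
# parallelization easy; here the partial results are combined serially.
partial_sums = [float(s.sum()) for s in stripes]
total = sum(partial_sums)

assert len(stripes) == 3
assert abs(total - float(pt.sum())) < 1e-9
```

In a real system each stripe would live in a NoSQL row or a file and be handed to a separate worker; the point is that the per-stripe computation needs no coordination.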
3

Vakali, Athena, and Evimaria Terzi. "Multimedia data storage and representation issues on tertiary storage subsystems." ACM SIGOPS Operating Systems Review 35, no. 2 (April 2001): 61–77. http://dx.doi.org/10.1145/377069.377087.

4

Cimino, James J. "Data storage and knowledge representation for clinical workstations." International Journal of Bio-Medical Computing 34, no. 1-4 (January 1994): 185–94. http://dx.doi.org/10.1016/0020-7101(94)90021-3.

5

Sheikhizadeh, Siavash, M. Eric Schranz, Mehmet Akdel, Dick de Ridder, and Sandra Smit. "PanTools: representation, storage and exploration of pan-genomic data." Bioinformatics 32, no. 17 (September 1, 2016): i487–i493. http://dx.doi.org/10.1093/bioinformatics/btw455.

6

Fischer, Felix, M. Alper Selver, Sinem Gezer, Oğuz Dicle, and Walter Hillen. "Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data." Journal of Medical and Biological Engineering 35, no. 6 (November 18, 2015): 709–23. http://dx.doi.org/10.1007/s40846-015-0097-5.

7

Li, Yuzhen, Jianming Lu, Jihong Guan, Mingying Fan, Ayman Haggag, and Takashi Yahagi. "GML Topology Data Storage Schema Design." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (July 20, 2007): 701–8. http://dx.doi.org/10.20965/jaciii.2007.p0701.

Abstract:
Geography Markup Language (GML) was developed to standardize the representation of geographical data in extensible markup language (XML), which facilitates geographical information exchange and sharing. Increasing amounts of geographical data are being presented in GML as its use widens, raising the question of how to store GML data efficiently to facilitate its management and retrieval. We analyze topology data in GML and propose storing nonspatial and spatial data from GML documents in spatial databases (e.g., Oracle Spatial, DB2 Spatial, and PostGIS/PostgreSQL). We then use an example to analyze the topology relation.
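The separation step described above — pulling non-spatial attributes apart from geometry before loading them into separate spatial-database tables — can be sketched as follows. The element names and the toy fragment are illustrative only, not the GML schema or storage schema used in the paper.

```python
import xml.etree.ElementTree as ET

# Toy GML-like fragment; element names are hypothetical illustrations.
doc = """<Feature>
  <name>Central Park</name>
  <geometry>1.0 2.0 3.0 4.0</geometry>
</Feature>"""

root = ET.fromstring(doc)

# Split non-spatial attributes from spatial data, as would be done
# before loading them into separate database tables.
nonspatial = {el.tag: el.text for el in root if el.tag != "geometry"}
spatial = [float(x) for x in root.find("geometry").text.split()]

assert nonspatial == {"name": "Central Park"}
assert spatial == [1.0, 2.0, 3.0, 4.0]
```

The non-spatial dictionary would map onto ordinary relational columns, while the coordinate list would be handed to the spatial extension's geometry type.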
8

Lee, Sang Hun, and Kunwoo Lee. "Partial Entity Structure: A Compact Boundary Representation for Non-Manifold Geometric Modeling." Journal of Computing and Information Science in Engineering 1, no. 4 (November 1, 2001): 356–65. http://dx.doi.org/10.1115/1.1433486.

Abstract:
Non-manifold boundary representations have become very popular in recent years and various representation schemes have been proposed, as they represent a wider range of objects, for various applications, than conventional manifold representations. As these schemes mainly focus on describing sufficient adjacency relationships of topological entities, the models represented in these schemes occupy storage space redundantly, although they are very efficient in answering queries on topological adjacency relationships. To solve this problem, in this paper, we propose a compact as well as fast non-manifold boundary representation, called the partial entity structure. This representation reduces the storage size to half that of the radial edge structure, which is one of the most popular and efficient of existing data structures, while allowing full topological adjacency relationships to be derived without loss of efficiency. In order to verify the time and storage efficiency of the partial entity structure, the time complexity of basic query procedures and the storage requirement for typical geometric models are derived and compared with those of existing schemes.
9

Kumar, Randhir, and Rakesh Tripathi. "Data Provenance and Access Control Rules for Ownership Transfer Using Blockchain." International Journal of Information Security and Privacy 15, no. 2 (April 2021): 87–112. http://dx.doi.org/10.4018/ijisp.2021040105.

Abstract:
Provenance provides information about how data came to be in its present state. Recently, many critical applications have been working with data provenance and provenance security. The main challenges in provenance-based applications are storage representation, provenance security, and the centralized approach. In this paper, the authors propose a secure trading framework based on blockchain techniques, with features such as decentralization, immutability, and integrity, to solve the trust crisis in centralized provenance-based systems. To address the storage representation of data provenance, they propose a JavaScript Object Notation (JSON) structure. To improve provenance security, they propose access control language (ACL) rules. To implement the JSON structure and ACL rules, the permissioned-blockchain tool "Hyperledger Composer" is used. They demonstrate that their framework minimizes execution time as the number of transactions increases, in terms of both storage representation of data provenance and security.
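What a JSON provenance record might look like can be sketched with a few lines of Python. The field names below are hypothetical illustrations, not the schema from the paper; the deterministic serialization and hash show why JSON suits a blockchain setting, where records are hashed and chained.

```python
import json
import hashlib

# Hypothetical provenance record for an ownership transfer;
# field names are illustrative, not the authors' schema.
record = {
    "assetId": "asset-001",
    "action": "ownership-transfer",
    "previousOwner": "bob",
    "newOwner": "alice",
    "timestamp": "2021-04-01T12:00:00Z",
}

# Serialize deterministically (sorted keys, no whitespace) so the
# same record always produces the same hash for the ledger.
serialized = json.dumps(record, sort_keys=True, separators=(",", ":"))
digest = hashlib.sha256(serialized.encode()).hexdigest()

restored = json.loads(serialized)
assert restored == record
assert len(digest) == 64  # SHA-256 hex digest length
```

An ACL rule would then decide, per identity and per field, whether a given participant may read or append such records.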
10

Leng, Yonglin, Zhikui Chen, and Yueming Hu. "STLIS: A Scalable Two-Level Index Scheme for Big Data in IoT." Mobile Information Systems 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/5341797.

Abstract:
The rapid development of the Internet of Things is causing dramatic growth in data, which poses a major challenge for the storage and quick retrieval of big data. As an effective representation model, RDF has received the most attention, and more and more storage and index schemes have been developed for it. For large-scale RDF data, most of them suffer from a large number of self-joins, high storage cost, and many intermediate results. In this paper, we propose a scalable two-level index scheme (STLIS) for RDF data. In the first level, we devise a compressed path template tree (CPTT) index based on S-tree to retrieve the candidate sets of full paths. In the second level, we create a hierarchical edge index (HEI) and a node-predicate (NP) index to accelerate matching. Extensive experiments are executed on two representative RDF benchmarks and one real RDF dataset in IoT, comparing against three representative index schemes: RDF-3X, Bitmat, and TripleBit. Results demonstrate that our proposed scheme can respond to complex queries in real time and saves considerable storage space compared with RDF-3X and Bitmat.
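The general idea behind RDF index schemes such as those compared above — keeping the same triples in more than one permutation so that different query patterns avoid scans — can be sketched with dictionaries. This is a toy illustration of permutation indexing in general, not the STLIS structure itself; all names and data are hypothetical.

```python
from collections import defaultdict

class TripleIndex:
    """Toy RDF store with SPO and POS permutation indexes.

    Illustrates the general multi-permutation indexing idea
    (as in systems like RDF-3X), not the STLIS scheme.
    """
    def __init__(self):
        self.spo = defaultdict(lambda: defaultdict(set))  # subject -> predicate -> objects
        self.pos = defaultdict(lambda: defaultdict(set))  # predicate -> object -> subjects

    def add(self, s, p, o):
        self.spo[s][p].add(o)
        self.pos[p][o].add(s)

    def objects(self, s, p):
        """Answer (s, p, ?o) patterns from the SPO index."""
        return self.spo[s][p]

    def subjects(self, p, o):
        """Answer (?s, p, o) patterns from the POS index, with no scan."""
        return self.pos[p][o]

idx = TripleIndex()
idx.add("sensor1", "type", "Thermometer")
idx.add("sensor2", "type", "Thermometer")
idx.add("sensor1", "locatedIn", "room42")

assert idx.subjects("type", "Thermometer") == {"sensor1", "sensor2"}
assert idx.objects("sensor1", "locatedIn") == {"room42"}
```

The storage-cost trade-off the abstract mentions is visible even here: every extra permutation speeds up one pattern shape at the price of duplicating the triples.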
11

Wang, Qing Guo. "A 3D Surface Data Model for Fast Visualization of 3DCM." Advanced Materials Research 594-597 (November 2012): 2351–55. http://dx.doi.org/10.4028/www.scientific.net/amr.594-597.2351.

Abstract:
A 3D data model is an indispensable component of any 3D GIS and forms the basis of 3D spatial analysis and representation. At present, plenty of representative 3D data models have been proposed. However, existing models neglect the display result and the consumption of storage space. Based on an analysis of existing 3D GIS data models, a 3D surface model composed of nodes, segments, and triangles is proposed in this paper for fast visualization. The data structure and formal representation of the proposed 3D surface model are developed to organize and store the data of the 3D model. Finally, an experiment compares this 3D surface model with other 3D data models; the results demonstrate that the proposed model is superior to existing data models in terms of data volume and, moreover, achieves fast visualization.
12

Goudarzi, M., M. Asghari, P. Boguslawski, and A. A. Rahman. "DUAL HALF EDGE DATA STRUCTURE IN DATABASE FOR BIG DATA IN GIS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-2/W2 (October 19, 2015): 41–45. http://dx.doi.org/10.5194/isprsannals-ii-2-w2-41-2015.

Abstract:
In GIS, different types of data structures have been proposed to represent 3D models and examine the relationships between spatial objects. The Dual Half-Edge (DHE) is a data structure that permits the simultaneous representation of the geometry and topology of models, with a special focus on building interiors. In this paper, the G-Maps model is analyzed and compared with the DHE model from the storage-cost point of view, since they have some features in common and G-Maps is widely used in GIS. The primary result shows that the DHE is more efficient than G-Maps with regard to storage cost.
13

Wang, Hai Tang. "A Compressed Representation of a Image." Applied Mechanics and Materials 241-244 (December 2012): 2769–72. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2769.

Abstract:
An image is described by spatial changes in grayscale, color, or texture, but image storage and analysis are very complex, and for dynamic images the amount of data is even greater. Image compression is therefore necessary for image storage and transmission. To save memory space and obtain a compressed representation of an image, the image is mapped onto a graph, and a labeling of that graph can be used for compression.
14

Li, Tianyi, Lu Chen, Christian S. Jensen, and Torben Bach Pedersen. "TRACE." Proceedings of the VLDB Endowment 14, no. 7 (March 2021): 1175–87. http://dx.doi.org/10.14778/3450980.3450987.

Abstract:
The deployment of vehicle location services generates increasingly massive vehicle trajectory data, which incurs high storage and transmission costs. A range of studies target offline compression to reduce the storage cost. However, to enable online services such as real-time traffic monitoring, it is attractive to also reduce transmission costs by being able to compress streaming trajectories in real-time. Hence, we propose a framework called TRACE that enables compression, transmission, and querying of network-constrained streaming trajectories in a fully online fashion. We propose a compact two-stage representation of streaming trajectories: a speed-based representation removes redundant information, and a multiple-references based referential representation exploits subtrajectory similarities. In addition, the online referential representation is extended with reference selection, deletion and rewriting functions that further improve the compression performance. An efficient data transmission scheme is provided for achieving low transmission overhead. Finally, indexing and filtering techniques support efficient real-time range queries over compressed trajectories. Extensive experiments with real-life and synthetic datasets evaluate the different parts of TRACE, offering evidence that it is able to outperform the existing representative methods in terms of both compression ratio and transmission cost.
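The speed-based redundancy removal that TRACE's first stage performs can be illustrated with a minimal sketch: drop trajectory points whose position is predictable from the neighbouring points at constant speed. The 1-D road-segment model, tolerance, and function name are hypothetical simplifications, not the paper's actual representation.

```python
def compress(points, tol=1e-6):
    """Keep only points where the speed changes by more than `tol`.

    `points` is a list of (timestamp, position) pairs along a 1-D road
    segment; a dropped point can be reconstructed exactly by linear
    interpolation between its kept neighbours.
    """
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        v_in = (cur[1] - prev[1]) / (cur[0] - prev[0])
        v_out = (nxt[1] - cur[1]) / (nxt[0] - cur[0])
        if abs(v_out - v_in) > tol:  # speed changed: keep the breakpoint
            kept.append(cur)
    kept.append(points[-1])
    return kept

# Constant speed until t=2, then faster: only the breakpoint survives.
traj = [(0, 0.0), (1, 10.0), (2, 20.0), (3, 40.0), (4, 60.0)]
assert compress(traj) == [(0, 0.0), (2, 20.0), (4, 60.0)]
```

The second, referential stage would then replace runs of kept points that match an earlier reference trajectory with a pointer to that reference, which is where the sub-trajectory similarity is exploited.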
15

Shahid, Arsalan, Thien-An Ngoc Nguyen, and M.-Tahar Kechadi. "Big Data Warehouse for Healthcare-Sensitive Data Applications." Sensors 21, no. 7 (March 28, 2021): 2353. http://dx.doi.org/10.3390/s21072353.

Abstract:
Obesity is a major public health problem worldwide, and the prevalence of childhood obesity is of particular concern. Effective interventions for preventing and treating childhood obesity aim to change behaviour and exposure at the individual, community, and societal levels. However, monitoring and evaluating such changes is very challenging. The EU Horizon 2020 project “Big Data against Childhood Obesity (BigO)” aims at gathering large-scale data from a large number of children using different sensor technologies to create comprehensive obesity prevalence models for data-driven predictions about specific policies on a community. It further provides real-time monitoring of the population responses, supported by meaningful real-time data analysis and visualisations. Since BigO involves monitoring and storing personal data related to the behaviours of a potentially vulnerable population, data representation, security, and access control are crucial. In this paper, we briefly present the BigO system architecture and focus on the components of the system that deal with data access control, storage, anonymisation, and the corresponding interfaces with the rest of the system. We propose a three-layered data warehouse architecture: the back-end layer consists of a database management system for data collection, de-identification, and anonymisation of the original datasets; role-based permissions and secured views are implemented in the access control layer; and the controller layer regulates the data access protocols for any data access and data analysis. We further present the data representation methods and the storage models, considering the privacy and security mechanisms. The data privacy and security plans are devised based on the types of collected personal data, the types of users, data storage, data transmission, and data analysis.
We discuss in detail the challenges of privacy protection in this large distributed data-driven application and implement novel privacy-aware data analysis protocols to ensure that the proposed models guarantee the privacy and security of datasets. Finally, we present the BigO system architecture and its implementation, which integrates privacy-aware protocols.
16

Jones, Andrew, Jonathan Wastling, and Ela Hunt. "Proposal for a Standard Representation of Two-Dimensional Gel Electrophoresis Data." Comparative and Functional Genomics 4, no. 5 (2003): 492–501. http://dx.doi.org/10.1002/cfg.323.

Abstract:
The global analysis of proteins is now feasible due to improvements in techniques such as two-dimensional gel electrophoresis (2-DE), mass spectrometry, yeast two-hybrid systems and the development of bioinformatics applications. The experiments form the basis of proteomics, and present significant challenges in data analysis, storage and querying. We argue that a standard format for proteome data is required to enable the storage, exchange and subsequent re-analysis of large datasets. We describe the criteria that must be met for the development of a standard for proteomics. We have developed a model to represent data from 2-DE experiments, including difference gel electrophoresis along with image analysis and statistical analysis across multiple gels. This part of proteomics analysis is not represented in current proposals for proteomics standards. We are working with the Proteomics Standards Initiative to develop a model encompassing biological sample origin, experimental protocols, a number of separation techniques and mass spectrometry. The standard format will facilitate the development of central repositories of data, enabling results to be verified or re-analysed, and the correlation of results produced by different research groups using a variety of laboratory techniques.
17

Mamoutova, Olga V., Svetlana V. Shirokova, Mikhail B. Uspenskij, and Aleksandra V. Loginova. "The ontology-based approach to data storage systems technical diagnostics." E3S Web of Conferences 91 (2019): 08018. http://dx.doi.org/10.1051/e3sconf/20199108018.

Abstract:
Monitoring and diagnosing the state of data storage systems, as well as assessing reliability and troubleshooting, require a formalized health model. A comparative analysis of existing knowledge-representation methods has shown that an ontological approach is well suited for this task. This paper introduces a machine-representable data storage reliability ontology with an expert health model as baseline data. The classes of the ontology include the key terms of the reliability domain. The stated requirements for data-interpretation tools allow further processing of the ontology-based knowledge base. The described ontology-based diagnostic systems have shown their applicability in the case of data storage systems in the construction industry.
18

Frenkel, Michael, Robert D. Chiroco, Vladimir Diky, Qian Dong, Kenneth N. Marsh, John H. Dymond, William A. Wakeham, Stephen E. Stein, Erich Königsberger, and Anthony R. H. Goodwin. "XML-based IUPAC standard for experimental, predicted, and critically evaluated thermodynamic property data storage and capture (ThermoML) (IUPAC Recommendations 2006)." Pure and Applied Chemistry 78, no. 3 (January 1, 2006): 541–612. http://dx.doi.org/10.1351/pac200678030541.

Abstract:
ThermoML is an Extensible Markup Language (XML)-based new IUPAC standard for storage and exchange of experimental, predicted, and critically evaluated thermophysical and thermochemical property data. The basic principles, scope, and description of all structural elements of ThermoML are discussed. ThermoML covers essentially all thermodynamic and transport property data (more than 120 properties) for pure compounds, multicomponent mixtures, and chemical reactions (including change-of-state and equilibrium reactions). Representations of all quantities related to the expression of uncertainty in ThermoML conform to the Guide to the Expression of Uncertainty in Measurement (GUM). The ThermoMLEquation schema for representation of fitted equations with ThermoML is also described and provided as supporting information together with specific formulations for several equations commonly used in the representation of thermodynamic and thermophysical properties. The role of ThermoML in global data communication processes is discussed. The text of a variety of data files (use cases) illustrating the ThermoML format for pure compounds, mixtures, and chemical reactions, as well as the complete ThermoML schema text, are provided as supporting information.
19

Morell, Vicente, Miguel Cazorla, Sergio Orts-Escolano, and Jose Garcia-Rodriguez. "3D Maps Representation Using GNG." Mathematical Problems in Engineering 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/972304.

Abstract:
Current RGB-D sensors provide a large amount of valuable information for mobile robotics tasks like 3D map reconstruction, but the storage and processing of the incremental data provided by the different sensors over time quickly become unmanageable. In this work, we focus on 3D map representation and propose using the Growing Neural Gas (GNG) network as a model to represent 3D input data. The GNG method is able to represent the input data with a desired number of neurons, or resolution, while preserving the topology of the input space. Experiments show that the GNG method yields a better input-space adaptation than other state-of-the-art 3D map representation methods.
20

Li, Ying, and Baotian Dong. "The Algebraic Operations and Their Implementation Based on a Two-Layer Cloud Data Model." Cybernetics and Information Technologies 16, no. 6 (December 1, 2016): 5–26. http://dx.doi.org/10.1515/cait-2016-0074.

Abstract:
The existing cloud data models cannot adequately meet the management requirements of structured data, including a great deal of relational data; therefore, a two-layer cloud data model is proposed. A composite object is defined to model nested data in the representation layer, while a 4-tuple is defined to model non-nested data in the storage layer. Referring to relational algebra, the concept of the SNO (Simple Nested Object) is defined as the basic operational unit of the algebraic operations, and formal definitions of the algebraic operations, consisting of the set operations and the query operations on the representation layer, are proposed. An algorithm for extracting all SNOs from a CAO (Component-Attribute-Object) set of a composite object is proposed first as the foundation; pseudo-code implementations of the algebraic operations on the storage layer then follow. Logical proofs and worked examples indicate that the definitions and algorithms of the algebraic operations are correct.
21

Skauli, Torbjørn. "Sensor noise informed representation of hyperspectral data, with benefits for image storage and processing." Optics Express 19, no. 14 (June 22, 2011): 13031. http://dx.doi.org/10.1364/oe.19.013031.

22

Marjit, Ujjal, Kumar Sharma, and Utpal Biswas. "Provenance Representation and Storage Techniques in Linked Data: A State-of-the-art Survey." International Journal of Computer Applications 38, no. 9 (January 28, 2012): 23–28. http://dx.doi.org/10.5120/4637-6889.

23

Devarajan, Viji, and Revathy Subramanian. "Analyzing semantic similarity amongst textual documents to suggest near duplicates." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (March 1, 2022): 1703. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1703-1711.

Abstract:
Data deduplication techniques remove repeated or redundant data from storage. In recent years, ever more data has been generated and stored, and much of it is redundant or semantically similar in content, which reduces storage efficiency and raises storage cost. To overcome this problem, we propose a hybrid bidirectional encoder representations from transformers (BERT) model for text semantics using a graph convolutional network (HBTSG), a word-embedding-based deep learning model that identifies near duplicates based on the semantic relationship between text documents. In this paper we hybridize the concepts of chunking and semantic analysis. The chunking process splits the documents into blocks; in the next stage, we identify the semantic relationship between documents using word-embedding techniques. The approach combines the advantages of chunking, feature extraction, and semantic relations to provide better results.
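The chunk-and-compare pipeline described above can be sketched without any transformer model by standing in simple bag-of-words vectors for the embeddings. The chunk size, threshold values, and example documents below are hypothetical; the paper's actual model uses BERT-style embeddings with a graph convolutional network.

```python
import math
from collections import Counter

def chunks(text, size=5):
    """Chunking step: split a document into fixed-size word blocks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def cosine(a, b):
    """Cosine similarity of two texts under a bag-of-words vectorization
    (a stand-in for the learned embeddings used in the paper)."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

doc1 = "data deduplication removes redundant data from storage systems"
doc2 = "data deduplication removes redundant data from backup storage"
doc3 = "growing neural gas networks model three dimensional point clouds"

assert len(chunks(doc1)) == 2          # 8 words -> two 5-word blocks
assert cosine(doc1, doc2) > 0.7        # near-duplicates score high
assert cosine(doc1, doc3) < 0.2        # unrelated documents score low
```

Replacing the bag-of-words vectors with contextual embeddings is what lets the real system catch paraphrased near-duplicates that share few surface words.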
24

Chernyshev, Denys, Svitlana Tsiutsiura, Tamara Lyashchenko, Yuliia Luzina, and Vitalii Borodynia. "THE USE OF DATA WAREHOUSES ON THE EXAMPLE OF THE APPLICATION PROCESSING SYSTEM IN GOVERNMENT AGENCIES." Management of Development of Complex Systems, no. 44 (November 30, 2020): 166–74. http://dx.doi.org/10.32347/2412-9933.2020.44.166-174.

Abstract:
This article highlights the importance of information and the need to ensure its proper storage and use. It examines how data stores function in one of the most common design models, the Data Flow Diagram (DFD), and data storage (DS or DW) as a whole. The study covers: the types and features of data stores used in information systems (relational, multidimensional, and hybrid repositories), from the point of view of the models used to present data; the appearance of stores and the rules for building them; the letter identifiers “D”, “C”, “M”, and “T”, which determine the store type; the features of the numeric part of the identifiers for decompositions of first- and second-tier processes; and the mechanisms that retain data for intermediate processing in information systems. It also considers the transition of properties and characteristics from physical to logical representation and the rationalization of data warehouses through the features of the logical model. The work examines the construction of DFDs and the reflection of interrelationships in all component diagrams, as defined by the general rules valid for both. The highlighted issues concern diagram elements that can be freely used within Ukraine, as well as the element content of the different diagram types. The functionality of domestic DFD diagrams allows a sufficient, albeit limited, element structure for construction. The diagrams of Ukrainian manufacture are somewhat simplified and imperfect by modern technological standards, but these factors do not diminish the importance and necessity of adequate data protection at the highest level.
This work therefore highlights the most informative aspects of using each type of store and the demand for DS in the field, along with the possible advantages and disadvantages of physical data stores and the features of virtual operation.
25

Stoll, Thomas. "CorpusDB: Software for Analysis, Storage, and Manipulation of Sound Corpora." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 5 (June 30, 2021): 108–13. http://dx.doi.org/10.1609/aiide.v9i5.12655.

Abstract:
CorpusDB is a system for representing sound files and associated analysis metadata in a structured format. The formats and conventions used in conjunction with the database allow for representation of sound files and their processed variants; multiple, overlapping, hierarchical relationships between sound files and segments thereof; and connections between sounds, their transformations, and analysis metadata. The software described in this paper is a parallel implementation consisting of SuperCollider classes, Python classes, and a common data representation of corpora that allows for seamless sharing of data between the two complementary environments. Code examples and listings of multi-step algorithms are included that demonstrate the kinds of operations possible within this system.
26

Egerton, R. F., D. S. Bright, S. D. Davilla, P. Ingram, E. J. Kirkland, M. Kundmann, C. E. Lyman, P. Rez, E. Steele, and N. J. Zaluzec. "Standard formats for the exchange and storage of image data." Proceedings, annual meeting, Electron Microscopy Society of America 51 (August 1, 1993): 220–21. http://dx.doi.org/10.1017/s0424820100146941.

Abstract:
In microscopy, there is an increasing need for images to be recorded electronically and stored digitally on disk or tape. This image data can be shared by mailing these magnetic media or by electronic transmission along telephone lines (e.g., modem transfer) or special networks, such as Bitnet and Internet. In each case, the format in which the image is stored or transmitted must be known to the recipient in order to correctly recover all the information. Because there are many image formats to choose from, it would undoubtedly save misunderstanding and frustration if a group of individuals with similar interests and needs could agree upon a common format. The MSA Standards Committee has surveyed several formats which could be of particular interest to microscopists, with a view to making a recommendation to our community. Our chief concern has been compatibility with existing software, combined with an adequate representation of the data, compactness of data storage (on disk), and a reasonable rate of data transfer.
27

Baranowski, Z., L. Canali, R. Toebbicke, J. Hrivnac, and D. Barberis. "A study of data representation in Hadoop to optimize data storage and search performance for the ATLAS EventIndex." Journal of Physics: Conference Series 898 (October 2017): 062020. http://dx.doi.org/10.1088/1742-6596/898/6/062020.

28

K. Rahimunnisa. "Data Health Functionality using Hyperledger Fabric Technology." Journal of Information Technology and Digital World 4, no. 4 (December 2022): 280–88. http://dx.doi.org/10.36548/jitdw.2022.4.003.

Abstract:
Hyperledger Fabric, a permissioned blockchain infrastructure, offers a modular architecture, smart contract execution, configurable consensus, membership services, and a representation of the roles of the nodes in the infrastructure. It also provides high-integrity data sharing. Since patient health records are highly confidential, this study examines how to communicate medical data with better privacy protection in healthcare. Transferring health-related data using the Hyperledger framework improves storage reliability and security. Furthermore, the healthcare supply-chain process can be improved by Hyperledger Fabric networks through enhanced visibility and traceability of network interactions. Companies with access to the ledger on a Fabric network see the same unchangeable data, enforcing responsibility and lowering the possibility of counterfeiting. This study's findings showcase the use of blockchain technology to improve the privacy of data sharing and storage in the healthcare sector.
29

Trautmann, Tina, Sujan Koirala, Nuno Carvalhais, Andreas Güntner, and Martin Jung. "The importance of vegetation in understanding terrestrial water storage variations." Hydrology and Earth System Sciences 26, no. 4 (February 24, 2022): 1089–109. http://dx.doi.org/10.5194/hess-26-1089-2022.

Full text
Abstract:
Abstract. So far, various studies have aimed at decomposing the integrated terrestrial water storage variations observed by satellite gravimetry (GRACE, GRACE-FO) with the help of large-scale hydrological models. While the results of the storage decomposition depend on model structure, little attention has been given to the impact of the way that vegetation is represented in these models. Although vegetation structure and activity represent the crucial link between water, carbon, and energy cycles, their representation in large-scale hydrological models remains a major source of uncertainty. At the same time, the increasing availability and quality of Earth-observation-based vegetation data provide valuable information with good prospects for improving model simulations and gaining better insights into the role of vegetation within the global water cycle. In this study, we use observation-based vegetation information such as vegetation indices and rooting depths for spatializing the parameters of a simple global hydrological model to define infiltration, root water uptake, and transpiration processes. The parameters are further constrained by considering observations of terrestrial water storage anomalies (TWS), soil moisture, evapotranspiration (ET) and gridded runoff (Q) estimates in a multi-criteria calibration approach. We assess the implications of including varying vegetation characteristics on the simulation results, with a particular focus on the partitioning between water storage components. To isolate the effect of vegetation, we compare a model experiment in which vegetation parameters vary in space and time to a baseline experiment in which all parameters are calibrated as static, globally uniform values. Both experiments show good overall performance, but explicitly including varying vegetation data leads to even better performance and more physically plausible parameter values. 
The largest improvements regarding TWS and ET are seen in supply-limited (semi-arid) regions and in the tropics, whereas Q simulations improve mainly in northern latitudes. While the total fluxes and storages are similar, accounting for vegetation substantially changes the contributions of different soil water storage components to the TWS variations. This suggests an important role of the representation of vegetation in hydrological models for interpreting TWS variations. Our simulations further indicate a major effect of deeper moisture storages and groundwater–soil moisture–vegetation interactions as a key to understanding TWS variations. We highlight the need for further observations to identify the adequate model structure rather than only model parameters for a reasonable representation and interpretation of vegetation–water interactions.
30

De Masi, A. "DIGITAL DOCUMENTATION’S ONTOLOGY: CONTEMPORARY DIGITAL REPRESENTATIONS AS EXPRESS AND SHARED MODELS OF REGENERATION AND RESILIENCE IN THE PLATFORM BIM/CONTAMINATED HYBRID REPRESENTATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-1-2021 (August 28, 2021): 189–97. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-1-2021-189-2021.

Full text
Abstract:
Abstract. The study illustrates a university research project of "Digital Documentation's Ontology", to be activated with other universities: a Platform (P) – Building Information Modeling (BIM) articulated on a Contaminated Hybrid Representation (a diversification of graphic models). The latter, able to provide categories of Multi-Representations that interact with each other to favour several representations adapted to different information densities in digital multi-scale production, is intended as a platform (a grid of data and information at different scales, a semantic structure from web content, a data and information storage database, archive, model, and a shared form of knowledge and ontological representation) for: inclusive digital ecosystem development; digital regenerative synergies of representation with adaptable and resilient content in hybrid or semi-hybrid Cloud environments; phenomenological reading of the changing complexity of environmental reality; a hub solution for knowledge and simulcast description of Cultural Heritage (CH) information; multimedia itineraries to enhance participatory and attractive processes for the community; and a factor of cohesion and sociality, an engine of local development. The methodology of P-BIM/CHR is articulated on the following ontologies: Interpretative and Codification, Morphology, Lexicon, Syntax, Metamorphosis, Metadata in the participatory system, Regeneration, Interaction and Sharing. Regarding results and conclusions, the study highlighted: a) digital regenerative synergies of representation; b) a Smart CH Model for an interconnection of systems and services within a complex set of relationships.
31

SOSA, ANNA VOGEL, and CAROL STOEL-GAMMON. "Patterns of intra-word phonological variability during the second year of life." Journal of Child Language 33, no. 1 (February 2006): 31–50. http://dx.doi.org/10.1017/s0305000905007166.

Full text
Abstract:
Phonological representation for adult speakers is generally assumed to include sub-lexical information at the level of the phoneme. Some have suggested, however, that young children operate with more holistic lexical representations. If young children use whole-word representation and adults employ phonemic representation, then a component of phonological development includes a transition from holistic to segmental storage of phonological information. The present study addresses the nature of this transition by investigating the prevalence and patterns of intra-word production variability during the first year of lexical acquisition (1;0–2;0). Longitudinal data from four typically developing children were analysed to determine variability at each age. Patterns of variability are discussed in relation to chronological age and productive vocabulary size. Results show high overall rates of variability, as well as a peak in variability corresponding to the onset of combinatorial speech, suggesting that phonological reorganization may commence somewhat later than previously thought.
32

Olsen, J. V., and M. Mann. "Effective Representation and Storage of Mass Spectrometry-Based Proteomic Data Sets for the Scientific Community." Science Signaling 4, no. 160 (February 8, 2011): pe7. http://dx.doi.org/10.1126/scisignal.2001839.

Full text
33

Zhang, C., Y. Peng, J. Chu, C. A. Shoemaker, and A. Zhang. "Integrated hydrological modelling of small- and medium-sized water storages with application to the upper Fengman Reservoir Basin of China." Hydrology and Earth System Sciences 16, no. 11 (November 6, 2012): 4033–47. http://dx.doi.org/10.5194/hess-16-4033-2012.

Full text
Abstract:
Abstract. Hydrological simulation in regions with a large number of water storages is difficult due to inaccurate water storage data. To address this issue, this paper presents an improved version of SWAT2005 (Soil and Water Assessment Tool, version 2005) using Landsat, a satellite-based dataset, an empirical storage classification method and some empirical relationships to estimate water storage and release from the various sizes of flow detention and regulation facilities. The SWAT2005 is enhanced by three features: (1) a realistic representation of the relationships between the surface area and volume of each type of water storages, ranging from small-sized flow detention ponds to medium- and large-sized reservoirs with the various flow regulation functions; (2) water balance and transport through a network combining both sequential and parallel streams and storage links; and (3) calibrations for both physical and human interference parameters. Through a real-world watershed case study, it is found that the improved SWAT2005 more accurately models small- and medium-sized storages than the original model in reproducing streamflows in the watershed. The improved SWAT2005 can be an effective tool to assess the impact of water storage on hydrologic processes, which has not been well addressed in the current modelling exercises.
34

Frenkel, Michael, Robert D. Chirico, Vladimir Diky, Paul L. Brown, John H. Dymond, Robert N. Goldberg, Anthony R. H. Goodwin, et al. "Extension of ThermoML: The IUPAC standard for thermodynamic data communications (IUPAC Recommendations 2011)." Pure and Applied Chemistry 83, no. 10 (September 7, 2011): 1937–69. http://dx.doi.org/10.1351/pac-rec-11-05-01.

Full text
Abstract:
ThermoML is an XML-based approach for storage and exchange of experimental, predicted, and critically evaluated thermophysical and thermochemical property data. Extensions to the ThermoML schema for the representation of speciation, complex equilibria, and properties of biomaterials are described. The texts of 14 data files illustrating the new extensions are provided as Supplementary Information together with the complete text of the updated ThermoML schema.
35

Kim, D., A. Bolat, and K. J. Li. "INDOOR SPATIAL DATA CONSTRUCTION FROM TRIANGLE MESH." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W8 (July 11, 2018): 101–8. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w8-101-2018.

Full text
Abstract:
Abstract. The 3D triangle mesh is widely used to represent indoor space. One widely used method of generating 3D triangle mesh data of indoor space is construction from point clouds collected using LIDAR. However, there are many problems in using the generated triangle mesh data as a geometric representation of indoor space. First, the number of triangles forming the mesh is very large, which creates a performance bottleneck for storage and management. Second, previous work on mesh simplification has not considered the properties of indoor space in its geometric representation. Third, there is no research on constructing standard indoor spatial data from triangle mesh data. To resolve these problems, we propose a method for generating triangular mesh data for indoor geometric representation based on the observations mentioned above. First, the method removes unnecessary objects and reduces the number of surfaces in the original fine-grained triangular mesh data using the properties of indoor space. Second, it produces indoor geometric data in IndoorGML – an OGC standard for indoor spatial data models. In experimental studies, we present a case study of indoor triangle mesh data from the real world and compare the results with the raw data.
36

Hauenstein, Jacob. "Compact Preservation of Scrambled CD-Rom Data." International Journal of Computer Science and Information Technology 14, no. 4 (August 31, 2022): 1–11. http://dx.doi.org/10.5121/ijcsit.2022.14401.

Full text
Abstract:
When preserving CD-ROM discs, data sectors are often read in a so-called “scrambled mode” in order to preserve as much data as possible. This scrambled data is later unscrambled and further processed into a standard CD-ROM disc image. The process of converting the scrambled data into a standard CD-ROM disc image is potentially lossy, but standard CD-ROM disc images exhibit much higher software compatibility and have greater usability compared to the scrambled data from which they are derived. Consequently, for preservation purposes, it is often necessary to store both the scrambled data and the corresponding standard disc image, resulting in greatly increased storage demands compared to storing just one or the other. Here, a method that enables compact storage of scrambled data alongside the corresponding (unscrambled) standard CD-ROM disc image is introduced. The method produces a compact representation of the scrambled data that is derived from the unscrambled disc image. The method allows for (1) storage of the standard unscrambled disc image in unmodified form, (2) easy reconstruction of the scrambled data as needed, and (3) a substantial space savings (in the typical case) compared to storing the scrambled data using standard data compression techniques.
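The reconstruction idea described above can be illustrated with a small sketch: CD scrambling is an XOR with a fixed pseudo-random sequence (ECMA-130 specifies an LFSR with polynomial x^15 + x + 1), so the scrambled stream can be regenerated from the unscrambled image, and only the positions where regeneration disagrees with the data actually read need to be stored. The code below is a hypothetical, simplified illustration of this principle, not the exact ECMA-130 sector layout or the paper's method; `lfsr_stream`, `scramble`, and `compact_diff` are illustrative names.

```python
def lfsr_stream(n, seed=0x0001):
    """Generate n pseudo-random bytes from a 15-bit LFSR (taps for x^15 + x + 1).

    Illustrative only: real CD scrambling applies such a sequence per
    2352-byte sector, skipping the 12-byte sync pattern.
    """
    state = seed
    out = []
    for _ in range(n):
        byte = 0
        for bit in range(8):
            byte |= (state & 1) << bit
            fb = (state ^ (state >> 1)) & 1   # feedback from the two tap bits
            state = (state >> 1) | (fb << 14)
        out.append(byte)
    return bytes(out)

def scramble(data):
    """XOR data with the LFSR stream; XOR is an involution, so the same
    function also unscrambles."""
    return bytes(b ^ s for b, s in zip(data, lfsr_stream(len(data))))

def compact_diff(read_scrambled, unscrambled_image):
    """Store only positions where re-scrambling the image disagrees with the
    data actually read from disc (e.g. uncorrectable read errors)."""
    regenerated = scramble(unscrambled_image)
    return [(i, b) for i, (b, r) in enumerate(zip(read_scrambled, regenerated))
            if b != r]
```

With an error-free disc the diff is empty, so keeping the scrambled data alongside the standard image costs almost nothing, which is the space saving the paper exploits.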
37

He, Zhenwen, Chunfeng Zhang, Xiaogang Ma, and Gang Liu. "Hexadecimal Aggregate Approximation Representation and Classification of Time Series Data." Algorithms 14, no. 12 (December 2, 2021): 353. http://dx.doi.org/10.3390/a14120353.

Full text
Abstract:
Time series data are widely found in finance, health, environmental, social, mobile and other fields. A large amount of time series data has been produced due to the general use of smartphones, various sensors, RFID and other internet devices. How a time series is represented is key to the efficient and effective storage and management of time series data, as well as being very important to time series classification. Two new time series representation methods, Hexadecimal Aggregate approXimation (HAX) and Point Aggregate approXimation (PAX), are proposed in this paper. The two methods represent each segment of a time series as a transformable interval object (TIO). Then, each TIO is mapped to a spatial point located on a two-dimensional plane. Finally, the HAX maps each point to a hexadecimal digit so that a time series is converted into a hex string. The experimental results show that HAX has higher classification accuracy than Symbolic Aggregate approXimation (SAX) but a lower one than some SAX variants (SAX-TD, SAX-BD). The HAX has the same space cost as SAX but is lower than these variants. The PAX has higher classification accuracy than HAX and is extremely close to the Euclidean distance (ED) measurement; however, the space cost of PAX is generally much lower than the space cost of ED. HAX and PAX are general representation methods that can also support geoscience time series clustering, indexing and query except for classification.
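The general idea of symbolic aggregation behind HAX can be sketched: split the normalized series into equal segments, aggregate each segment, and map each aggregate to one of the 16 hexadecimal digits. The sketch below is a simplified illustration of that idea only; the published HAX method maps each segment to a transformable interval object on a 2D plane before assigning the digit, and `hex_aggregate` is a hypothetical name.

```python
import numpy as np

def hex_aggregate(series, n_segments=8):
    """Simplified hexadecimal aggregate approximation (illustrative, not the
    published HAX algorithm): z-normalize, average each segment (PAA-style),
    and quantize the segment means into 16 equal-width bins '0'..'f'."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)        # z-normalization
    segments = np.array_split(x, n_segments)      # equal-length segments
    means = np.array([seg.mean() for seg in segments])
    # clip the means to [-2, 2] and map linearly onto 16 bins
    bins = np.clip(((means + 2.0) / 4.0 * 16).astype(int), 0, 15)
    return "".join("0123456789abcdef"[b] for b in bins)
```

Two series can then be compared cheaply by comparing their hex strings digit by digit, which is where the space/accuracy trade-off against SAX and its variants arises.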
38

He, Jie, Yun Ping Zheng, and Hui Guo. "A Novel Gray Image Representation Method Based on NAM Using Nonoverlapping Square Subpatterns." Applied Mechanics and Materials 143-144 (December 2011): 760–64. http://dx.doi.org/10.4028/www.scientific.net/amm.143-144.760.

Full text
Abstract:
In this paper, we propose a novel gray image representation method based on the non-symmetry and anti-packing model (NAM) by using the nonoverlapping square subpatterns, which is called the square NAM for gray images (SNAMG) representation method. Also, a SNAMG representation algorithm is put forward and the storage structures, the total data amount, and the time complexity of the proposed algorithm are analyzed in detail. By taking some standard gray images, such as ‘F16’ and ‘Peppers’, as the typical test objects, and comparing the proposed algorithm with those of the triangle NAM for gray images (TNAMG) and the classic linear quadtree (LQT), the theoretical and experimental results show that the former is obviously superior to the latter with respect to the numbers of subpatterns (nodes) and the data storage, and therefore it is a better method to represent the gray image pattern.
39

Zheng, Lei, Zhiyuan Feng, and Kan Wang. "ON-THE-FLY INTERPOLATION OF CONTINUOUS TEMPERATURE-DEPENDENT THERMAL NEUTRON SCATTERING DATA IN RMC CODE." EPJ Web of Conferences 247 (2021): 09012. http://dx.doi.org/10.1051/epjconf/202124709012.

Full text
Abstract:
Thermal neutron scattering data have an important influence on the high-fidelity neutronics calculation of thermal reactors. Due to the limited storage capabilities of computers, a discrete ACE representation of the secondary neutron energy and angular distribution has been used for Monte Carlo calculation since the early 1980s. The use of this discrete representation does not produce noticeable effects in integral calculations such as keff eigenvalues, but can produce noticeable deficiencies in differential calculations. A new continuous representation of the thermal neutron scattering data was created in 2006 but was not widely known. Recently, the continuous representation of the thermal neutron scattering ACE data based on the ENDF/B-VIII.0 library was officially released and made available to all users. The new representation shows great differences compared with the discrete one. In order to utilize this more physical and rigorous representation for high-fidelity neutronic-thermohydraulic coupling calculations, an on-the-fly treatment capability was proposed and implemented in the RMC code. The two-dimensional linear-linear interpolation method is used to calculate the inelastic scattering cross sections and the secondary neutron energies and angles. The on-the-fly treatment capability was tested on a pressurized water reactor assembly. Results show that the on-the-fly treatment has high accuracy and can be used to account for temperature feedback in neutronic-thermohydraulic coupling calculations. However, the efficiency of the on-the-fly treatment still needs to be improved in the near future.
40

Zhang, C., Y. Peng, J. Chu, and C. A. Shoemaker. "Integrated hydrological modelling of small- and medium-sized water storages with application to the upper Fengman Reservoir Basin of China." Hydrology and Earth System Sciences Discussions 9, no. 3 (March 28, 2012): 4001–43. http://dx.doi.org/10.5194/hessd-9-4001-2012.

Full text
Abstract:
Abstract. Hydrological simulation in regions with a large number of water storages is difficult due to the inaccurate water storage data, including both topologic parameters and operational rules. To address this issue, this paper presents an improved version of SWAT2005 (Soil and Water Assessment Tool, version 2005) using the satellite-based dataset Landsat, an empirical storage classification method, and some empirical relationships to estimate water storage and release from the various levels of flow regulation facilities. The improved SWAT2005 is characterised by three features: (1) a realistic representation of the relationships between the water surface area and volume of each type of water storage, ranging from small-sized ponds for water flow regulation to large-sized and medium-sized reservoirs for water supply and hydropower generation; (2) water balance and transport through a network combining both sequential and parallel streams and storage links; and (3) calibrations for the physical parameters and the human interference parameters. Both the original and improved SWAT2005 are applied to the upper Fengman Reservoir Basin, and the results of these applications are compared. The improved SWAT2005 accurately models small- and medium-sized storages, indicating a significantly improved performance from that of the original model in reproducing streamflows.
41

Li, Fengying, Enyi Yang, Anqiao Ma, and Rongsheng Dong. "Optimal Representation of Large-Scale Graph Data Based on Grid Clustering and K2-Tree." Mathematical Problems in Engineering 2020 (January 22, 2020): 1–8. http://dx.doi.org/10.1155/2020/2354875.

Full text
Abstract:
The application of appropriate graph data compression technology to store and manipulate graph data with tens of thousands of nodes and edges is a prerequisite for analyzing large-scale graph data. The traditional K2-tree representation scheme mechanically partitions the adjacency matrix, which causes the dense interval to be split, resulting in additional storage overhead. As the size of the graph data increases, the query time of K2-tree continues to increase. In view of the above problems, we propose a compact representation scheme for graph data based on grid clustering and K2-tree. Firstly, we divide the adjacency matrix into several grids of the same size. Then, we continuously filter and merge these grids until grid density satisfies the given density threshold. Finally, for each large grid that meets the density, K2-tree compact representation is performed. On this basis, we further give the relevant node neighbor query algorithm. The experimental results show that compared with the current best K2-BDC algorithm, our scheme can achieve better time/space tradeoff.
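The grid-partitioning step described above can be sketched as follows: divide the adjacency matrix into fixed-size grids, compute each grid's edge density, and keep only the grids that meet the density threshold for subsequent K2-tree encoding. This is a minimal sketch of that first step only, under assumed names; the paper's grid merging and the K2-tree compression itself are omitted.

```python
import numpy as np

def dense_grids(adj, grid_size, density_threshold):
    """Partition a square 0/1 adjacency matrix into grid_size x grid_size
    blocks and return the (row, col) block indices whose edge density
    (fraction of 1-entries) meets the threshold."""
    n = adj.shape[0]
    dense = []
    for i in range(0, n, grid_size):
        for j in range(0, n, grid_size):
            block = adj[i:i + grid_size, j:j + grid_size]
            if block.mean() >= density_threshold:
                dense.append((i // grid_size, j // grid_size))
    return dense
```

Encoding only the surviving dense blocks avoids the problem the abstract identifies, where a mechanical partition splits dense intervals and inflates storage.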
42

Buhmann, Joachim, and Hans Kühnel. "Complexity Optimized Data Clustering by Competitive Neural Networks." Neural Computation 5, no. 1 (January 1993): 75–88. http://dx.doi.org/10.1162/neco.1993.5.1.75.

Full text
Abstract:
Data clustering is a complex optimization problem with applications ranging from vision and speech processing to data transmission and data storage in technical as well as in biological systems. We discuss a clustering strategy that explicitly reflects the tradeoff between simplicity and precision of a data representation. The resulting clustering algorithm jointly optimizes distortion errors and complexity costs. A maximum entropy estimation of the clustering cost function yields an optimal number of clusters, their positions, and their cluster probabilities. Our approach establishes a unifying framework for different clustering methods like K-means clustering, fuzzy clustering, entropy constrained vector quantization, or topological feature maps and competitive neural networks.
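The maximum-entropy view of clustering can be illustrated with a "soft" K-means sketch: assignment probabilities are Boltzmann weights of the squared distortion at an inverse temperature beta, and as beta grows the assignments harden into ordinary K-means. This is a hypothetical minimal variant for illustration, not the paper's full complexity-cost formulation (which also optimizes the number of clusters); `soft_kmeans` is an assumed name.

```python
import numpy as np

def soft_kmeans(X, k, beta, iters=50, seed=0):
    """Maximum-entropy ('soft') clustering sketch: each point is assigned to
    every center with a Boltzmann probability exp(-beta * distortion), then
    centers are updated as probability-weighted means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # squared distances, shape (n_points, k)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # subtract the row minimum for numerical stability before exponentiating
        w = np.exp(-beta * (d2 - d2.min(1, keepdims=True)))
        p = w / w.sum(1, keepdims=True)           # cluster probabilities
        centers = (p[:, :, None] * X[:, None, :]).sum(0) / p.sum(0)[:, None]
    return centers, p
```

Lowering beta smooths the assignments (fuzzy-clustering-like behaviour), which is the sense in which the maximum-entropy framework unifies the methods listed in the abstract.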
43

Bai, Jian Jun, and Chao De Yan. "Compression of Multi-Resolution Terrain Data Based on Binary Tree." Applied Mechanics and Materials 220-223 (November 2012): 2628–34. http://dx.doi.org/10.4028/www.scientific.net/amm.220-223.2628.

Full text
Abstract:
In order to overcome the discontinuity and redundancy of data storage caused by grids in representing multi-resolution terrain, an adaptive hierarchical triangulation based on a binary tree is used to represent the multi-resolution terrain model. It realizes the representation of terrain data at different levels of detail and enables data compression by local omission of data values. The structure of the binary tree is stored using a bit code of the underlying tree, while the height data are stored using an array and relative pointers that allow a selective tree traversal. This method ensures continuity and reduces the data volume in storing a multi-resolution digital elevation model (DEM), and it is possible to work directly on the compressed data. We show that significant compression rates can be obtained even for small threshold values, and in a visualization application it is possible to extract and draw triangulations at interactive rates.
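The storage scheme described (a bit code for the tree topology plus a flat array of retained height values) can be illustrated with a toy encoder: a preorder traversal emits one bit per node (1 = refined interior node, 0 = leaf), and a height value is appended only for leaves. This is a minimal sketch under an assumed tree layout, not the paper's exact DEM structure.

```python
def encode(node, bits, heights):
    """Preorder-encode a binary refinement tree: bit 1 = interior node,
    bit 0 = leaf carrying one height value. Nodes are (left, right) tuples
    or float leaves (illustrative structure)."""
    if isinstance(node, tuple):
        bits.append(1)
        encode(node[0], bits, heights)
        encode(node[1], bits, heights)
    else:
        bits.append(0)
        heights.append(node)
    return bits, heights

def decode(bits, heights):
    """Rebuild the tree from the bit code and the height array."""
    it_b, it_h = iter(bits), iter(heights)
    def build():
        if next(it_b):
            left = build()
            right = build()
            return (left, right)
        return next(it_h)
    return build()
```

The bit code costs one bit per node, and heights omitted by the error-threshold test are simply absent from the array, which is where the compression comes from.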
44

Tucker-Drob, Elliot M., and Timothy A. Salthouse. "METHODS AND MEASURES: Confirmatory Factor Analysis and Multidimensional Scaling for Construct Validation of Cognitive Abilities." International Journal of Behavioral Development 33, no. 3 (February 25, 2009): 277–85. http://dx.doi.org/10.1177/0165025409104489.

Full text
Abstract:
Although factor analysis is the most commonly-used method for examining the structure of cognitive variable interrelations, multidimensional scaling (MDS) can provide visual representations highlighting the continuous nature of interrelations among variables. Using data ( N = 8,813; ages 17—97 years) aggregated across 38 separate studies, MDS was applied to 16 cognitive variables representative of five well-established cognitive abilities. Parallel to confirmatory factor analytic solutions, and consistent with past MDS applications, the results for young (18—39 years), middle (40—65 years), and old (66—97 years) adult age groups consistently revealed a two-dimensional radex disk, with variables from fluid reasoning tests located at the center. Using a new method, target measures hypothesized to reflect three aspects of cognitive control ( updating, storage-plus-processing, and executive functioning) were projected onto the radex disk. Parallel to factor analytic results, these variables were also found to be centrally located in the cognitive ability space. The advantages and limitations of the radex representation are discussed.
45

Meng , Kaitao, Deshi Li , Xiaofan He , Mingliu Liu , and Weitao Song . "Real-Time Compact Environment Representation for UAV Navigation." Sensors 20, no. 17 (September 2, 2020): 4976. http://dx.doi.org/10.3390/s20174976.

Full text
Abstract:
Recently, unmanned aerial vehicles (UAVs) have attracted much attention due to their on-demand deployment, high mobility, and low cost. For UAVs navigating in an unknown environment, efficient environment representation is needed due to the storage limitation of the UAVs. Nonetheless, building an accurate and compact environment representation model is highly non-trivial because of the unknown shape of the obstacles and the time-consuming operations such as finding and eliminating the environmental details. To overcome these challenges, a novel vertical strip extraction algorithm is proposed to analyze the probability density function characteristics of the normalized disparity value and segment the obstacles through an adaptive size sliding window. In addition, a plane adjustment algorithm is proposed to represent the obstacle surfaces as polygonal prism profiles while minimizing the redundant obstacle information. By combining these two proposed algorithms, the depth sensor data can be converted into the multi-layer polygonal prism models in real time. Besides, a drone platform equipped with a depth sensor is developed to build the compact environment representation models in the real world. Experimental results demonstrate that the proposed scheme achieves better performance in terms of precision and storage as compared to the baseline.
46

Fenicia, F., H. H. G. Savenije, P. Matgen, and L. Pfister. "Is the groundwater reservoir linear? Learning from data in hydrological modelling." Hydrology and Earth System Sciences Discussions 2, no. 4 (August 30, 2005): 1717–55. http://dx.doi.org/10.5194/hessd-2-1717-2005.

Full text
Abstract:
Abstract. Although catchment behaviour during recession periods appears to be better identifiable than in other periods, the representation of hydrograph recession is often weak in hydrological simulations. The reason lies in the various sources of uncertainty that affect hydrological simulations, and in particular in the inherent uncertainty concerning model conceptualizations when they are based on an a-priori representation of the natural system. When flawed conceptualizations combine with calibration strategies that favour an accurate representation of peak flows, model structural inadequacies manifest themselves in a biased representation of other aspects of the simulation, such as flow recession and low flows. In this paper we try to reach good model performance in low flow simulation and make use of a flexible model structure that can adapt to match the observed discharge behaviour during recession periods. Moreover, we adopt a step-wise calibration procedure where we try to avoid that the simulation of low flows is neglected in favour of other hydrograph characteristics. The model used is designed to reproduce specific hydrograph characteristics and is composed of four reservoirs: an interception reservoir, an unsaturated soil reservoir, a fast reacting reservoir, and a slow reacting reservoir. The slow reacting reservoir conceptualises the processes that lead to the generation of the slow hydrograph component, and is characterized by a storage-discharge relation that is not determined a-priori, but is derived from the observations following a "top-down" approach. The procedure used to determine this relation starts by calculating a synthetic master recession curve that represents the long-term recession of the catchment. Next, a calibration procedure follows to force the outflow from the slow reacting reservoir to match the master recession curve.
Low flows and high flows related parameters are calibrated in separate stages because we consider them to be related to different processes, which can be identified separately. This way we avoid that the simulation of low discharges is neglected in favour of a higher performance in simulating peak discharges. We have applied this analysis to several catchments in Luxembourg, and in each case we have determined which form (linear or non linear) of the storage-discharge relationship best describes the slow reacting reservoir. We conclude that in all catchments except one (where human interference is high) a linear relation applies.
47

Hartmann, Nikolai, Johannes Elmsheuser, and Günter Duckeck. "Columnar data analysis with ATLAS analysis formats." EPJ Web of Conferences 251 (2021): 03001. http://dx.doi.org/10.1051/epjconf/202125103001.

Full text
Abstract:
Future analysis of ATLAS data will involve new small-sized analysis formats to cope with the increased storage needs. The smallest of these, named DAOD_PHYSLITE, has calibrations already applied to allow fast downstream analysis and avoid the need for further analysis-specific intermediate formats. This allows for application of the “columnar analysis” paradigm where operations are applied on a per-array instead of a per-event basis. We will present methods to read the data into memory, using Uproot, and also discuss I/O aspects of columnar data and alternatives to the ROOT data format. Furthermore, we will show a representation of the event data model using the Awkward Array package and present proof of concept for a simple analysis application.
48

Rugova, Dr Sc Ermir. "The use of intuitionistic fuzzy cube and operators in treating imprecision in data repositories." ILIRIA International Review 1, no. 1 (June 30, 2011): 117. http://dx.doi.org/10.21113/iir.v1i1.203.

Full text
Abstract:
Traditional data repositories introduced for the needs of business processing typically focus on the storage and querying of crisp domains of data. As a result, current commercial data repositories have no facilities for either storing or querying imprecise/approximate data. No significant attempt has been made for a generic and application-independent representation of value imprecision, mainly as a property of axes of analysis and also as part of a dynamic environment, where potential users may wish to define their "own" axes of analysis for querying either precise or imprecise facts. In such cases, measured values and facts are characterised by descriptive values drawn from a number of dimensions, whereas values of a dimension are organised as hierarchical levels. In this paper, an extended multidimensional model named IF-Cube is put forward, which allows the representation of imprecision in facts and dimensions and the answering of queries based on imprecise hierarchical preferences.
49

Jackson, T. R., W. Cho, N. M. Patrikalakis, and E. M. Sachs. "Memory Analysis of Solid Model Representations for Heterogeneous Objects." Journal of Computing and Information Science in Engineering 2, no. 1 (March 1, 2002): 1–10. http://dx.doi.org/10.1115/1.1476380.

Full text
Abstract:
Methods to represent and exchange parts consisting of Functionally Graded Material (FGM) for Solid Freeform Fabrication (SFF) with Local Composition Control (LCC) are evaluated based on their memory requirements. Data structures for representing FGM objects as heterogeneous models are described and analyzed, including a voxel-based structure, finite-element mesh-based approach, and the extension of the Radial-Edge and Cell-Tuple-Graph data structures with Material Domains representing spatially varying composition properties. The storage cost for each data structure is derived in terms of the number of instances of each of its fundamental classes required to represent an FGM object. In order to determine the optimal data structure, the storage cost associated with each data structure is calculated for several hypothetical models. Limitations of these representation schemes are discussed and directions for future research also recommended.
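The paper's storage-cost analysis expresses each representation's memory as (instances of each fundamental class) x (bytes per instance). A back-of-the-envelope sketch in that spirit follows; the per-instance sizes and object dimensions are illustrative assumptions, not figures from the paper.

```python
# Storage cost as (instances per class) x (bytes per instance),
# for two of the representations the paper compares.

def voxel_cost(nx, ny, nz, bytes_per_voxel=8):
    """Uniform voxel grid: one composition record per voxel."""
    return nx * ny * nz * bytes_per_voxel

def mesh_cost(n_nodes, n_elements, bytes_per_node=24, bytes_per_element=40):
    """FE-mesh-based model: coordinates per node, connectivity plus a
    material blend per element."""
    return n_nodes * bytes_per_node + n_elements * bytes_per_element

# A 100 mm cube sampled at 0.1 mm resolution vs a coarse graded mesh.
v = voxel_cost(1000, 1000, 1000)                  # resolution dominates
m = mesh_cost(n_nodes=50_000, n_elements=250_000)
print(f"voxel: {v/1e9:.1f} GB, mesh: {m/1e6:.1f} MB")
```

The voxel grid's cost scales with the cube of resolution regardless of how simple the composition gradient is, which is exactly the kind of trade-off the instance-counting analysis makes explicit.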
APA, Harvard, Vancouver, ISO, and other styles
50

Hulsman, Petra, Hubert H. G. Savenije, and Markus Hrachowitz. "Learning from satellite observations: increased understanding of catchment processes through stepwise model improvement." Hydrology and Earth System Sciences 25, no. 2 (February 24, 2021): 957–82. http://dx.doi.org/10.5194/hess-25-957-2021.

Full text
Abstract:
Abstract. Satellite observations can provide valuable information for a better understanding of hydrological processes and thus serve as useful tools for model structure development and improvement. While model calibration and evaluation have in recent years started to make increasing use of spatial, mostly remotely sensed information, model structure development still relies largely on discharge observations at basin outlets only. Due to the ill-posed inverse nature and the related equifinality issues in the modelling process, this frequently results in poor representations of the spatio-temporal heterogeneity of system-internal processes, in particular for large river basins. The objective of this study is thus to explore the value of remotely sensed, gridded data to improve our understanding of the processes underlying this heterogeneity and, as a consequence, their quantitative representation in models through a stepwise adaptation of model structures and parameters. For this purpose, a distributed, process-based hydrological model was developed for the study region, the poorly gauged Luangwa River basin. As a first step, this benchmark model was calibrated to discharge data only and, in a post-calibration evaluation procedure, tested for its ability to simultaneously reproduce (1) the basin-average temporal dynamics of remotely sensed evaporation and total water storage anomalies and (2) their temporally averaged spatial patterns. This allowed for the diagnosis of model structural deficiencies in reproducing these temporal dynamics and spatial patterns. Subsequently, the model structure was adapted in a stepwise procedure, testing five additional alternative process hypotheses that could potentially better describe the observed dynamics and patterns.
These included, on the one hand, the addition and testing of alternative formulations of groundwater upwelling into wetlands as a function of the water storage and, on the other hand, alternative spatial discretizations of the groundwater reservoir. Similar to the benchmark, each alternative model hypothesis was, in a next step, calibrated to discharge only and tested against its ability to reproduce the observed spatio-temporal pattern in evaporation and water storage anomalies. In a final step, all models were re-calibrated to discharge, evaporation and water storage anomalies simultaneously. The results indicated that (1) the benchmark model (Model A) could reproduce the time series of observed discharge, basin-average evaporation and total water storage reasonably well. In contrast, it poorly represented time series of evaporation in wetland-dominated areas as well as the spatial pattern of evaporation and total water storage. (2) Stepwise adjustment of the model structure (Models B–F) suggested that Model F, allowing for upwelling groundwater from a distributed representation of the groundwater reservoir and (3) simultaneously calibrating the model with respect to multiple variables, i.e. discharge, evaporation and total water storage anomalies, provided the best representation of all these variables with respect to their temporal dynamics and spatial patterns, except for the basin-average temporal dynamics in the total water storage anomalies. It was shown that satellite-based evaporation and total water storage anomaly data are not only valuable for multi-criteria calibration, but can also play an important role in improving our understanding of hydrological processes through the diagnosis of model deficiencies and stepwise model structural improvement.
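The multi-criteria calibration idea in this abstract can be sketched as scoring a parameter set against several observed variables at once and combining the scores into one objective. The toy data, the Nash-Sutcliffe efficiency metric, and the equal-weight Euclidean combination below are illustrative assumptions, not the study's actual model or objective function.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than
    simply predicting the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(sims, obss):
    """Euclidean distance to the ideal point (NSE = 1 for every variable);
    lower is better."""
    return float(np.sqrt(sum((1.0 - nse(s, o)) ** 2 for s, o in zip(sims, obss))))

obs_q = [1.0, 3.0, 2.0, 5.0]    # discharge
obs_e = [0.5, 0.7, 0.6, 0.9]    # evaporation
obs_s = [10.0, 12.0, 11.0, 14.0]  # total water storage anomaly

# Toy simulations: "Model A" fits discharge well but misses the other two
# variables; "Model F" fits all three (made-up numbers for illustration).
sim_a = ([1.1, 2.9, 2.2, 4.8], [0.9, 0.4, 0.2, 0.5], [13.0, 9.0, 15.0, 10.0])
sim_f = ([1.1, 2.8, 2.1, 4.9], [0.55, 0.65, 0.62, 0.85], [10.5, 11.8, 11.2, 13.6])

for name, sims in [("A", sim_a), ("F", sim_f)]:
    print(name, round(combined_objective(sims, (obs_q, obs_e, obs_s)), 3))
```

Calibrating to discharge alone would rank both toy models as nearly equal; the combined objective is what exposes Model A's poor evaporation and storage behaviour, mirroring the diagnostic role the satellite data play in the study.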
APA, Harvard, Vancouver, ISO, and other styles