To see the other types of publications on this topic, follow the link: Data Storage Representations.

Journal articles on the topic "Data Storage Representations"



Consult the top 50 journal articles for your research on the topic "Data Storage Representations".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Gutsche, Oliver, and Igor Mandrichenko. "Striped Data Analysis Framework." EPJ Web of Conferences 245 (2020): 06042. http://dx.doi.org/10.1051/epjconf/202024506042.

Abstract:
A columnar data representation is known to be an efficient way to store data, specifically in cases when the analysis is often based on only a small fragment of the available data structures. A data representation like Apache Parquet is a step forward from a columnar representation: it splits data horizontally to allow for easy parallelization of data analysis. Based on the general idea of columnar data storage, working on the [LDRD Project], we have developed a striped data representation which, we believe, is better suited to the needs of High Energy Physics data analysis. A traditional columnar approach allows for efficient data analysis of complex structures. While keeping all the benefits of columnar data representations, the striped mechanism goes further by enabling easy parallelization of computations without requiring special hardware. We will present an implementation and some performance characteristics of such a data representation mechanism using a distributed NoSQL database or a local file system, unified under the same API and data representation model. The representation is efficient and at the same time simple, so that it allows for a common data model and APIs for a wide range of underlying storage mechanisms such as distributed NoSQL databases and local file systems. Striped storage adopts NumPy arrays as its basic data representation format, which makes it easy and efficient to use in Python applications. The Striped Data Server is a web service which hides the server implementation details from the end user, easily exposes data to WAN users, and allows well-known, mature data caching solutions to be used to further increase data access efficiency. We are considering the Striped Data Server as the core of an enterprise-scale data analysis platform for High Energy Physics and similar areas of data processing. We have been testing this architecture with a 2 TB dataset from a CMS dark matter search and plan to expand it to multiple 100 TB or even PB scales. We will present the striped format, the Striped Data Server architecture, and performance test results.
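To make the columnar-versus-striped idea concrete, here is a minimal NumPy sketch (the names and the stripe size are illustrative assumptions, not the Striped framework's actual API): each column of an event table is stored separately and cut into fixed-size stripes, so an analysis that touches one column reads only that column's stripes, and each stripe can be handed to a different worker.

```python
import numpy as np

STRIPE_SIZE = 4  # rows per stripe; real deployments use far larger stripes

def stripe_column(column, stripe_size=STRIPE_SIZE):
    """Split one column into a list of stripe arrays."""
    return [column[i:i + stripe_size]
            for i in range(0, len(column), stripe_size)]

# A toy "event table" held column-wise rather than row-wise.
events = {
    "pt":  np.array([12.1, 54.3, 33.2, 8.9, 71.0, 22.4, 18.6, 40.2]),
    "eta": np.array([0.1, -1.2, 2.3, 0.4, -0.7, 1.9, -2.1, 0.0]),
}
stripes = {name: stripe_column(col) for name, col in events.items()}

# An analysis touching only "pt" reads only that column's stripes,
# and each stripe could be handed to a different worker.
partial_sums = [s.sum() for s in stripes["pt"]]
print(sum(partial_sums))
```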
2

Lee, Sang Hun, and Kunwoo Lee. "Partial Entity Structure: A Compact Boundary Representation for Non-Manifold Geometric Modeling." Journal of Computing and Information Science in Engineering 1, no. 4 (November 1, 2001): 356–65. http://dx.doi.org/10.1115/1.1433486.

Abstract:
Non-manifold boundary representations have become very popular in recent years and various representation schemes have been proposed, as they represent a wider range of objects, for various applications, than conventional manifold representations. As these schemes mainly focus on describing sufficient adjacency relationships of topological entities, the models represented in these schemes occupy storage space redundantly, although they are very efficient in answering queries on topological adjacency relationships. To solve this problem, in this paper, we propose a compact as well as fast non-manifold boundary representation, called the partial entity structure. This representation reduces the storage size to half that of the radial edge structure, which is one of the most popular and efficient of existing data structures, while allowing full topological adjacency relationships to be derived without loss of efficiency. In order to verify the time and storage efficiency of the partial entity structure, the time complexity of basic query procedures and the storage requirement for typical geometric models are derived and compared with those of existing schemes.
3

Gertsiy, O. "COMPARATIVE ANALYSIS OF COMPACT METHODS REPRESENTATIONS OF GRAPHIC INFORMATION." Collection of scientific works of the State University of Infrastructure and Technologies series "Transport Systems and Technologies" 1, no. 37 (June 29, 2021): 130–43. http://dx.doi.org/10.32703/2617-9040-2021-37-13.

Abstract:
The main characteristics of lossy and lossless graphic information compression methods (RLE, LZW, Huffman's method, DEFLATE, JBIG, JPEG, JPEG 2000, Lossless JPEG, fractal, and wavelet) are analyzed in the article. Effective transmission and storage of images in railway communication systems is now an important task, because large images require large storage resources. This task has become very important in recent years, as the problems of information transmission over the telecommunication channels of the transport infrastructure have become urgent. There is also a great need for video conferencing, where the task is to compress video data effectively, because the greater the amount of data, the greater the cost of transmitting the information. Therefore, the use of image compression methods that reduce the file size is the solution to this task. The study highlights the advantages and disadvantages of the compression methods. A comparative analysis of the basic capabilities of graphic information compression methods is carried out. The relevance lies in the efficient transfer and storage of graphic information, as big data requires large resources for storage. The practical significance lies in solving the problem of effectively reducing the data size by applying known compression methods.
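As a concrete illustration of the simplest lossless method in this comparison, here is a toy run-length encoding (RLE) round trip in Python; it is a sketch of the general idea, not the article's implementation.

```python
def rle_encode(data: bytes):
    """Encode a byte string as (count, value) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1][0] += 1
        else:
            runs.append([1, b])
    return [tuple(r) for r in runs]

def rle_decode(runs) -> bytes:
    return b"".join(bytes([value]) * count for count, value in runs)

# A scanline of a bitonal image compresses from 40 bytes to 3 pairs.
row = b"\x00" * 20 + b"\xff" * 12 + b"\x00" * 8
runs = rle_encode(row)
assert rle_decode(runs) == row
print(runs)  # [(20, 0), (12, 255), (8, 0)]
```

RLE pays off exactly on the long constant runs typical of bitonal scans, which is why it underlies formats such as JBIG for document imagery; on noisy photographic data the run list can exceed the input, which is where the transform-based methods in the comparison take over.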
4

Jackson, T. R., W. Cho, N. M. Patrikalakis, and E. M. Sachs. "Memory Analysis of Solid Model Representations for Heterogeneous Objects." Journal of Computing and Information Science in Engineering 2, no. 1 (March 1, 2002): 1–10. http://dx.doi.org/10.1115/1.1476380.

Abstract:
Methods to represent and exchange parts consisting of Functionally Graded Material (FGM) for Solid Freeform Fabrication (SFF) with Local Composition Control (LCC) are evaluated based on their memory requirements. Data structures for representing FGM objects as heterogeneous models are described and analyzed, including a voxel-based structure, a finite-element mesh-based approach, and the extension of the Radial-Edge and Cell-Tuple-Graph data structures with Material Domains representing spatially varying composition properties. The storage cost of each data structure is derived in terms of the number of instances of each of its fundamental classes required to represent an FGM object. In order to determine the optimal data structure, the storage cost associated with each data structure is calculated for several hypothetical models. Limitations of these representation schemes are discussed and directions for future research are also recommended.
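The flavour of such a memory analysis can be reproduced with back-of-the-envelope arithmetic. The sketch below compares a dense voxel grid against a boundary mesh; the grid size, vertex counts, and byte costs are illustrative assumptions, not figures from the paper.

```python
def voxel_cost(nx, ny, nz, bytes_per_voxel):
    # Dense voxel model: every cell stores a composition sample.
    return nx * ny * nz * bytes_per_voxel

def mesh_cost(n_vertices, n_faces, bytes_per_vertex=3 * 8, bytes_per_face=3 * 4):
    # Triangle mesh: 3 doubles per vertex, 3 int32 vertex indices per face.
    return n_vertices * bytes_per_vertex + n_faces * bytes_per_face

# A 256^3 grid with 4 bytes of material data per voxel ...
print(voxel_cost(256, 256, 256, 4) / 2**20, "MiB")  # 64.0 MiB
# ... versus a 100k-vertex boundary mesh of the same part.
print(mesh_cost(100_000, 200_000) / 2**20, "MiB")   # ~4.6 MiB
```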
5

Frenkel, Michael, Robert D. Chiroco, Vladimir Diky, Qian Dong, Kenneth N. Marsh, John H. Dymond, William A. Wakeham, Stephen E. Stein, Erich Königsberger, and Anthony R. H. Goodwin. "XML-based IUPAC standard for experimental, predicted, and critically evaluated thermodynamic property data storage and capture (ThermoML) (IUPAC Recommendations 2006)." Pure and Applied Chemistry 78, no. 3 (January 1, 2006): 541–612. http://dx.doi.org/10.1351/pac200678030541.

Abstract:
ThermoML is a new Extensible Markup Language (XML)-based IUPAC standard for storage and exchange of experimental, predicted, and critically evaluated thermophysical and thermochemical property data. The basic principles, scope, and description of all structural elements of ThermoML are discussed. ThermoML covers essentially all thermodynamic and transport property data (more than 120 properties) for pure compounds, multicomponent mixtures, and chemical reactions (including change-of-state and equilibrium reactions). Representations of all quantities related to the expression of uncertainty in ThermoML conform to the Guide to the Expression of Uncertainty in Measurement (GUM). The ThermoMLEquation schema for the representation of fitted equations with ThermoML is also described and provided as supporting information, together with specific formulations for several equations commonly used in the representation of thermodynamic and thermophysical properties. The role of ThermoML in global data communication processes is discussed. The text of a variety of data files (use cases) illustrating the ThermoML format for pure compounds, mixtures, and chemical reactions, as well as the complete ThermoML schema text, is provided as supporting information.
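For readers unfamiliar with XML-based property storage, the sketch below builds a ThermoML-flavoured record with Python's standard library; the element names are simplified placeholders chosen for illustration and do not follow the actual ThermoML schema.

```python
import xml.etree.ElementTree as ET

record = ET.Element("PropertyRecord")
ET.SubElement(record, "Compound").text = "benzene"
prop = ET.SubElement(record, "Property", name="density", units="kg/m3")
ET.SubElement(prop, "Value").text = "876.5"
# GUM-style expanded uncertainty attached to the measured value.
ET.SubElement(prop, "ExpandedUncertainty", coverageFactor="2").text = "0.4"
ET.SubElement(record, "Temperature", units="K").text = "298.15"

print(ET.tostring(record, encoding="unicode"))
```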
6

ALHONIEMI, ESA. "Simplified time series representations for efficient analysis of industrial process data." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 17, no. 2 (May 2003): 103–14. http://dx.doi.org/10.1017/s0890060403172010.

Abstract:
The data storage capacities of modern process automation systems have grown rapidly. Nowadays, the systems are able to frequently carry out even hundreds of measurements in parallel and store them in databases. However, these data are still rarely used in the analysis of processes. In this article, preparation of the raw data for further analysis is considered using feature extraction from signals by piecewise linear modeling. Prior to modeling, a preprocessing phase that removes some artifacts from the data is suggested. Because optimal models are computationally infeasible, fast heuristic algorithms must be utilized. Outlines for the optimal and some fast heuristic algorithms with modifications required by the preprocessing are given. In order to illustrate utilization of the features, a process diagnostics framework is presented. Among a large number of signals, the procedure finds the ones that best explain the observed short-term fluctuations in one signal. In the experiments, the piecewise linear modeling algorithms are compared using a massive data set from an operational paper machine. The use of piecewise linear representations in the analysis of changes in one real process measurement signal is demonstrated.
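A minimal version of feature extraction by piecewise linear modeling can be written in a few lines: grow a window until a least-squares line no longer fits within a tolerance, then start a new segment. This greedy sliding-window heuristic is a generic stand-in sketched here for illustration, not one of the article's algorithms.

```python
import numpy as np

def piecewise_linear(signal, max_error=0.5):
    """Greedy segmentation; returns (start, end, slope, intercept) tuples."""
    segments, start, n = [], 0, len(signal)
    while start < n - 1:
        end = start + 2
        while end <= n:  # grow while the least-squares line still fits
            x = np.arange(start, end)
            slope, intercept = np.polyfit(x, signal[start:end], 1)
            if np.max(np.abs(signal[start:end] - (slope * x + intercept))) > max_error:
                end -= 1
                break
            end += 1
        end = max(min(end, n), start + 2)  # close the current segment
        x = np.arange(start, end)
        slope, intercept = np.polyfit(x, signal[start:end], 1)
        segments.append((start, end, slope, intercept))
        start = end
    return segments

t = np.linspace(0.0, 10.0, 200)
noisy = np.where(t < 5, t, 10 - t) + 0.05 * np.random.randn(t.size)
print(len(piecewise_linear(noisy)))  # a handful of segments for a V-shape
```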
7

Hernaez, Mikel, Dmitri Pavlichin, Tsachy Weissman, and Idoia Ochoa. "Genomic Data Compression." Annual Review of Biomedical Data Science 2, no. 1 (July 20, 2019): 19–37. http://dx.doi.org/10.1146/annurev-biodatasci-072018-021229.

Abstract:
Recently, there has been growing interest in genome sequencing, driven by advances in sequencing technology, in terms of both efficiency and affordability. These developments have allowed many to envision whole-genome sequencing as an invaluable tool for both personalized medical care and public health. As a result, increasingly large and ubiquitous genomic data sets are being generated. This poses a significant challenge for the storage and transmission of these data. Already, it is more expensive to store genomic data for a decade than it is to obtain the data in the first place. This situation calls for efficient representations of genomic information. In this review, we emphasize the need for designing specialized compressors tailored to genomic data and describe the main solutions already proposed. We also give general guidelines for storing these data and conclude with our thoughts on the future of genomic formats and compressors.
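One reason specialized genomic compressors win is that the nucleotide alphabet has only four symbols. The sketch below shows the baseline trick of packing A/C/G/T into 2 bits each, a 4x saving over one-byte-per-base text before any entropy coding; real tools layer entropy coding on top and also handle quality scores, read identifiers, and ambiguity codes.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):      # 4 bases per byte
        byte = 0
        for j, b in enumerate(seq[i:i + 4]):
            byte |= CODE[b] << (2 * j)
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n_bases: int) -> str:
    bases = []
    for byte in data:
        for j in range(4):
            bases.append(BASE[(byte >> (2 * j)) & 0b11])
    return "".join(bases[:n_bases])

seq = "ACGTACGTGGATTACA"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(len(seq), "bases ->", len(packed), "bytes")
```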
8

Rachkovskij, Dmitri A., and Ernst M. Kussul. "Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning." Neural Computation 13, no. 2 (February 2001): 411–52. http://dx.doi.org/10.1162/089976601300014592.

Abstract:
Distributed representations have often been criticized as inappropriate for encoding data with a complex structure. However, Plate's holographic reduced representations and Kanerva's binary spatter codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this article, we consider procedures of context-dependent thinning developed for the representation of complex hierarchical items in the architecture of associative-projective neural networks. These procedures provide binding of items represented by sparse binary codevectors (with a low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity of the distributed associative memory in which the codevectors may be stored. In contrast to known binding procedures, context-dependent thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Moreover, a bound codevector is similar not only to another one with similar component codevectors (as in other schemes) but also to the component codevectors themselves. This allows the similarity of structures to be estimated by the overlap of their codevectors, without retrieval of the component codevectors. This also allows easy retrieval of the component codevectors. Examples of algorithmic and neural network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments schemes, trees, and directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional artificial intelligence as well as to the localist and microfeature-based connectionist representations.
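A rough NumPy sketch of an additive-style context-dependent thinning is shown below, reconstructed from the description above with illustrative parameters: component codevectors are superimposed with OR, and the union is thinned by AND-ing it with ORed random permutations of itself, where the number of permutations tunes the output density.

```python
import numpy as np

rng = np.random.default_rng(0)
N, DENSITY = 10_000, 0.01                        # vector length, fraction of 1s
PERMS = [rng.permutation(N) for _ in range(20)]  # fixed permutations shared by all

def random_codevector():
    return (rng.random(N) < DENSITY).astype(np.uint8)

def cdt_bind(vectors, k):
    """Superimpose, then thin: k permutations tune the output density."""
    z = np.bitwise_or.reduce(vectors)
    mask = np.zeros(N, dtype=np.uint8)
    for p in PERMS[:k]:
        mask |= z[p]
    return z & mask

a, b, c = (random_codevector() for _ in range(3))
bound = cdt_bind([a, b, c], k=13)  # k chosen so density stays near DENSITY
print("union density:", np.bitwise_or.reduce([a, b, c]).mean())
print("bound density:", bound.mean())
print("overlap with a:", int((bound & a).sum()))  # bound stays similar to its parts
```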
9

De Masi, A. "DIGITAL DOCUMENTATION’S ONTOLOGY: CONTEMPORARY DIGITAL REPRESENTATIONS AS EXPRESS AND SHARED MODELS OF REGENERATION AND RESILIENCE IN THE PLATFORM BIM/CONTAMINATED HYBRID REPRESENTATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-1-2021 (August 28, 2021): 189–97. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-1-2021-189-2021.

Abstract:
Abstract. The study illustrates a university research project on a "Digital Documentation's Ontology", to be activated with other universities, of a Platform (P) – Building Information Modeling (BIM) articulated on a Contaminated Hybrid Representation (diversification of graphic models); the latter, able to foresee categories of Multi-Representations that interact with each other to favour several representations adapted to a different information density in digital multi-scale production, is intended as a platform (grid of data and information at different scales, semantic structure from web content, data and information storage database, archive, model and form of knowledge, and shared ontological representation) of: inclusive digital ecosystem development; digital regenerative synergies of representation with adaptable and resilient content in hybrid or semi-hybrid Cloud environments; phenomenological reading of the changing complexity of environmental reality; a hub solution for knowledge and simulcast description of information on Cultural Heritage (CH); multimedia itineraries to enhance participatory and attractive processes for the community; and a factor of cohesion and sociality, an engine of local development. The methodology of P-BIM/CHR is articulated on the following ontologies: Interpretative and Codification, Morphology, Lexicon, Syntax, Metamorphosis, Metadata in the participatory system, Regeneration, Interaction and Sharing. Regarding the results and conclusions, the study highlighted: a) digital regenerative synergies of representation; b) a Smart CH Model for an interconnection of systems and services within a complex set of relationships.
10

Tan, Xiaojing, Ming Zou, and Xiqin He. "Target Recognition in SAR Images Based on Multiresolution Representations with 2D Canonical Correlation Analysis." Scientific Programming 2020 (February 24, 2020): 1–9. http://dx.doi.org/10.1155/2020/7380790.

Abstract:
This study proposes a synthetic aperture radar (SAR) target-recognition method based on features fused from multiresolution representations by 2D canonical correlation analysis (2DCCA). The multiresolution representations were demonstrated to be more discriminative than the original image alone, so joint classification of the multiresolution representations is beneficial to the enhancement of SAR target recognition performance. 2DCCA is capable of exploiting the inner correlations of the multiresolution representations while significantly reducing the redundancy. Therefore, the fused features can effectively convey the discrimination capability of the multiresolution representations while relieving the storage and computational burdens caused by the original high dimension. In the classification stage, sparse representation-based classification (SRC) is employed to classify the fused features. SRC is an effective and robust classifier, which has been extensively validated in previous works. The moving and stationary target acquisition and recognition (MSTAR) data set is employed to evaluate the proposed method. According to the experimental results, the proposed method achieves a high recognition rate of 97.63% for the 10 classes of targets under the standard operating condition (SOC). Under extended operating conditions (EOC) such as configuration variance and depression angle variance, the robustness of the proposed method is also quantitatively validated. In comparison with some other SAR target recognition methods, the superiority of the proposed method is effectively demonstrated.
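The fusion step can be approximated with ordinary (one-dimensional) CCA from scikit-learn as a stand-in for the paper's 2DCCA; the two "views" below are synthetic arrays playing the role of two resolution versions of an image, flattened to vectors.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n, d1, d2 = 200, 64, 64
latent = rng.normal(size=(n, 8))  # shared structure behind both views
view_lo = latent @ rng.normal(size=(8, d1)) + 0.1 * rng.normal(size=(n, d1))
view_hi = latent @ rng.normal(size=(8, d2)) + 0.1 * rng.normal(size=(n, d2))

cca = CCA(n_components=8)
z_lo, z_hi = cca.fit_transform(view_lo, view_hi)  # maximally correlated subspace
fused = np.hstack([z_lo, z_hi])                   # 16-D fused feature vs. 128-D raw
print(fused.shape)
```

In the paper's pipeline the fused, low-dimensional features would then be handed to the SRC classifier; the dimensionality reduction is what relieves the storage and computational burden noted above.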
11

Martin, Nadine. "Cognitive neuropsychological evidence for common processes underlying generation and storage of language representations." Behavioral and Brain Sciences 26, no. 6 (December 2003): 747–48. http://dx.doi.org/10.1017/s0140525x03420165.

Abstract:
Ruchkin et al. offer a compelling case for a model of short-term storage without a separate buffer. Here, I discuss some cognitive neuropsychological data that have been offered in support of and against their model. Additionally, I discuss briefly some new directions in cognitive neuropsychological research that bear on the role of attention in Ruchkin et al.'s model.
12

Sibenik, Goran, and Iva Kovacic. "Interpreted open data exchange between architectural design and structural analysis models." Journal of Information Technology in Construction 26 (February 26, 2021): 39–57. http://dx.doi.org/10.36680/j.itcon.2021.004.

Abstract:
The heterogeneity of the architecture, engineering and construction (AEC) industry reflects on digital building models, which differ across domains and planning phases. Data exchange between architectural design and structural analysis models poses a particular challenge because of dramatically different representations of building elements. Existing software tools and standards have not been able to deal with these differences. The research on inter-domain building information modelling (BIM) frameworks does not consider the geometry interpretations for data exchange. Analysis of geometry interpretations is mostly project-specific and is seldom reflected in general data exchange frameworks. By defining a data exchange framework that engages with varying requirements and representations of architectural design and structural analysis in terms of geometry, which is open to other domains, we aim to close the identified gap. Existing classification systems in software tools and standards were reviewed in order to understand architectural design and structural analysis representations and to identify the relationships between them. Following the analysis, a novel data management framework based on classification, interpretation and automation was proposed, implemented and tested. Classification is a model specification including domain-specific terms and relationships between them. Interpretations consist of inter-domain procedures necessary to generate domain-specific models from a provided model. Automation represents the connection between open domain-specific models and proprietary models in software tools. Practical implementation with a test case demonstrated a possible realization of the proposed framework. The innovative contribution of the research is a novel framework based on the system of open domain-specific classifications and procedures for the inter-domain interpretation, which can prepare domain-specific models on central storage. The main benefit is a centrally prepared domain-specific model, relieving software developers from so-far-unsuccessful implementation of complex inter-domain interpretations in each software tool, and providing end users with control over the data exchange. Although the framework is based on the exchange between architectural design and structural analysis, the proposed central data management framework can be used for other exchange processes involving different model representations.
13

SOSA, ANNA VOGEL, and CAROL STOEL-GAMMON. "Patterns of intra-word phonological variability during the second year of life." Journal of Child Language 33, no. 1 (February 2006): 31–50. http://dx.doi.org/10.1017/s0305000905007166.

Abstract:
Phonological representation for adult speakers is generally assumed to include sub-lexical information at the level of the phoneme. Some have suggested, however, that young children operate with more holistic lexical representations. If young children use whole-word representation and adults employ phonemic representation, then a component of phonological development includes a transition from holistic to segmental storage of phonological information. The present study addresses the nature of this transition by investigating the prevalence and patterns of intra-word production variability during the first year of lexical acquisition (1;0–2;0). Longitudinal data from four typically developing children were analysed to determine variability at each age. Patterns of variability are discussed in relation to chronological age and productive vocabulary size. Results show high overall rates of variability, as well as a peak in variability corresponding to the onset of combinatorial speech, suggesting that phonological reorganization may commence somewhat later than previously thought.
14

Lee, Hye Won, Yu Rang Park, Jaehyun Sim, Rae Woong Park, Woo Ho Kim, and Ju Han Kim. "The Tissue Microarray Object Model: A Data Model for Storage, Analysis, and Exchange of Tissue Microarray Experimental Data." Archives of Pathology & Laboratory Medicine 130, no. 7 (July 1, 2006): 1004–13. http://dx.doi.org/10.5858/2006-130-1004-ttmoma.

Abstract:
Abstract Context.—Tissue microarray (TMA) is an array-based technology allowing the examination of hundreds of tissue samples on a single slide. To handle, exchange, and disseminate TMA data, we need standard representations of the methods used, of the data generated, and of the clinical and histopathologic information related to TMA data analysis. Objective.—To create a comprehensive data model with flexibility that supports diverse experimental designs and with expressivity and extensibility that enables an adequate and comprehensive description of new clinical and histopathologic data elements. Design.—We designed a tissue microarray object model (TMA-OM). Both the array information and the experimental procedure models are created by referring to the microarray gene expression object model, minimum information specification for in situ hybridization and immunohistochemistry experiments, and the TMA data exchange specifications. The clinical and histopathologic information model is created by using College of American Pathologists cancer protocols and National Cancer Institute common data elements. Microarray Gene Expression Data Ontology, the Unified Medical Language System, and the terms extracted from College of American Pathologists cancer protocols and NCI common data elements are used to create a controlled vocabulary for unambiguous annotation. Result.—The TMA-OM consists of 111 classes in 17 packages to represent clinical and histopathologic information as well as experimental data for any type of cancer. We implemented a Web-based application for TMA-OM, supporting data export in XML format conforming to the TMA data exchange specifications or the document type definition derived from TMA-OM. Conclusions.—The TMA-OM provides a comprehensive data model for storage, analysis, and exchange of TMA data and facilitates model-level integration of other biological models.
15

Qin, Yana, Danye Wu, Zhiwei Xu, Jie Tian, and Yujun Zhang. "Adaptive In-Network Collaborative Caching for Enhanced Ensemble Deep Learning at Edge." Mathematical Problems in Engineering 2021 (September 25, 2021): 1–14. http://dx.doi.org/10.1155/2021/9285802.

Abstract:
To enhance the quality and speed of data processing and protect the privacy and security of the data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble learning-based services can naturally leverage the distributed computation and storage resources at edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, in order to use the limited resources at edge devices to support high-performance ensemble learning solutions. To achieve this goal, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record cached data among different nodes. In addition, we design a collaboration scheme that facilitates edge nodes in caching valuable data for local ensemble learning, by scheduling local caching according to a summarization of data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces the learning latency and the transmission overhead.
16

Kumar, K., H. Ledoux, and J. Stoter. "COMPARATIVE ANALYSIS OF DATA STRUCTURES FOR STORING MASSIVE TINS IN A DBMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2 (June 7, 2016): 123–30. http://dx.doi.org/10.5194/isprsarchives-xli-b2-123-2016.

Abstract:
Point cloud data are an important source of 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of the point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages, and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². A PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
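The star-based (compact) structure mentioned above can be sketched in a few lines: each vertex stores only the ordered ring of its neighbours, and every triangle is recovered from consecutive neighbour pairs. The tiny fan below is hand-built for illustration and glosses over the boundary handling a real implementation needs.

```python
stars = {
    # vertex: CCW-ordered neighbour ring
    0: [1, 2, 3, 4, 1],  # interior vertex: closed ring (first repeated)
    1: [2, 0, 4],        # boundary vertices keep open rings
    2: [3, 0, 1],
    3: [4, 0, 2],
    4: [1, 0, 3],
}

def triangles_from_stars(stars):
    tris = set()
    for v, ring in stars.items():
        for a, b in zip(ring, ring[1:]):
            tris.add(frozenset((v, a, b)))  # each triangle is found 3 times
    return tris

for tri in sorted(map(sorted, triangles_from_stars(stars))):
    print(tri)  # [0, 1, 2], [0, 1, 4], [0, 2, 3], [0, 3, 4]
```

The saving comes from storing no explicit triangle or edge records at all: the connectivity lives entirely in the vertex stars, which is what makes the star-based structure attractive for massive TINs in a DBMS.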
17

Kumar, K., H. Ledoux, and J. Stoter. "COMPARATIVE ANALYSIS OF DATA STRUCTURES FOR STORING MASSIVE TINS IN A DBMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2 (June 7, 2016): 123–30. http://dx.doi.org/10.5194/isprs-archives-xli-b2-123-2016.

Abstract:
Point cloud data are an important source of 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of the point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages, and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². A PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
18

Hulsman, Petra, Hubert H. G. Savenije, and Markus Hrachowitz. "Learning from satellite observations: increased understanding of catchment processes through stepwise model improvement." Hydrology and Earth System Sciences 25, no. 2 (February 24, 2021): 957–82. http://dx.doi.org/10.5194/hess-25-957-2021.

Abstract:
Abstract. Satellite observations can provide valuable information for a better understanding of hydrological processes and thus serve as valuable tools for model structure development and improvement. While model calibration and evaluation have in recent years started to make increasing use of spatial, mostly remotely sensed information, model structural development largely remains to rely on discharge observations at basin outlets only. Due to the ill-posed inverse nature and the related equifinality issues in the modelling process, this frequently results in poor representations of the spatio-temporal heterogeneity of system-internal processes, in particular for large river basins. The objective of this study is thus to explore the value of remotely sensed, gridded data to improve our understanding of the processes underlying this heterogeneity and, as a consequence, their quantitative representation in models through a stepwise adaptation of model structures and parameters. For this purpose, a distributed, process-based hydrological model was developed for the study region, the poorly gauged Luangwa River basin. As a first step, this benchmark model was calibrated to discharge data only and, in a post-calibration evaluation procedure, tested for its ability to simultaneously reproduce (1) the basin-average temporal dynamics of remotely sensed evaporation and total water storage anomalies and (2) their temporally averaged spatial patterns. This allowed for the diagnosis of model structural deficiencies in reproducing these temporal dynamics and spatial patterns. Subsequently, the model structure was adapted in a stepwise procedure, testing five additional alternative process hypotheses that could potentially better describe the observed dynamics and pattern. These included, on the one hand, the addition and testing of alternative formulations of groundwater upwelling into wetlands as a function of the water storage and, on the other hand, alternative spatial discretizations of the groundwater reservoir. Similar to the benchmark, each alternative model hypothesis was, in a next step, calibrated to discharge only and tested against its ability to reproduce the observed spatio-temporal pattern in evaporation and water storage anomalies. In a final step, all models were re-calibrated to discharge, evaporation and water storage anomalies simultaneously. The results indicated that (1) the benchmark model (Model A) could reproduce the time series of observed discharge, basin-average evaporation and total water storage reasonably well. In contrast, it poorly represented time series of evaporation in wetland-dominated areas as well as the spatial pattern of evaporation and total water storage. (2) Stepwise adjustment of the model structure (Models B–F) suggested that Model F, allowing for upwelling groundwater from a distributed representation of the groundwater reservoir and (3) simultaneously calibrating the model with respect to multiple variables, i.e. discharge, evaporation and total water storage anomalies, provided the best representation of all these variables with respect to their temporal dynamics and spatial patterns, except for the basin-average temporal dynamics in the total water storage anomalies. 
It was shown that satellite-based evaporation and total water storage anomaly data are not only valuable for multi-criteria calibration, but can also play an important role in improving our understanding of hydrological processes through the diagnosis of model deficiencies and stepwise model structural improvement.
19

Sundby, Tiril, Julia Maria Graham, Adil Rasheed, Mandar Tabib, and Omer San. "Geometric Change Detection in Digital Twins." Digital 1, no. 2 (April 15, 2021): 111–29. http://dx.doi.org/10.3390/digital1020009.

Abstract:
Digital twins are meant to bridge the gap between real-world physical systems and virtual representations. Both stand-alone and descriptive digital twins incorporate 3D geometric models, which are the physical representations of objects in the digital replica. Digital twin applications are required to rapidly update internal parameters with the evolution of their physical counterpart. Due to the essential need for high-quality geometric models for accurate physical representations, the storage and bandwidth requirements for storing 3D model information can quickly exceed the available storage and bandwidth capacity. In this work, we demonstrate a novel approach to geometric change detection in a digital twin context. We address the issue through a combined solution of dynamic mode decomposition (DMD) for motion detection, YOLOv5 for object detection, and 3D machine learning for pose estimation. DMD is applied for background subtraction, enabling detection of moving foreground objects in real time. The video frames containing detected motion are extracted and used as input to the change detection network. The object detection algorithm YOLOv5 is applied to extract the bounding boxes of detected objects in the video frames. Furthermore, we estimate the rotational pose of each object in a 3D pose estimation network. A series of convolutional neural networks (CNNs) conducts feature extraction from images and 3D model shapes. The network then outputs the estimated Euler angles of the camera orientation with respect to the object in the input image. By only storing data associated with a detected change in pose, we minimize the necessary storage and bandwidth requirements while still recreating the 3D scene on demand. Our assessment of the new geometric detection framework shows that the proposed methodology could represent a viable tool in emerging digital twin applications.
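The DMD-based background subtraction step can be sketched with plain NumPy (an exact-DMD sketch on random data, not the authors' code): the mode whose eigenvalue lies closest to 1 is nearly static across frames and serves as the background estimate; in real use the columns of X are flattened video frames.

```python
import numpy as np

def dmd_background(X, r=10):
    """X: (pixels, frames). Returns the estimated static background image."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]               # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes
    amps = np.linalg.lstsq(modes, X[:, 0], rcond=None)[0]
    k = np.argmin(np.abs(eigvals - 1.0))             # near-static mode
    return np.real(modes[:, k] * amps[k])

frames = np.random.rand(64 * 64, 30) * 0.05 + 0.5   # a mostly static scene
background = dmd_background(frames)
print(background.shape)                              # (4096,)
```

Subtracting this background from each incoming frame leaves the moving foreground, which is what gets passed on to the object detector in the pipeline above.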
20

Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.

Abstract:
Cross-modal hashing has been receiving increasing interest for its low storage cost and fast query speed in multi-modal data retrieval. However, most existing hashing methods are based on hand-crafted or raw-level features of objects, which may not be optimally compatible with the coding process. Besides, these hashing methods are mainly designed to handle simple pairwise similarity. The complex multilevel ranking semantic structure of instances associated with multiple labels has not been well explored yet. In this paper, we propose a ranking-based deep cross-modal hashing approach (RDCMH). RDCMH first uses the feature and label information of data to derive a semi-supervised semantic ranking list. Next, to expand the semantic representation power of hand-crafted features, RDCMH integrates the semantic ranking information into deep cross-modal hashing and jointly optimizes the compatible parameters of deep feature representations and of hashing functions. Experiments on real multi-modal datasets show that RDCMH outperforms other competitive baselines and achieves state-of-the-art performance in cross-modal retrieval applications.
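The query-time payoff of any cross-modal hashing method can be shown in a few lines: once both modalities are mapped to binary codes in a shared Hamming space, retrieval reduces to XOR and popcount. The codes below are random stand-ins for a trained model's output.

```python
import numpy as np

rng = np.random.default_rng(2)
BITS = 64
image_codes = rng.integers(0, 2, size=(10_000, BITS), dtype=np.uint8)
text_query = rng.integers(0, 2, size=BITS, dtype=np.uint8)

# Hamming distance to every database item in one vectorized pass.
dists = np.count_nonzero(image_codes ^ text_query, axis=1)
top10 = np.argsort(dists)[:10]
print(top10, dists[top10])
```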
21

Tucker-Drob, Elliot M., and Timothy A. Salthouse. "METHODS AND MEASURES: Confirmatory Factor Analysis and Multidimensional Scaling for Construct Validation of Cognitive Abilities." International Journal of Behavioral Development 33, no. 3 (February 25, 2009): 277–85. http://dx.doi.org/10.1177/0165025409104489.

Abstract:
Although factor analysis is the most commonly used method for examining the structure of cognitive variable interrelations, multidimensional scaling (MDS) can provide visual representations highlighting the continuous nature of interrelations among variables. Using data (N = 8,813; ages 17–97 years) aggregated across 38 separate studies, MDS was applied to 16 cognitive variables representative of five well-established cognitive abilities. Parallel to confirmatory factor analytic solutions, and consistent with past MDS applications, the results for young (18–39 years), middle (40–65 years), and old (66–97 years) adult age groups consistently revealed a two-dimensional radex disk, with variables from fluid reasoning tests located at the center. Using a new method, target measures hypothesized to reflect three aspects of cognitive control (updating, storage-plus-processing, and executive functioning) were projected onto the radex disk. Parallel to factor analytic results, these variables were also found to be centrally located in the cognitive ability space. The advantages and limitations of the radex representation are discussed.
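The core computation is easy to reproduce: turn a correlation matrix into dissimilarities and embed the variables in two dimensions with MDS. The sketch below uses synthetic scores and scikit-learn; a well-structured test battery would produce the circular radex layout described above.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
scores = rng.normal(size=(500, 16))    # 500 people, 16 cognitive variables
R = np.corrcoef(scores, rowvar=False)  # variable intercorrelations
D = np.sqrt(2 * (1 - R))               # a common correlation-to-distance map

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords.shape)                    # (16, 2): one point per variable
```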
22

Andrade Ribeiro, Leonardo, and Theo Härder. "Pushing similarity joins down to the storage layer in XML databases." International Journal of Web Information Systems 13, no. 1 (April 18, 2017): 55–71. http://dx.doi.org/10.1108/ijwis-04-2016-0019.

Abstract:
Purpose: This article aims to explore how to incorporate similarity joins into XML database management systems (XDBMSs). The authors aim to provide seamless and efficient integration of similarity joins on tree-structured data into an XDBMS architecture. Design/methodology/approach: The authors exploit XDBMS-specific features to efficiently generate XML tree representations for similarity matching. In particular, the authors push down a large part of the structural similarity evaluation close to the storage layer. Findings: Empirical experiments were conducted to measure and compare the accuracy, performance, and scalability of the tree similarity join using different similarity functions and on top of different storage models. The results show that the authors' proposal delivers performance and scalability without hurting accuracy. Originality/value: Similarity join is a fundamental operation for data integration. Unfortunately, none of the XDBMS architectures proposed so far provides efficient support for this operation. Evaluating similarity joins on XML is challenging, because it requires similarity matching on both text and structure. In this work, the authors integrate similarity joins into an XDBMS. To the best of the authors' knowledge, this work is the first to leverage the storage scheme of an XDBMS to support XML similarity join processing.
23

Luciani, Peter D., James Y. Li, and Douglas Banting. "Distributed urban storm water modeling within GIS integrating analytical probabilistic hydrologic models and remote sensing image analyses." Water Quality Research Journal 46, no. 3 (August 1, 2011): 183–99. http://dx.doi.org/10.2166/wqrjc.2011.113.

Abstract:
Analytical probabilistic hydrologic models (APMs) are computationally efficient, producing validated storm water outputs comparable to continuous simulation for storm water planning-level analyses. To date, APMs have been run as spatially lumped or semi-distributed models relying upon calibrated and spatially averaged system state variable inputs/parameters, limiting model system representation and ultimately impacting model uncertainty. Here, APMs are integrated within Geographic Information Systems (GIS) and remote sensing image analyses (RSIA), deriving a planning-level distributed model under refined model system representation. The hypothesis is that refinements alone, foregoing model calibration, will produce trial average annual storm water runoff volume estimates comparable to former research estimates (employing calibration), demonstrating the benefits of improved APM system representation and detail. To test the hypothesis, three key system state variables – sewershed area, runoff coefficients and depression storage – are digitally extracted in GIS and RSIA through: automated delineation upon a digitally inscribed digital elevation model; unsupervised classification of an orthophotograph; and a slope-based expression, respectively. The parameters are spatially distributed as continuous raster data layers and integrated with an APM. Spatially distributed trial runoff volumes are within a range of 4–29% of earlier lumped/semi-distributed research estimates, validating the hypothesis that further detail and physically explicit representations of model systems improve simulation results.
24

Sergi, Domenic Mario, and Jie Li. "Applications of GIS-Enhanced Networks of Engineering Information." Applied Mechanics and Materials 444-445 (October 2013): 1672–79. http://dx.doi.org/10.4028/www.scientific.net/amm.444-445.1672.

Abstract:
The current manner in which engineering data, especially the structural details of buildings and infrastructure, is managed is highly inefficient and leads to a wide variety of unnecessary costs and risks. The revolution in Building Information Modelling (BIM) has given designers the ability to perform useful technical analysis on lifelike models and representations of a future structure. Consequently, the quantity of information being produced for a typical project, and the cost of producing that information, has increased substantially. This is driving a shift towards better systems of data storage and sharing. It is the contention of this report to demonstrate that structural design is a process which can be largely divided, automated, and outsourced. The conclusion reached is that a Building Information Model, when linked with a Geographical Information System (GIS), could provide enough information to conduct the entire design process. It is upon this basis that a radical new system for the post-construction storage and sharing of BIM is proposed.
25

Hong, D., J. Yao, X. Wu, J. Chanussot, and X. Zhu. "SPATIAL-SPECTRAL MANIFOLD EMBEDDING OF HYPERSPECTRAL DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 21, 2020): 423–28. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-423-2020.

Abstract:
Abstract. In recent years, hyperspectral imaging, also known as imaging spectroscopy, has attracted increasing interest in the geoscience and remote sensing community. Hyperspectral imagery is characterized by very rich spectral information, which enables us to recognize the materials of interest lying on the surface of the Earth more easily. We have to admit, however, that the high spectral dimension inevitably brings some drawbacks, such as expensive data storage and transmission, information redundancy, etc. Therefore, to reduce the spectral dimensionality effectively and learn a more discriminative low-dimensional spectral embedding, in this paper we propose a novel hyperspectral embedding approach that simultaneously considers spatial and spectral information, called spatial-spectral manifold embedding (SSME). Beyond pixel-wise spectral embedding approaches, SSME models the spatial and spectral information jointly in a patch-based fashion. SSME not only learns the spectral embedding by using the adjacency matrix obtained by similarity measurement between spectral signatures, but also models the spatial neighbours of a target pixel in the hyperspectral scene by sharing the same weights (or edges) in the process of learning the embedding. Classification is explored as a potential application for quantitatively evaluating the performance of the learned embedding representations. Extensive experiments conducted on widely used hyperspectral datasets demonstrate the superiority and effectiveness of the proposed SSME as compared to several state-of-the-art embedding methods.
26

Gray, Shelley, Hope Lancaster, Mary Alt, Tiffany P. Hogan, Samuel Green, Roy Levy, and Nelson Cowan. "The Structure of Word Learning in Young School-Age Children." Journal of Speech, Language, and Hearing Research 63, no. 5 (May 22, 2020): 1446–66. http://dx.doi.org/10.1044/2020_jslhr-19-00186.

Abstract:
Purpose: We investigated four theoretically based latent variable models of word learning in young school-age children. Method: One hundred sixty-seven English-speaking second graders with typical development from three U.S. states participated. They completed five different tasks designed to assess children's creation, storage, retrieval, and production of the phonological and semantic representations of novel words and their ability to link those representations. The tasks encompassed the triggering and configuration stages of word learning. Results: A latent variable model with separate phonological and semantic factors and linking indicators constrained to load on the phonological factor best fit the data. Discussion: The structure of word learning during triggering and configuration reflects separate but related phonological and semantic factors. We did not find evidence for a unidimensional latent variable model of word learning or for separate receptive and expressive word learning factors. In future studies, it will be interesting to determine whether the structure of word learning differs during the engagement stage of word learning, when phonological and semantic representations, as well as the links between them, are sufficiently strong to affect other words in the lexicon.
27

ALEARDI, LUCA CASTELLI, OLIVIER DEVILLERS, and ABDELKRIM MEBARKI. "CATALOG-BASED REPRESENTATION OF 2D TRIANGULATIONS." International Journal of Computational Geometry & Applications 21, no. 04 (August 2011): 393–402. http://dx.doi.org/10.1142/s021819591100372x.

Abstract:
Several representations and coding schemes have been proposed to represent 2D triangulations efficiently. In this paper, we propose a new practical approach to reduce the main memory space needed to represent an arbitrary triangulation, while maintaining constant time for some basic queries. This work focuses on the connectivity information of the triangulation, rather than the geometric information (vertex coordinates), since the combinatorial data represents the main part of the storage. The main idea is to gather triangles into patches, to reduce the number of pointers by eliminating the internal pointers in the patches and reducing the multiple references to vertices. To accomplish this, we define and use stable catalogs of patches that are closed under basic standard update operations such as insertion and deletion of vertices, and edge flips. We present some bounds and results concerning special catalogs, and some experimental results that exhibit the practical gain of such methods.
28

Nazemi, A., and H. S. Wheater. "On inclusion of water resource management in Earth System models – Part 2: Representation of water supply and allocation and opportunities for improved modeling." Hydrology and Earth System Sciences Discussions 11, no. 7 (July 21, 2014): 8299–354. http://dx.doi.org/10.5194/hessd-11-8299-2014.

Abstract:
Abstract. Human water use has significantly increased during the recent past. Water allocation from surface and groundwater sources has altered terrestrial discharge and storage, with large variability in time and space. Water supply and allocation, therefore, should be considered with water demand and appropriately included in large-scale models to address various online and offline implications, with or without considering possible climate interactions. Here, we review the algorithms developed to represent the elements of water supply and allocation in large-scale models, in particular Land Surface Schemes and Global Hydrologic Models. We noted that some potentially-important online implications, such as the effects of large reservoirs on land-atmospheric feedbacks, have not yet been addressed. Regarding offline implications, we find that there are important elements, such as groundwater availability and withdrawals, and the representation of large reservoirs, which should be improved. Major sources of uncertainty in offline simulations include data support, water allocation algorithms and host large-scale models. Considering these findings with those highlighted in our companion paper, we note that advancements in computation, host models, system identification algorithms as well as remote sensing and data assimilation products can facilitate improved representations of water resource management at larger scales. We further propose a modular development framework to consider and test multiple datasets, algorithms and host models in a unified model diagnosis and uncertainty assessment framework. We suggest that such a framework is required to systematically improve current representations of water resource management in Earth System models. A key to this development is the availability of regional scale data. We argue that the time is right for a global initiative, based on regional case studies, to move this agenda forward.
29

Krawczyk, Artur. "A concept for the modernization of underground mining master maps based on the enrichment of data definitions and spatial database technology." E3S Web of Conferences 26 (2018): 00010. http://dx.doi.org/10.1051/e3sconf/20182600010.

Abstract:
In this article, topics regarding the technical and legal aspects of creating digital underground mining maps are described. Currently used technologies and solutions for creating, storing and making digital maps accessible are described in the context of the Polish mining industry. Also, some problems with the use of these technologies are identified and described. One of the identified problems is the need to expand the range of mining map data provided by survey departments to other mining departments, such as ventilation maintenance or geological maintenance. Three solutions are proposed and analyzed, and one is chosen for further analysis. The analysis concerns data storage and making survey data accessible not only from paper documentation, but also directly from computer systems. Based on the enriched data definitions, new processing procedures are proposed for a new way of presenting information that allows the preparation of new cartographic representations (symbols) of data with regard to users' needs.
30

Nolé, Maurizio, and Carlo Sartiani. "Graph Management Systems: A Qualitative Survey." APTIKOM Journal on Computer Science and Information Technologies 5, no. 1 (March 30, 2020): 37–49. http://dx.doi.org/10.34306/csit.v5i1.132.

Abstract:
In recent years, many real-world applications have been modeled by graph structures (e.g., social networks, mobile phone networks, web graphs, etc.), and many systems have been developed to manage, query, and analyze these datasets. These systems can be divided into specialized graph database systems and large-scale graph analytics systems. The former consider end-to-end data management issues, including storage representations, transactions, and query languages, whereas the latter focus on processing specific tasks over large data graphs. In this paper, we provide an overview of several graph database systems and graph processing systems, with the aim of assisting the reader in identifying the best-suited solution for her application scenario.
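The storage contrast between the two system families can be illustrated with the two canonical layouts: an adjacency list serving neighbourhood queries (the graph-database access pattern) and a flat edge list suited to bulk scans (the analytics pattern). A minimal sketch:

```python
from collections import defaultdict

edges = [("alice", "bob"), ("bob", "carol"), ("alice", "carol")]

adjacency = defaultdict(list)  # graph-DB style: who are my neighbours?
for src, dst in edges:
    adjacency[src].append(dst)
    adjacency[dst].append(src)  # undirected graph

print(adjacency["alice"])  # constant-time lookup of a node's neighbourhood
print(len(edges))          # analytics style: stream over the whole edge list
```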
31

Nolé, Maurizio, and Carlo Sartiani. "Graph Management Systems: A Qualitative Survey." APTIKOM Journal on Computer Science and Information Technologies 3, no. 2 (July 1, 2018): 66–76. http://dx.doi.org/10.11591/aptikom.j.csit.129.

Abstract:
In recent years, many real-world applications have been modeled by graph structures (e.g., social networks, mobile phone networks, web graphs, etc.), and many systems have been developed to manage, query, and analyze these datasets. These systems can be divided into specialized graph database systems and large-scale graph analytics systems. The former consider end-to-end data management issues, including storage representations, transactions, and query languages, whereas the latter focus on processing specific tasks over large data graphs. In this paper, we provide an overview of several graph database systems and graph processing systems, with the aim of assisting the reader in identifying the best-suited solution for her application scenario.
32

Mahowald, Kyle, George Kachergis, and Michael C. Frank. "What counts as an exemplar model, anyway? A commentary on Ambridge (2020)." First Language 40, no. 5-6 (February 27, 2020): 608–11. http://dx.doi.org/10.1177/0142723720905920.

Abstract:
Ambridge calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec – which have been extremely successful in modern natural language processing (NLP) applications – count? Although these models often have ample parametric complexity to store exemplars from their training data, they also go far beyond simple storage by processing and compressing the input via their architectural constraints. The resulting representations have been shown to encode emergent abstractions. If these models are exemplar-based, then Ambridge’s theory only weakly constrains future work. On the other hand, if these systems are not exemplar models, why is it that true exemplar models are not contenders in modern NLP?
33

Futter, M. N., M. A. Erlandsson, D. Butterfield, P. G. Whitehead, S. K. Oni, and A. J. Wade. "PERSiST: the precipitation, evapotranspiration and runoff simulator for solute transport." Hydrology and Earth System Sciences Discussions 10, no. 7 (July 3, 2013): 8635–81. http://dx.doi.org/10.5194/hessd-10-8635-2013.

Abstract:
Abstract. While runoff is often a first-order control on water quality, runoff generation processes and pathways can vary widely between catchments. Credible simulations of solute and pollutant transport in surface waters depend on models which facilitate appropriate representations of perceptual models of the runoff generation process. With a few exceptions, models used in solute transport simulations enforce a single, potentially inappropriate representation of the runoff generation process. Here, we present a flexible, semi-distributed landscape-scale rainfall-runoff model suitable for simulating a broad range of user-specified perceptual models of runoff generation and stream flow occurring in different climatic regions and landscape types. PERSiST, the Precipitation, Evapotranspiration and Runoff Simulator for Solute Transport, is designed for simulating present-day conditions and projecting possible future effects of climate or land use change on runoff, catchment water storage and solute transport. PERSiST has limited data requirements and is calibrated using observed time series of precipitation, air temperature and runoff at one or more points in a river network. Here, we present a first application of the model to the Thames River in the UK and describe a Monte Carlo tool for parameter optimization and sensitivity analysis.
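To make the modeling idea above concrete, the following is a minimal sketch of the kind of bucket (linear reservoir) building block that semi-distributed rainfall-runoff models in the PERSiST tradition chain together; the function name, parameters and values are illustrative assumptions, not taken from the paper.

    # A single conceptual bucket: rain fills storage, evapotranspiration
    # drains it, and runoff leaves as saturation excess plus rate-limited
    # drainage. All quantities are in mm per time step; values are invented.
    def simulate_bucket(precip, pet, capacity=100.0, rate=0.1, storage=50.0):
        runoff = []
        for p, e in zip(precip, pet):
            storage = max(storage + p - e, 0.0)    # add rain, remove ET
            excess = max(storage - capacity, 0.0)  # saturation excess
            storage -= excess
            q = excess + rate * storage            # quick flow + slow drainage
            storage -= rate * storage
            runoff.append(q)
        return runoff

    print(simulate_bucket(precip=[10, 0, 30, 5], pet=[2, 2, 2, 2]))

A perceptual model of a catchment is then expressed by choosing how many such buckets to use and how they are connected.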
34

Schonfelder, J. L. "Variable Precision Arithmetic: A Fortran 95 Module." Scientific Programming 11, no. 1 (2003): 67–76. http://dx.doi.org/10.1155/2003/124580.

Abstract:
This paper describes the design and development of a software package supporting variable precision arithmetic as a semantic extension to the Fortran 95 language. The working precision of the arithmetic supported by this package can be dynamically and arbitrarily varied. The facility exploits the data-abstraction capabilities of Fortran 95 and allows the operations to be used elementally with array operands as well as with scalars. The number system is defined in such a way as to be closed under all of the basic operations of normal arithmetic; no program-terminating numerical exceptions can occur. Precision-loss situations like underflow and overflow are handled by defining special value representations that preserve as much of the numeric information as is practical, and the operation semantics are defined so that these exceptional values propagate as appropriate to reflect this loss of information. The number system uses an essentially conventional variable precision floating-point representation. When operations can be performed exactly within the currently set working precision limit, the excess trailing zero digits are not stored, nor do they take part in future operations. This both economizes on storage and improves efficiency.
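The Fortran 95 module itself is not reproduced here, but the core idea of a dynamically adjustable working precision is easy to illustrate with Python's standard decimal module, whose context precision plays the same role as the package's working-precision setting:

    from decimal import Decimal, getcontext

    getcontext().prec = 10          # 10 significant digits
    print(Decimal(1) / Decimal(7))  # 0.1428571429

    getcontext().prec = 40          # raise the working precision at run time
    print(Decimal(1) / Decimal(7))  # 0.1428571428571428571428571428571428571429

Unlike the package described in the paper, decimal still signals exceptions such as division by zero; the closed number system with propagating special values is specific to the Fortran 95 module.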
35

Rafique, Rashid, Jianyang Xia, Oleksandra Hararuk, Ghassem R. Asrar, Guoyong Leng, Yingping Wang, and Yiqi Luo. "Divergent predictions of carbon storage between two global land models: attribution of the causes through traceability analysis." Earth System Dynamics 7, no. 3 (July 29, 2016): 649–58. http://dx.doi.org/10.5194/esd-7-649-2016.

Abstract:
Abstract. Representations of the terrestrial carbon cycle in land models are becoming increasingly complex. It is crucial to develop approaches for critical assessment of the complex model properties in order to understand key factors contributing to models' performance. In this study, we applied a traceability analysis, which decomposes carbon cycle models into traceable components, to two global land models (CABLE and CLM-CASA′) to diagnose the causes of their differences in simulating ecosystem carbon storage capacity. Driven with similar forcing data, CLM-CASA′ predicted ∼ 31 % larger carbon storage capacity than CABLE. Since ecosystem carbon storage capacity is a product of net primary productivity (NPP) and ecosystem residence time (τE), the predicted difference in the storage capacity between the two models results from differences in either NPP or τE or both. Our analysis showed that CLM-CASA′ simulated 37 % higher NPP than CABLE. On the other hand, τE, which was a function of the baseline carbon residence time (τ′E) and the environmental effect on carbon residence time, was on average 11 years longer in CABLE than in CLM-CASA′. This difference in τE was mainly caused by the longer τ′E of woody biomass (23 vs. 14 years in CLM-CASA′) and the higher proportion of NPP allocated to woody biomass (23 vs. 16 %). Differences in environmental effects on carbon residence times had smaller influences on differences in ecosystem carbon storage capacities than differences in NPP and τ′E. Overall, the traceability analysis showed that the major causes of the differing carbon storage estimates were parameter settings related to carbon input and to baseline carbon residence times in the two models.
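The decomposition that underpins this traceability analysis can be stated compactly; the notation below is a sketch in commonly used symbols, not copied from the paper:

    X_c = \mathrm{NPP} \times \tau_E, \qquad \tau_E = \tau'_E \, \xi(T, W)

Here the ecosystem carbon storage capacity X_c is the product of carbon input (NPP) and ecosystem residence time \tau_E, which in turn separates into a baseline residence time \tau'_E and an environmental scalar \xi depending on temperature and water availability. A between-model difference in X_c therefore traces back to NPP, to \tau'_E (e.g., woody turnover and allocation, as above), or to \xi.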
36

Schlicht, Erik J., and Paul R. Schrater. "Impact of Coordinate Transformation Uncertainty on Human Sensorimotor Control." Journal of Neurophysiology 97, no. 6 (June 2007): 4203–14. http://dx.doi.org/10.1152/jn.00160.2007.

Abstract:
Humans build representations of objects and their locations by integrating imperfect information from multiple perceptual modalities (e.g., visual, haptic). Because sensory information is specified in different frames of reference (i.e., eye- and body-centered), it must be remapped into a common coordinate frame before integration and storage in memory. Such transformations require an understanding of body articulation, which is estimated through noisy sensory data. Consequently, target information acquires additional coordinate transformation uncertainty (CTU) during remapping because of errors in joint angle sensing. As a result, CTU creates differences in the reliability of target information depending on the reference frame used for storage. This paper explores whether the brain represents and compensates for CTU when making grasping movements. To address this question, we varied eye position in the head, while participants reached to grasp a spatially fixed object, both when the object was in view and when it was occluded. Varying eye position changes CTU between eye and head, producing additional uncertainty in remapped information away from forward view. The results showed that people adjust their maximum grip aperture to compensate both for changes in visual information and for changes in CTU when the target is occluded. Moreover, the amount of compensation is predicted by a Bayesian model for location inference that uses eye-centered storage.
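The Bayesian machinery behind such a model can be illustrated with the standard reliability-weighted cue combination rule; this is a minimal sketch of the general principle, with invented numbers, not the authors' exact model:

    # Precision-weighted fusion of two independent Gaussian estimates.
    # Remapping an estimate through noisy joint-angle senses adds coordinate
    # transformation uncertainty (CTU) to its variance before fusion.
    def fuse(x1, var1, x2, var2):
        w1, w2 = 1.0 / var1, 1.0 / var2
        mean = (w1 * x1 + w2 * x2) / (w1 + w2)
        return mean, 1.0 / (w1 + w2)

    visual = (10.0, 1.0)            # eye-centered estimate: (cm, variance)
    ctu_var = 0.5                   # extra variance added by remapping (hypothetical)
    haptic = (10.8, 2.0 + ctu_var)  # body-centered estimate after remapping

    print(fuse(*visual, *haptic))   # fused estimate favors the reliable cue

The behavioral prediction follows directly: the larger the CTU, the less weight the remapped cue receives, so grip apertures widen to compensate for the noisier stored estimate.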
37

Ai, Qingyao. "Neural generative models and representation learning for information retrieval." ACM SIGIR Forum 53, no. 2 (December 2019): 97. http://dx.doi.org/10.1145/3458553.3458565.

Abstract:
Information Retrieval (IR) concerns the structure, analysis, organization, storage, and retrieval of information. Among the retrieval models proposed in the past decades, generative retrieval models, especially those under the statistical probabilistic framework, are one of the most popular techniques and have been widely applied to Information Retrieval problems. While they are famous for their well-grounded theory and good empirical performance in text retrieval, their applications in IR are often limited by their complexity and low extensibility in the modeling of high-dimensional information. Recently, advances in deep learning techniques provide new opportunities for representation learning and generative models for information retrieval. In contrast to statistical models, neural models have much more flexibility because they model information and data correlation in latent spaces without explicitly relying on any prior knowledge. Previous studies on pattern recognition and natural language processing have shown that semantically meaningful representations of text, images, and many other types of information can be acquired with neural models through supervised or unsupervised training. Nonetheless, the effectiveness of neural models for information retrieval remains mostly unexplored. In this thesis, we study how to develop new generative models and representation learning frameworks with neural models for information retrieval. Specifically, our contributions include three main components: (1) theoretical analysis: we present the first theoretical analysis and adaptation of existing neural embedding models for ad-hoc retrieval tasks; (2) design practice: based on our experience and knowledge, we show how to design an embedding-based neural generative model for practical information retrieval tasks such as personalized product search; and (3) generic framework: we further generalize our proposed neural generative framework for complicated heterogeneous information retrieval scenarios that concern text, images, knowledge entities, and their relationships. Empirical results show that the proposed neural generative framework can effectively learn information representations and construct retrieval models that outperform the state-of-the-art systems in a variety of IR tasks.
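As a concrete illustration of the embedding-based retrieval building block that the thesis generalizes — queries and documents represented in one latent space and ranked by similarity — here is a minimal sketch with invented toy vectors, not the thesis's models:

    import numpy as np

    def rank_by_cosine(query_vec, doc_vecs):
        # Rank documents by cosine similarity to the query.
        q = query_vec / np.linalg.norm(query_vec)
        d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
        scores = d @ q
        return np.argsort(-scores), scores

    docs = np.array([[0.9, 0.1], [0.1, 0.9], [0.7, 0.7]])  # toy doc embeddings
    order, scores = rank_by_cosine(np.array([1.0, 0.2]), docs)
    print(order, scores)   # document 0 ranks first for this query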
38

Buddhika, Thilina, Matthew Malensek, Shrideep Pallickara, and Sangmi Lee Pallickara. "Living on the Edge." ACM Transactions on Internet of Things 2, no. 3 (July 2021): 1–31. http://dx.doi.org/10.1145/3450767.

Abstract:
Voluminous time-series data streams produced in continuous sensing environments impose challenges pertaining to ingestion, storage, and analytics. In this study, we present a holistic approach based on data sketching to address these issues. We propose a hyper-sketching algorithm that combines discretization and frequency-based sketching to produce compact representations of the multi-feature, time-series data streams. We generate an ensemble of data sketches to make effective use of capabilities at the resource-constrained edge devices, the links over which data are transmitted, and the server pool where this data must be stored. The data sketches can be queried to construct datasets that are amenable to processing using popular analytical engines. We include several performance benchmarks using real-world data from different domains to profile the suitability of our design decisions. The proposed methodology can achieve up to ∼13× and ∼2,207× reductions in data transfer and energy consumption at edge devices. We observe up to a ∼50% improvement in analytical job completion times in addition to the significant improvements in disk and network I/O.
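The hyper-sketching algorithm itself is not reproduced in the abstract; as an illustration of the frequency-based sketching it builds on, the following is a minimal count-min sketch over discretized sensor readings (a standard technique, named here as an assumption about the family of sketches involved):

    import hashlib

    class CountMinSketch:
        # Fixed-size table of counters; estimates overcount, never undercount.
        def __init__(self, width=256, depth=4):
            self.width, self.depth = width, depth
            self.table = [[0] * width for _ in range(depth)]

        def _index(self, item, row):
            digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            return int(digest, 16) % self.width

        def add(self, item):
            for row in range(self.depth):
                self.table[row][self._index(item, row)] += 1

        def estimate(self, item):
            return min(self.table[row][self._index(item, row)]
                       for row in range(self.depth))

    cms = CountMinSketch()
    for reading in [21.3, 21.4, 21.3, 22.0]:
        cms.add(round(reading))   # discretize, then sketch frequencies
    print(cms.estimate(21))       # -> 3

The appeal for edge devices is that the table size is fixed regardless of stream length, so memory and transfer costs stay bounded.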
39

Chakravarthi, K. Kalyana, and Vaidhehi Vijayakumar. "Workflow Scheduling Techniques and Algorithms in IaaS Cloud: A Survey." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 2 (April 1, 2018): 853. http://dx.doi.org/10.11591/ijece.v8i2.pp853-866.

Abstract:
In the modern era, workflows have been adopted as a powerful and attractive paradigm for expressing and solving a variety of applications, including scientific and data-intensive computing and big data applications such as MapReduce and Hadoop. These complex applications are described using high-level representations in workflow methods. With the emerging model of cloud computing technology, scheduling in the cloud has become an important research topic. Consequently, the workflow scheduling problem has been studied extensively over the past few years, from homogeneous clusters and grids to the most recent paradigm, cloud computing. The challenges that need to be addressed lie in task-resource mapping, QoS requirements, resource provisioning, performance fluctuation, failure handling, resource scheduling, and data storage. This work presents a complete study of the resource provisioning and scheduling algorithms in the cloud environment, focusing on Infrastructure as a Service (IaaS). We provide a comprehensive understanding of existing scheduling techniques and an insight into research challenges that point to possible future directions for researchers.
40

Rafique, R., J. Xia, O. Hararuk, G. Asrar, Y. Wang, and Y. Luo. "Divergent predictions of carbon storage between two global land models: attribution of the causes through traceability analysis." Earth System Dynamics Discussions 6, no. 2 (August 27, 2015): 1579–604. http://dx.doi.org/10.5194/esdd-6-1579-2015.

Abstract:
Abstract. Representations of the terrestrial carbon cycle in land models are becoming increasingly complex. It is crucial to develop approaches for critical assessment of the complex model properties in order to understand key factors contributing to models' performance. In this study, we applied a traceability analysis, which decomposes carbon cycle models into traceable components, to two global land models (CABLE and CLM-CASA') to diagnose the causes of their differences in simulating ecosystem carbon storage capacity. Driven with similar forcing data, the CLM-CASA' model predicted ~ 31 % larger carbon storage capacity than the CABLE model. Since ecosystem carbon storage capacity is a product of net primary productivity (NPP) and ecosystem residence time (τE), the predicted difference in the storage capacity between the two models results from differences in either NPP or τE or both. Our analysis showed that CLM-CASA' simulated 37 % higher NPP than CABLE due to higher rates of carboxylation (Vcmax) in CLM-CASA'. On the other hand, τE, which was a function of the baseline carbon residence time (τ'E) and the environmental effect on carbon residence time, was on average 11 years longer in CABLE than in CLM-CASA'. This difference was mainly caused by the longer τ'E of woody biomass in CABLE (23 vs. 14 years in CLM-CASA') and the higher proportion of NPP allocated to woody biomass (23 vs. 16 %). Differences in environmental effects on carbon residence times had smaller influences on differences in ecosystem carbon storage capacities than differences in NPP and τ'E. Overall, the traceability analysis is an effective method for identifying the sources of variation between the two models.
41

Gehrung, Joachim, Marcus Hebel, Michael Arens, and Uwe Stilla. "A FRAMEWORK FOR VOXEL-BASED GLOBAL SCALE MODELING OF URBAN ENVIRONMENTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W1 (October 26, 2016): 45–51. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w1-45-2016.

Abstract:
The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast, but has disadvantages that are readily solved by volumetric representations, especially when considering selective data acquisition, change detection and fast-changing environments. Therefore, this paper proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and the right mix of algorithms for the task, two compression strategies for volumetric models as well as a data-quality-based approach for the import of range measurements are proposed. The capabilities of the framework are shown on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss introduced by the compression techniques is evaluated and their memory consumption is compared to that of raw point clouds. The presented results show that generation, storage and real-time rendering of even large urban models are feasible, even with off-the-shelf hardware.
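A minimal sketch of the sparse voxel representation such frameworks build on: points are quantized to a grid resolution and only occupied cells are stored (the resolution and points below are illustrative assumptions):

    def voxelize(points, resolution=0.5):
        # Map 3D points to occupied voxel indices: a sparse occupancy set.
        return {tuple(int(c // resolution) for c in p) for p in points}

    cloud = [(1.02, 2.49, 0.10), (1.20, 2.30, 0.05), (8.00, 0.20, 3.90)]
    print(voxelize(cloud))   # three points collapse into two occupied voxels

Occupancy sets like this support the change detection and selective acquisition mentioned above, since cell membership can be compared across acquisition epochs.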
42

Weymouth, John W. "Archaeological site surveying program at the University of Nebraska." GEOPHYSICS 51, no. 3 (March 1986): 538–52. http://dx.doi.org/10.1190/1.1442108.

Abstract:
A summary of geophysical applications peculiar to archaeology and of the magnetic surveying techniques applied by the University of Nebraska to archaeological sites is presented. In contrast to typical geophysical targets, the features of interest at archaeological sites range in size and depth from several centimeters to a few meters. Typical features are historic foundations, wells, and privies, or prehistoric earthen features such as earth house floors, storage pits, and fire hearths. The most commonly used geophysical methods are resistivity, radar, and magnetometry. The program at the University of Nebraska has concentrated on magnetic surveying field methods based on the use of two magnetometers in difference mode to correct for temporal variations. Data processing uses both microcomputers and mainframe computers: microcomputers are used in the field and near sites to log data and to do preliminary mapping, while mainframe computers are used for further processing and filtering and for producing a variety of graphical representations of the data for an archaeological audience. Case histories presented are from site surveys in North Dakota, Oklahoma, Colorado, and Nebraska.
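The two-magnetometer difference mode reduces to a simple subtraction: a stationary base sensor records the temporal (diurnal) field variation, which is removed from the roving sensor's readings. A minimal sketch with invented values:

    base  = [54100.2, 54100.8, 54101.5, 54102.1]   # nT, fixed reference station
    rover = [54110.9, 54098.3, 54125.7, 54101.9]   # nT, survey grid readings

    anomaly = [r - b for r, b in zip(rover, base)]  # temporal variation removed
    print(anomaly)   # residual anomalies, candidates for buried features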
43

Vajda, Szilárd, Thomas Plötz, and Gernot A. Fink. "Camera-Based Whiteboard Reading for Understanding Mind Maps." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 03 (April 27, 2015): 1553003. http://dx.doi.org/10.1142/s0218001415530031.

Abstract:
Mind maps, i.e. the spatial organization of ideas and concepts around a central topic and the visualization of their relations, represent a very powerful and thus popular means to support creative thinking and problem solving processes. Typically created on traditional whiteboards, they represent an important technique for collaborative brainstorming sessions. We describe a camera-based system to analyze hand-drawn mind maps written on a whiteboard. The goal of the presented system is to produce digital representations of such mind maps, which would enable digital asset management, i.e. storage and retrieval of manually created documents. Our system is based on image acquisition by means of a camera, followed by segmentation of the whiteboard image, focusing on the extraction of written content, i.e. the ideas captured by the mind map. The spatial arrangement of these ideas is recovered using layout analysis based on unsupervised clustering, which results in graph representations of mind maps. Finally, handwriting recognition derives textual transcripts of the ideas captured by the mind map. We demonstrate the capabilities of our mind map reading system by means of an experimental evaluation, where we analyze images of mind maps that have been drawn on whiteboards without any constraints other than the underlying topic. In addition to the promising recognition results, we also discuss training strategies which effectively allow for system bootstrapping using out-of-domain sample data. The latter is important when addressing creative thinking processes, where domain-related training data are difficult to obtain as they focus on novelty by definition.
44

Monteiro, Carlos, and José Leal. "Managing experiments on cognitive processes in writing with HandSpy." Computer Science and Information Systems 10, no. 4 (2013): 1747–73. http://dx.doi.org/10.2298/csis121130061m.

Abstract:
Experiments on cognitive processes require a detailed analysis of the contributions of many participants. In the case of cognitive processes in writing, these experiments require special software tools to collect gestures performed with a pen or a stylus and recorded with special hardware. These tools produce different kinds of data files in binary and proprietary formats that need to be managed on a workstation file system for further processing with generic tools, such as spreadsheets and statistical analysis software. The lack of common formats and open repositories hinders the possibility of distributing the workload among researchers within a research group, of re-processing the collected data with software developed by other research groups, and of sharing results with the rest of the cognitive processes research community. This paper describes the development of HandSpy, a collaborative environment for managing experiments on cognitive processes in writing. This environment was designed to cover all the stages of an experiment, from the definition of tasks to be performed by participants to the synthesis of results. Collaboration in HandSpy is enabled by a rich web interface. To decouple the environment from existing hardware devices for collecting written production, namely digitizing tablets and smart pens, HandSpy is based on the InkML standard, an XML data format for representing digital ink. This design choice shaped many of the features in HandSpy, such as the use of an XML database for managing application data and the use of XML transformations, which convert between the persistent data representations used for storage and the transient data representations required by the widgets on the user interface. Although the system is independent of any specific collection device, a data collection framework was created for its validation. This framework is also highlighted in the paper because of the important role it played in the data collection process of a scientific project studying the cognitive processes involved in writing.
45

Grant, Carl, and Chad Mairn. "3D, virtual, augmented, extended, mixed reality, and extended content forms: The technology and the challenges." Information Services & Use 40, no. 3 (November 10, 2020): 225–30. http://dx.doi.org/10.3233/isu-200086.

Abstract:
3D representations are a new form of information that, when coupled with new technology tools like VR, AR, MR, and 3D scanning/printing, offer new support and opportunities for research and pedagogy, often with dramatic leaps in capabilities and results. When combined with a pandemic, the results can be even more dramatic and valued in a variety of collaborative and higher-education applications. However, as with any new technology, sizable challenges result and will need to be addressed. These include accessibility, 3D object creation, hardware capabilities, storage, and organizing tools. All represent areas where the community of users and their standards organizations, like NISO, need to move aggressively to develop best practices, guidelines, and standards to ensure these new forms of data and technology tools are widely accessible. This paper provides a high-level overview to introduce people to this new information form, associated technologies, and their challenges.
46

CZYZOWICZ, JUREK, WOJCIECH FRACZAK, ANDRZEJ PELC, and WOJCIECH RYTTER. "LINEAR-TIME PRIME DECOMPOSITION OF REGULAR PREFIX CODES." International Journal of Foundations of Computer Science 14, no. 06 (December 2003): 1019–31. http://dx.doi.org/10.1142/s0129054103002151.

Abstract:
One of the new approaches to data classification uses prefix codes, with finite state automata as their representations. A prefix code is a (possibly infinite) set of strings such that no string is a prefix of another. An important task, driven by the need for efficient storage of such automata in memory, is the decomposition (in the sense of formal-language concatenation) of prefix codes into prime factors. We investigate properties of such prefix code decompositions. A prime decomposition is a decomposition of a prefix code into a concatenation of nontrivial prime prefix codes. A prefix code is prime if it cannot be decomposed into at least two nontrivial prefix codes. In the paper a linear-time algorithm is designed which finds the prime decomposition F1F2…Fk of a regular prefix code F given by its minimal deterministic automaton. Our results are especially interesting for infinite regular prefix codes.
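The defining property is easy to test directly; the linear-time prime decomposition itself operates on the minimal deterministic automaton and is not sketched here. A minimal check of the prefix-code property (the helper name is ours):

    def is_prefix_code(words):
        # After sorting, any prefix pair implies an adjacent prefix pair.
        words = sorted(words)
        return all(not b.startswith(a) for a, b in zip(words, words[1:]))

    print(is_prefix_code({"00", "01", "10", "11"}))  # True
    print(is_prefix_code({"0", "01"}))               # False: "0" prefixes "01"

Decomposition here means concatenation of languages: for example, {"00", "01", "10", "11"} factors as {"0", "1"}{"0", "1"}, a product of two prime prefix codes.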
47

Gelbard, Roy, and Israel Spiegler. "Representation and Storage of Motion Data." Journal of Database Management 13, no. 3 (July 2002): 46–63. http://dx.doi.org/10.4018/jdm.2002070104.

48

Rizzuto, Daniel S., and Michael J. Kahana. "An Autoassociative Neural Network Model of Paired-Associate Learning." Neural Computation 13, no. 9 (September 1, 2001): 2075–92. http://dx.doi.org/10.1162/089976601750399317.

Abstract:
Hebbian heteroassociative learning is inherently asymmetric. Storing a forward association, from item A to item B, enables recall of B (given A), but does not permit recall of A (given B). Recurrent networks can solve this problem by associating A to B and B back to A. In these recurrent networks, the forward and backward associations can be differentially weighted to account for asymmetries in recall performance. In the special case of equal strength forward and backward weights, these recurrent networks can be modeled as a single autoassociative network where A and B are two parts of a single, stored pattern. We analyze a general, recurrent neural network model of associative memory and examine its ability to fit a rich set of experimental data on human associative learning. The model fits the data significantly better when the forward and backward storage strengths are highly correlated than when they are less correlated. This network-based analysis of associative learning supports the view that associations between symbolic elements are better conceptualized as a blending of two ideas into a single unit than as separately modifiable forward and backward associations linking representations in memory.
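The special case described — forward and backward associations of equal strength stored as one autoassociative pattern — can be sketched with a Hebbian outer-product network; this illustrates the general mechanism, not the authors' exact model or parameters:

    import numpy as np

    a = np.array([ 1, -1,  1, -1])          # item A (+/-1 coding)
    b = np.array([ 1,  1, -1, -1])          # item B
    pattern = np.concatenate([a, b])        # A and B stored as one unit

    W = np.outer(pattern, pattern)          # Hebbian outer-product storage
    np.fill_diagonal(W, 0)

    cue = np.concatenate([a, np.zeros(4)])  # probe with A, B unknown
    recalled = np.sign(W @ cue)             # one synchronous update
    print(recalled[4:])                     # -> [ 1.  1. -1. -1.], i.e. item B

Probing with B in the second half recovers A symmetrically, which is exactly the equal-strength forward/backward case the paper analyzes.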
49

Yatsenko, E. A. "Overview of Object-Oriented Paradigm in an Appendix to the Development of Databases." Vestnik NSU. Series: Information Technologies 17, no. 3 (2019): 123–34. http://dx.doi.org/10.25205/1818-7900-2019-17-3-123-134.

Abstract:
The article provides an overview of projects, technologies, and software products developed to implement the ideas of an object-oriented approach to database design. In the 1980s there were many projects devoted to the idea of the OODB, and many experts expected that relational databases would soon be crowded out by object-oriented ones. Despite the impressive number of projects, conducted both by teams of scientists and by commercial companies focused on practical implementation, no clear formulation of an object-oriented data model emerged; each team presented its own vision of applying object-oriented concepts to database design. The absence of a universal data model with a well-developed mathematical apparatus (as in the case of relational databases) is still the main obstacle to the adoption of OODBMSs. However, the use of relational DBMSs raises many problems that are felt most acutely in areas such as computer-aided design, computer-aided manufacturing, knowledge-based systems, and others. OODBs make it possible to combine program code and data and to avoid discrepancies between the representation of information in the database and in the application program, which is why modern developers show interest in them. There are many OODBMSs, but they cannot compete with the largest storage organization systems.
50

Whitfield, R. I., A. H. B. Duffy, and Z. Wu. "Ship Product Modeling." Journal of Ship Production 19, no. 04 (November 1, 2003): 230–45. http://dx.doi.org/10.5957/jsp.2003.19.4.230.

Abstract:
This paper is a fundamental review of ship product modeling techniques with a focus on determining the state of the art, identifying shortcomings and proposing future directions. The review addresses ship product data representations, product modeling techniques and integration issues, and life phase issues. The most significant development has been the construction of the ship Standard for the Exchange of Product Data (STEP) application protocols. However, difficulty has been observed with respect to the general uptake of the standards, in particular with the application to legacy systems, often resulting in embellishments to the standards and limiting the ability to further exchange the product data. The EXPRESS modeling language is increasingly being superseded by the Extensible Markup Language (XML) as a method to map the STEP data, due to its wider support throughout the information technology industry and its more obvious structure and hierarchy. The associated XML files are, however, larger than those produced using the EXPRESS language and make further demands on the already considerable storage required for the ship product model. Seamless integration between legacy applications appears to be difficult to achieve using the current technologies, which often rely on manual interaction for the translation of files. The paper concludes with a discussion of future directions that aim to either solve or alleviate these issues.