Academic literature on the topic 'Data storage representation'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data storage representation.'



Journal articles on the topic "Data storage representation"

1

Gelbard, Roy, and Israel Spiegler. "Representation and Storage of Motion Data." Journal of Database Management 13, no. 3 (July 2002): 46–63. http://dx.doi.org/10.4018/jdm.2002070104.

2

Gutsche, Oliver, and Igor Mandrichenko. "Striped Data Analysis Framework." EPJ Web of Conferences 245 (2020): 06042. http://dx.doi.org/10.1051/epjconf/202024506042.

Abstract:
A columnar data representation is known to be an efficient way to store data, specifically in cases when the analysis is often done based on only a small fragment of the available data structures. A data representation like Apache Parquet is a step forward from a columnar representation: it splits data horizontally to allow for easy parallelization of data analysis. Based on the general idea of columnar data storage, working on the [LDRD Project], we have developed a striped data representation which, we believe, is better suited to the needs of High Energy Physics data analysis. A traditional columnar approach allows for efficient data analysis of complex structures. While keeping all the benefits of columnar data representations, the striped mechanism goes further by enabling easy parallelization of computations without requiring special hardware. We present an implementation and some performance characteristics of such a data representation mechanism using a distributed NoSQL database or a local file system, unified under the same API and data representation model. The representation is efficient and at the same time simple enough to allow a common data model and APIs for a wide range of underlying storage mechanisms, such as distributed NoSQL databases and local file systems. Striped storage adopts NumPy arrays as its basic data representation format, which makes it easy and efficient to use in Python applications. The Striped Data Server is a web service that hides the server implementation details from the end user, easily exposes data to WAN users, and allows well-known and mature data caching solutions to be used to further increase data access efficiency. We consider the Striped Data Server as the core of an enterprise-scale data analysis platform for High Energy Physics and similar areas of data processing.
We have been testing this architecture with a 2 TB dataset from a CMS dark matter search and plan to expand it to the multiple-100 TB or even PB scale. We present the striped format, the Striped Data Server architecture, and performance test results.
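The column-versus-stripe distinction this abstract describes can be sketched in a few lines of NumPy. This is an illustrative toy under made-up names (`to_stripes`, the `events` table), not the Striped Data Server API: each column of an event table is cut into fixed-size horizontal stripes that can be reduced independently and combined afterwards.

```python
import numpy as np

def to_stripes(columns, stripe_size):
    """Split each column array into a list of contiguous horizontal stripes."""
    n = len(next(iter(columns.values())))
    return {
        name: [arr[i:i + stripe_size] for i in range(0, n, stripe_size)]
        for name, arr in columns.items()
    }

# A tiny columnar "event table": one NumPy array per attribute.
events = {
    "pt":  np.array([10.5, 22.1, 7.3, 41.0, 15.2, 9.9]),
    "eta": np.array([0.1, -1.2, 2.3, 0.7, -0.4, 1.8]),
}

stripes = to_stripes(events, stripe_size=2)

# Each stripe of a column can be reduced on a separate worker and the
# partial results combined -- parallelism without special hardware.
partial_sums = [s.sum() for s in stripes["pt"]]
total = sum(partial_sums)
assert np.isclose(total, events["pt"].sum())
```

A query touching only `pt` never reads the `eta` stripes, which is the columnar benefit the abstract starts from; the horizontal split adds the parallelism.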
3

Vakali, Athena, and Evimaria Terzi. "Multimedia data storage and representation issues on tertiary storage subsystems." ACM SIGOPS Operating Systems Review 35, no. 2 (April 2001): 61–77. http://dx.doi.org/10.1145/377069.377087.

4

Cimino, James J. "Data storage and knowledge representation for clinical workstations." International Journal of Bio-Medical Computing 34, no. 1-4 (January 1994): 185–94. http://dx.doi.org/10.1016/0020-7101(94)90021-3.

5

Sheikhizadeh, Siavash, M. Eric Schranz, Mehmet Akdel, Dick de Ridder, and Sandra Smit. "PanTools: representation, storage and exploration of pan-genomic data." Bioinformatics 32, no. 17 (September 1, 2016): i487–i493. http://dx.doi.org/10.1093/bioinformatics/btw455.

6

Fischer, Felix, M. Alper Selver, Sinem Gezer, Oğuz Dicle, and Walter Hillen. "Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data." Journal of Medical and Biological Engineering 35, no. 6 (November 18, 2015): 709–23. http://dx.doi.org/10.1007/s40846-015-0097-5.

7

Li, Yuzhen, Jianming Lu, Jihong Guan, Mingying Fan, Ayman Haggag, and Takashi Yahagi. "GML Topology Data Storage Schema Design." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (July 20, 2007): 701–8. http://dx.doi.org/10.20965/jaciii.2007.p0701.

Abstract:
Geography Markup Language (GML) was developed to standardize the representation of geographical data in Extensible Markup Language (XML), which facilitates geographical information exchange and sharing. Increasing amounts of geographical data are being presented in GML as its use widens, raising the question of how to store GML data efficiently to facilitate its management and retrieval. We analyze topology data in GML and propose storing the nonspatial and spatial data from GML documents in spatial databases (e.g., Oracle Spatial, DB2 Spatial, and PostgreSQL/PostGIS). We then use an example to analyze the topology relations.
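The first step of the approach the abstract describes, separating a GML document's spatial geometry from its nonspatial attributes before loading each into the appropriate database column, can be sketched with the standard library. The `Road` fragment and its element names below are a made-up minimal example, not a real GML application schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical GML fragment: one nonspatial attribute (name) and one
# geometry (a LineString with three coordinate pairs).
gml = """
<Road xmlns:gml="http://www.opengis.net/gml">
  <name>Main Street</name>
  <gml:LineString>
    <gml:coordinates>0,0 10,5 20,5</gml:coordinates>
  </gml:LineString>
</Road>
"""

ns = {"gml": "http://www.opengis.net/gml"}
root = ET.fromstring(gml)

# Nonspatial data would go into ordinary relational columns...
nonspatial = {"name": root.findtext("name")}

# ...while the coordinate list would be stored in a spatial geometry column.
coords = root.find(".//gml:coordinates", ns).text.split()
spatial = [tuple(float(v) for v in pair.split(",")) for pair in coords]

assert nonspatial["name"] == "Main Street"
assert spatial == [(0.0, 0.0), (10.0, 5.0), (20.0, 5.0)]
```

In a real schema the `spatial` list would be converted to the target database's geometry type (e.g. a PostGIS `LINESTRING`) rather than kept as Python tuples.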
8

Lee, Sang Hun, and Kunwoo Lee. "Partial Entity Structure: A Compact Boundary Representation for Non-Manifold Geometric Modeling." Journal of Computing and Information Science in Engineering 1, no. 4 (November 1, 2001): 356–65. http://dx.doi.org/10.1115/1.1433486.

Abstract:
Non-manifold boundary representations have become very popular in recent years and various representation schemes have been proposed, as they represent a wider range of objects, for various applications, than conventional manifold representations. As these schemes mainly focus on describing sufficient adjacency relationships of topological entities, the models represented in these schemes occupy storage space redundantly, although they are very efficient in answering queries on topological adjacency relationships. To solve this problem, in this paper, we propose a compact as well as fast non-manifold boundary representation, called the partial entity structure. This representation reduces the storage size to half that of the radial edge structure, which is one of the most popular and efficient of existing data structures, while allowing full topological adjacency relationships to be derived without loss of efficiency. In order to verify the time and storage efficiency of the partial entity structure, the time complexity of basic query procedures and the storage requirement for typical geometric models are derived and compared with those of existing schemes.
9

Kumar, Randhir, and Rakesh Tripathi. "Data Provenance and Access Control Rules for Ownership Transfer Using Blockchain." International Journal of Information Security and Privacy 15, no. 2 (April 2021): 87–112. http://dx.doi.org/10.4018/ijisp.2021040105.

Abstract:
Provenance provides information about how data came to be in its present state. Recently, many critical applications have been working with data provenance and provenance security. The main challenges in provenance-based applications are storage representation, provenance security, and the centralized approach. In this paper, the authors propose a secure trading framework based on blockchain techniques, whose decentralization, immutability, and integrity features address the trust crisis in centralized provenance-based systems. To address the storage representation of data provenance, they propose a JavaScript Object Notation (JSON) structure; to improve provenance security, they propose access control language (ACL) rules. The JSON structure and ACL rules are implemented with the permissioned-blockchain tool Hyperledger Composer. They demonstrate that their framework minimizes execution time as the number of transactions increases, in terms of both storage representation of data provenance and security.
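The shape of a JSON provenance record of the kind the abstract proposes can be sketched as follows. The field names (`assetId`, `prevHash`, and so on) are illustrative assumptions, not the authors' schema; hash-chaining the records conveys the immutability idea that the blockchain provides in the real framework:

```python
import json
import hashlib

def make_record(asset_id, owner, action, prev_hash):
    """Build a JSON-serializable provenance record linked to its predecessor."""
    record = {
        "assetId": asset_id,
        "owner": owner,
        "action": action,
        "prevHash": prev_hash,  # links records into a tamper-evident chain
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

genesis = make_record("asset-1", "alice", "create", prev_hash="0" * 64)
transfer = make_record("asset-1", "bob", "transfer", prev_hash=genesis["hash"])

# Any change to an earlier record changes its hash and so invalidates
# every later record in the chain.
assert transfer["prevHash"] == genesis["hash"]
```

In the paper's setting the ACL rules would additionally decide who may append such a transfer record; that layer is omitted here.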
10

Leng, Yonglin, Zhikui Chen, and Yueming Hu. "STLIS: A Scalable Two-Level Index Scheme for Big Data in IoT." Mobile Information Systems 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/5341797.

Abstract:
The rapid development of the Internet of Things causes the dramatic growth of data, which poses an important challenge on the storage and quick retrieval of big data. As an effective representation model, RDF receives the most attention. More and more storage and index schemes have been developed for RDF model. For the large-scale RDF data, most of them suffer from a large number of self-joins, high storage cost, and many intermediate results. In this paper, we propose a scalable two-level index scheme (STLIS) for RDF data. In the first level, we devise a compressed path template tree (CPTT) index based on S-tree to retrieve the candidate sets of full path. In the second level, we create a hierarchical edge index (HEI) and a node-predicate (NP) index to accelerate the match. Extensive experiments are executed on two representative RDF benchmarks and one real RDF dataset in IoT by comparison with three representative index schemes, that is, RDF-3X, Bitmat, and TripleBit. Results demonstrate that our proposed scheme can respond to the complex query in real time and save much storage space compared with RDF-3X and Bitmat.
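One ingredient of the two-level scheme, the node-predicate (NP) index, can be illustrated with a toy sketch: mapping each subject node to the set of predicates leaving it lets a query filter candidate nodes before any expensive join. The triples and query below are invented for illustration, not the STLIS implementation:

```python
from collections import defaultdict

# A tiny IoT-flavored RDF dataset as (subject, predicate, object) triples.
triples = [
    ("sensor1", "hasType", "temperature"),
    ("sensor1", "locatedIn", "room12"),
    ("sensor2", "hasType", "humidity"),
    ("sensor2", "locatedIn", "room12"),
    ("room12",  "partOf",   "building3"),
]

# Node-predicate index: subject -> set of outgoing predicates.
np_index = defaultdict(set)
for s, p, o in triples:
    np_index[s].add(p)

# Query: which nodes have both a type and a location? The NP index
# answers this without scanning or self-joining the full triple table.
candidates = [n for n, preds in np_index.items()
              if {"hasType", "locatedIn"} <= preds]
assert sorted(candidates) == ["sensor1", "sensor2"]
```

This is exactly the pruning role the abstract assigns to the second index level: cheap candidate filtering that avoids the self-joins it criticizes in earlier schemes.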

Dissertations / Theses on the topic "Data storage representation"

1

Ugail, Hassan, and Eyad Elyan. "Efficient 3D data representation for biometric applications." IOS Press, 2007. http://hdl.handle.net/10454/2683.

Abstract:
An important issue in many of today's biometric applications is the development of efficient and accurate techniques for representing related 3D data. Such data is often available through the digitization of complex geometric objects that are of importance to biometric applications. For example, in the area of 3D face recognition, a digital point cloud of data corresponding to a given face is usually provided by a 3D digital scanner. For efficient data storage, and for identification/authentication in a timely fashion, such data needs to be represented using a few meaningful parameters or variables. Here we show how mathematical techniques based on Partial Differential Equations (PDEs) can be utilized to represent complex 3D data so that the data can be parameterized in an efficient way. For example, in the case of a 3D face, we show how it can be represented using PDEs whereby a handful of key facial parameters can be identified for efficient storage and verification.
2

Folmer, Brennan Thomas. "Metadata storage for file management systems: data storage and representation techniques for a file management system." [Gainesville, Fla.]: University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001141.

3

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234245.

Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition.
In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems in the form of the Resolution Bound using Particle Cells, with an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed that the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both regarding the APR's structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data. This is done using both synthetic and LSFM exemplar data. It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and address bottlenecks in processing LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data.
Further, we propose that the simple structure of the general APR could provide benefits in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
4

VanCalcar, Jenny E. (Jenny Elizabeth). "Collection and representation of GIS data to aid household water treatment and safe storage technology implementation in the northern region of Ghana." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34583.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2006.
Includes bibliographical references (leaves 46-51).
In 2005, a start-up social business called Pure Home Water (PHW) was begun in Ghana to promote and sell household water treatment and safe storage (HWTS) technologies. The original aim of the company was to offer a variety of products, allowing customers to choose the technology which best fit their individual needs. This differed from the typical implementation of HWTS promoters to date, in which an organization often distributes a single technology for the population to use. Instead, Pure Home Water wanted to give users a choice. PHW is also unique because they are attempting to sell their products without any subsidy. The goal is to create a sustainable business that will both bring better quality water to the population and be financially self-supporting. Because the company is new, a need existed to gather data on the demographic, health, and water and sanitation infrastructure within the region. Due to the geographic nature of the project, it was decided that a Geographic Information System (GIS) would be the best tool to store, analyze and represent the data.
The system could be used to help plan relevant business strategies, and maps could be created to visually communicate important information among the Pure Home Water team and other interested parties. The final database did achieve the goal of collecting and bringing together important regional information in a form hopefully useful to PHW, future MIT teams and others. However, the use of the database for long-term planning is currently too advanced for the small company.
5

Elyan, Eyad, and Hassan Ugail. "Reconstruction of 3D human facial images using partial differential equations." Academy Publisher, 2007. http://hdl.handle.net/10454/2644.

Abstract:
One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality, facial expression simulation, and computer-based plastic surgery. This paper addresses a method for constructing the 3D geometry of human faces based on the use of Elliptic Partial Differential Equations (PDEs). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained using a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we show the efficiency of the PDE-based method for 3D facial surface reconstruction using scan data. In addition, we show that our approach provides an efficient way of representing faces using a small set of parameters that could be utilized for efficient facial data storage and verification purposes.
6

Fang, Cheng-Hung. "Application for data mining in manufacturing databases." Ohio: Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178653424.

7

Munalula, Themba. "Measuring the applicability of Open Data Standards to a single distributed organisation: an application to the COMESA Secretariat." Thesis, University of Cape Town, 2008. http://pubs.cs.uct.ac.za/archive/00000461/.

Abstract:
Open data standardization has many known benefits, including the availability of tools for standard encoding formats, interoperability among systems, and long-term preservation of data. Mark-up languages and their use on the World Wide Web have implied further ease for data sharing. The Extensible Markup Language (XML), in particular, has succeeded due to its simplicity and ease of use. Its primary purpose is to facilitate the sharing of data across different information systems, particularly systems connected via the Internet. Whether open and standardized or not, organizations generate data daily. Offline exchange of documents and data is undertaken using existing formats that are typically defined by the organizations that generate the data in the documents. With the Internet, the realization of data exchange has had a direct implication on the need for interoperability and comparability. As much as standardization is the accepted approach for online data exchange, little is understood about how a specific organization's data "fits" a given data standard. This dissertation develops data metrics that represent the extent to which data standards can be applied to an organization's data. The research identified key issues that affect data interoperability or the feasibility of a move towards interoperability. This research tested the unwritten rule that organizational setups tend to regard and design data requirements more from internal needs than interoperability needs. Essentially, by generating metrics that affect a number of data attributes, the research quantified the extent of the gap that exists between organizational data and data standards. Key data attributes, i.e., completeness, concise representation, relevance, and complexity, were selected and used as the basis for metric generation. In addition to the generation of attribute-based metrics, hybrid metrics representing a measure of the "goodness of fit" of the source data to standard data were generated.
Regarding the completeness attribute, it was found that most Common Market for Eastern and Southern Africa (COMESA) head office data clusters had lower than desired metrics to match the gap highlighted above. The same applied to the concise representation attribute. Most data clusters had more concise representation for the COMESA data than the data standard. The complexity metrics generated confirmed the fact that the number of data elements is a key determinant in any move towards the adoption of data standards. This fact was also borne out by the magnitude of the hybrid metrics which to some extent depended on the complexity metrics. An additional contribution of the research was the inclusion of expert users’ weights to the data elements and recalculation of all metrics. A comparison with the unweighted metrics yielded a mixed picture. Among the completeness metrics and for the data retention rate in particular, increases were recorded for data clusters for which greater weight was allocated to mapped elements than to those that were not mapped. The same applied to the relative elements ratio. The complexity metrics showed general declines when user-weighted elements were used in the computation as opposed to the unweighted elements. This again was due to the fact that these metrics are dependent on the number of elements. Hence for the former case, the weights were evenly distributed while for the latter case, some elements were given lower weights by the expert users, hence leading to an overall decline in the metric. A number of implications emerged for COMESA. COMESA would have to determine the extent to which its source data rely on data sources for which international standards are being promoted. Secondly, an inventory of users and collectors of the data COMESA uses is necessary in order to determine who would be the beneficiary of a standards-based information system. 
Thirdly, and from an organizational perspective, COMESA needs to designate a team to guide the process of creation of such a standards-based information system. Lastly there is need for involvement in consortia that are responsible for these data standards. This has an implication on organizational resources. In totality, this research provided a methodology for determination of the feasibility of a move towards standardization and hence makes it possible to answer the critical first stage questions such a move begs answers to.
8

Munyaradzi, Ngoni. "Transcription of the Bleek and Lloyd Collection using the Bossa Volunteer Thinking Framework." Thesis, University of Cape Town, 2013. http://pubs.cs.uct.ac.za/archive/00000913/.

Abstract:
The digital Bleek and Lloyd Collection is a rare collection that contains artwork, notebooks and dictionaries of the earliest inhabitants of Southern Africa. Previous attempts have been made to recognize the complex text in the notebooks using machine learning techniques, but due to the complexity of the manuscripts the recognition accuracy was low. In this research, a crowdsourcing-based method is proposed to transcribe the historical handwritten manuscripts, where volunteers transcribe the notebooks online. An online crowdsourcing transcription tool was developed and deployed. Experiments were conducted to determine the quality of transcriptions and the accuracy of the volunteers compared with a gold standard. The results show that volunteers are able to produce reliable transcriptions of high quality. The inter-transcriber agreement is 80% for |Xam text and 95% for English text. When the |Xam text transcriptions produced by the volunteers are compared with the gold standard, the volunteers achieve an average accuracy of 69.69%. Findings show that there exists a positive linear correlation between the inter-transcriber agreement and the accuracy of transcriptions. The user survey revealed that volunteers found the transcription process enjoyable, though it was difficult. Results indicate that volunteer thinking can be used to crowdsource intellectually-intensive tasks in digital libraries like transcription of handwritten manuscripts. Volunteer thinking outperforms machine learning techniques at the task of transcribing notebooks from the Bleek and Lloyd Collection.
9

Wang, Yue. "Data Representation for Efficient and Reliable Storage in Flash Memories." Thesis, 2013. http://hdl.handle.net/1969.1/149536.

Abstract:
Recent years have witnessed a proliferation of flash memories as an emerging storage technology with wide applications in many important areas. Like magnetic recording and optical recording, flash memories have their own distinct properties and usage environment, which introduce very interesting new challenges for data storage. They include accurate programming without overshooting, error correction, reliably writing data to flash memories under low voltages, and file recovery for flash memories. Solutions to these problems can significantly improve the longevity and performance of storage systems based on flash memories. In this work, we explore several new data representation techniques for efficient and reliable data storage in flash memories. First, we present a new data representation scheme, rank modulation with multiplicity, to eliminate the overshooting and charge-leakage problems for flash memories. Next, we study the Half-Wits, the stochastic behavior of writing data to embedded flash memories at voltages lower than recommended by a microcontroller's specifications, and propose three software-only algorithms that enable reliable storage at low voltages without modifying hardware, which can reduce energy consumption by 30%. Then, we address the file erasure recovery problem in flash memories. Instead of only using traditional error-correcting codes, we design a new content-assisted decoder (CAD) to recover text files. The new CAD can be combined with existing error-correcting codes, and the experimental results show that the CAD outperforms the traditional error-correcting codes.
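The core idea behind rank modulation, sketched here without the thesis's "multiplicity" extension, is that information is carried by the relative ranking of cell charges rather than by their absolute levels, so a symbol survives uniform charge leakage. The function below is an illustrative toy, not the author's scheme:

```python
def read_permutation(charges):
    """Decode a stored symbol: the permutation of cell indices obtained
    by sorting the cells from highest to lowest charge."""
    return tuple(sorted(range(len(charges)), key=lambda i: -charges[i]))

# Two reads of the same four cells. Absolute levels differ because every
# cell has lost some charge, but the ranking -- and hence the decoded
# symbol -- is unchanged.
fresh  = [3.2, 1.1, 2.6, 0.4]
leaked = [2.9, 0.9, 2.3, 0.2]

assert read_permutation(fresh) == read_permutation(leaked) == (0, 2, 1, 3)
```

Writing is similarly overshoot-tolerant: to store a permutation one only needs to push each cell above the next-ranked cell, never to hit an exact target level.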
10

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A30873.

Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluo- rescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y); we call the Resolution Bound. The Resolution Bound relates the R∗(y) to a local maximum of the absolute value function derivatives within a distance R∗(y) or y. Given restric- tions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. 
In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function to general problems in the form of the Resolution Bound using Particle Cells using an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to rep- resent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed the Implied Resolution Func- tion satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E), at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and rep- resents a unique trade off between the level of adaptation of the representation and simplicity. Both regarding the APRs structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data. This is done using both synthetic and LSFM exemplar data. It is concluded from these results that the APR has the correct properties to provide a replacement of pixel images and address bottlenecks in processing for LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data. 
Further, we propose that the simple structure of the general APR could provide benefits in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation in computer graphics.
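The refinement idea described in this abstract, cells chosen so that a weighted reconstruction error stays below a threshold E, can be illustrated with a minimal 1D sketch. This toy bisection is not the thesis's Pulling Scheme, and the test function, Local Intensity Scale, and threshold below are invented purely for illustration:

```python
import numpy as np

def adapt_1d(f, sigma, E, lo, hi, cells):
    """Recursively split [lo, hi) until the piecewise-constant
    reconstruction satisfies |f - f_hat| / sigma <= E (a toy
    Reconstruction Condition), then store one particle per cell."""
    f_hat = f[lo:hi].mean()                       # particle value for this cell
    if hi - lo == 1 or (np.abs(f[lo:hi] - f_hat) / sigma[lo:hi]).max() <= E:
        cells.append((lo, hi, f_hat))
        return
    mid = (lo + hi) // 2                          # refine: halve the cell
    adapt_1d(f, sigma, E, lo, mid, cells)
    adapt_1d(f, sigma, E, mid, hi, cells)

y = np.linspace(0.0, 1.0, 256)
f = np.exp(-((y - 0.5) ** 2) / 0.001)             # sharp feature at y = 0.5
sigma = np.maximum(np.abs(f), 0.1)                # toy Local Intensity Scale
cells = []
adapt_1d(f, sigma, E=0.05, lo=0, hi=len(f), cells=cells)
print(f"{len(cells)} cells instead of {len(f)} pixels")
```

Flat regions far from the feature end up covered by a few large cells, while resolution concentrates around y = 0.5, which is the memory benefit the abstract describes.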
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Data storage representation"

1

Thompson, Rodney James. Towards a rigorous logic for spatial data representation. Delft: Netherlands Geodetic Commission, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Ying. Video Content Analysis Using Multimodal Information: For Movie Content Extraction, Indexing and Representation. Boston, MA: Springer US, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

David, Hutchison. Transactions on Computational Science V: Special Issue on Cognitive Knowledge Representation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kutyniok, Gitta. Shearlets: Multiscale Analysis for Multivariate Data. Boston: Birkhäuser Boston, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Crestani, Fabio. Information Retrieval: Uncertainty and Logics: Advanced Models for the Representation and Retrieval of Information. Boston, MA: Springer US, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Emilio, Maurizio Di Paolo. Data Acquisition Systems: From Fundamentals to Applied Design. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schweighofer, Erich. Legal knowledge representation: Automatic text analysis in public international and European law. The Hague: Kluwer Law International, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Josef, Küng, Wagner Roland, and SpringerLink (Online service), eds. Transactions on Large-Scale Data- and Knowledge-Centered Systems V. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Riaño, David. Knowledge Representation for Health-Care: ECAI 2010 Workshop KR4HC 2010, Lisbon, Portugal, August 17, 2010, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Armando, Escalante, and SpringerLink (Online service), eds. Handbook of Data Intensive Computing. New York, NY: Springer Science+Business Media, LLC, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Data storage representation"

1

Smith, William A. P. "3D Data Representation, Storage and Processing." In 3D Imaging, Analysis and Applications, 265–316. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44070-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Brazma, Alvis, Ugis Sarkans, Alan Robinson, Jaak Vilo, Martin Vingron, Jörg Hoheisel, and Kurt Fellenberg. "Microarray Data Representation, Annotation and Storage." In Advances in Biochemical Engineering/Biotechnology, 113–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45713-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Terletskyi, Dmytro. "Object-Oriented Knowledge Representation and Data Storage Using Inhomogeneous Classes." In Communications in Computer and Information Science, 48–61. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67642-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hoque, Abu Sayed M. Latiful. "Storage and Querying of High Dimensional Sparsely Populated Data in Compressed Representation." In Lecture Notes in Computer Science, 418–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36087-5_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Purbey, Suniti, and Brijesh Khandelwal. "Analyzing Frameworks for IoT Data Storage, Representation and Analysis: A Statistical Perspective." In Lecture Notes in Networks and Systems, 472–88. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84760-9_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Paul, Razan, and Abu Sayed Md Latiful Hoque. "Optimized Column-Oriented Model: A Storage and Search Efficient Representation of Medical Data." In Information Technology in Bio- and Medical Informatics, ITBAM 2010, 118–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15020-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Quadrio, Bruno, Fabrizio Bramerini, Sergio Castenetto, and Giuseppe Naso. "A New Step for Seismic Microzonation Studies in Italy: Standards for Data Storage and Representation." In Engineering Geology for Society and Territory - Volume 5, 1169–72. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09048-1_223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wycislik, Lukasz. "Storage Efficiency of LOB Structures for Free RDBMSs on Example of PostgreSQL and Oracle Platforms." In Beyond Databases, Architectures and Structures. Towards Efficient Solutions for Data Analysis and Knowledge Representation, 212–23. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58274-0_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tolovski, Ilin, Sašo Džeroski, and Panče Panov. "Semantic Annotation of Predictive Modelling Experiments." In Discovery Science, 124–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_9.

Full text
Abstract:
In this paper, we address the task of representation, semantic annotation, storage, and querying of predictive modelling experiments. We introduce OntoExp, an OntoDM module which gives a more granular representation of a predictive modelling experiment and enables annotation of the experiment's provenance, algorithm implementations, parameter settings, and output metrics. This module is incorporated in SemanticHub, an online system that allows execution, annotation, storage, and querying of predictive modelling experiments. The system offers two different user scenarios: users can either define their own experiment and execute it, or browse the repository of completed experimental workflows across different predictive modelling tasks. Here, we showcase the capabilities of the system by executing a multi-target regression experiment on a water quality prediction dataset using the Clus software. The system and the created repositories are evaluated against the FAIR data stewardship guidelines. The evaluation shows that OntoExp and SemanticHub provide the infrastructure needed for semantic annotation, execution, storage, and querying of the experiments.
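The kind of annotation this abstract describes can be pictured as subject-predicate-object triples over an experiment ontology. The vocabulary below is entirely hypothetical (OntoExp's real terms are defined in the paper); the sketch only shows how triple-shaped annotations make experiments queryable:

```python
# Hypothetical vocabulary; OntoExp's actual terms are defined in the paper.
triples = {
    ("exp:001", "rdf:type", "ontoexp:PredictiveModellingExperiment"),
    ("exp:001", "ontoexp:task", "multi-target regression"),
    ("exp:001", "ontoexp:dataset", "water quality"),
    ("exp:001", "ontoexp:software", "Clus"),
    ("exp:001", "ontoexp:outputMetric", "aRRMSE"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

print(query(p="ontoexp:task"))  # which task does each annotated experiment solve?
```

A real system would store such triples in an RDF store and query them with SPARQL; the wildcard-matching function here stands in for that machinery.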
APA, Harvard, Vancouver, ISO, and other styles
10

Paradies, Marcus, and Hannes Voigt. "Graph Representations and Storage." In Encyclopedia of Big Data Technologies, 1–7. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_211-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data storage representation"

1

Paul, Razan, and Abu Sayed Md Latiful Hoque. "A storage & search efficient representation of medical data." In 2010 International Conference on Bioinformatics and Biomedical Technology. IEEE, 2010. http://dx.doi.org/10.1109/icbbt.2010.5478926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tian, Yuan, Scott Klasky, Weikuan Yu, Bin Wang, Hasan Abbasi, Norbert Podhorszki, and Ray Grout. "DynaM: Dynamic Multiresolution Data Representation for Large-Scale Scientific Analysis." In 2013 IEEE 8th International Conference on Networking, Architecture, and Storage (NAS). IEEE, 2013. http://dx.doi.org/10.1109/nas.2013.21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cevallos, Yesenia, Luis Tello-Oquendo, Deysi Inca, Nicolay Samaniego, Ivone Santillán, Amin Zadeh Shirazi, and Guillermo A. Gomez. "On the efficient digital code representation in DNA-based data storage." In NANOCOM '20: The Seventh Annual ACM International Conference on Nanoscale Computing and Communication. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3411295.3411314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Klisura, Ðorže. "Embedding Non-planar Graphs: Storage and Representation." In 7th Student Computer Science Research Conference. University of Maribor Press, 2021. http://dx.doi.org/10.18690/978-961-286-516-0.13.

Full text
Abstract:
In this paper, we propose a convention for representing non-planar graphs and their least-crossing embeddings in a canonical way. We achieve this by using state-of-the-art tools such as canonical labelling of graphs, Nauty's Graph6 string, and combinatorial representations for planar graphs. To the best of our knowledge, this has not been done before. Besides, we implement the mentioned procedure in the SageMath language and compute embeddings for certain classes of cubic, vertex-transitive, and general graphs. Our main contribution is an extension of one of the graph data sets hosted on MathDataHub, and a step towards extending the SageMath codebase.
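Canonical labelling, the tool at the heart of this abstract, gives isomorphic graphs an identical representation. Nauty does this efficiently; the brute-force sketch below (usable only for tiny graphs) conveys the idea by taking the lexicographically smallest relabelling:

```python
from itertools import permutations

def canonical_form(n, edges):
    """Brute-force canonical labelling: relabel vertices by every
    permutation and keep the lexicographically smallest edge list.
    (Nauty achieves the same canonicity without enumerating n!.)"""
    edge_set = {frozenset(e) for e in edges}
    best = None
    for p in permutations(range(n)):
        relabelled = sorted(tuple(sorted((p[u], p[v]))) for u, v in edge_set)
        if best is None or relabelled < best:
            best = relabelled
    return tuple(best)

# Two different labellings of the same 4-cycle:
g1 = canonical_form(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
g2 = canonical_form(4, [(0, 2), (2, 1), (1, 3), (3, 0)])
print(g1 == g2)  # isomorphic graphs get identical canonical forms
```

Serializing the canonical edge list (e.g. as a Graph6 string) then yields the storage convention the paper builds on.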
APA, Harvard, Vancouver, ISO, and other styles
5

Farias, Humberto, Mauricio Solar, and Camilo Núñez. "Tensor representation, constrain (storage) and processing of multidimensional astronomical data over intense computing support." In Software and Cyberinfrastructure for Astronomy V, edited by Juan C. Guzman and Jorge Ibsen. SPIE, 2018. http://dx.doi.org/10.1117/12.2313222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kern, Daniel, and Anna Thornton. "Structured Indexing of Process Capability Data." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/dfm-34180.

Full text
Abstract:
Process capability data can aid design by ensuring that part tolerances are achievable with current manufacturing capability. Many companies want to store process capability data in databases to make it available to all engineers. The success of a process capability database is highly dependent on the design of its structure and on a method for indexing data for ease of input and retrieval. In this paper, the authors describe a new method of representing characteristics of a manufactured component using the attributes of feature, geometry, material, and process. This representation enables better storage and retrieval of process capability data. In addition, the authors describe a method for rapidly and robustly indexing components’ characteristics for entry into a process capability database.
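The four-attribute indexing scheme described in this abstract can be sketched as a composite lookup key. The field names and values below are illustrative, not the paper's actual schema:

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass(frozen=True)
class CharacteristicKey:
    """Index a component characteristic by the four attributes
    the paper proposes (names here are illustrative)."""
    feature: str     # e.g. "hole"
    geometry: str    # e.g. "diameter"
    material: str    # e.g. "aluminium-6061"
    process: str     # e.g. "CNC-drilling"

class ProcessCapabilityDB:
    def __init__(self):
        self._data = defaultdict(list)

    def add(self, key: CharacteristicKey, cpk: float):
        self._data[key].append(cpk)   # record one observed capability value

    def lookup(self, key: CharacteristicKey):
        """Return the mean Cpk for a characteristic, or None if no data."""
        values = self._data.get(key)
        return mean(values) if values else None

db = ProcessCapabilityDB()
k = CharacteristicKey("hole", "diameter", "aluminium-6061", "CNC-drilling")
db.add(k, 1.33)
db.add(k, 1.45)
print(db.lookup(k))  # aggregate capability for this characteristic
```

Because the key is a frozen dataclass, the same four attributes always hash to the same bucket, which is what makes both input and retrieval systematic.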
APA, Harvard, Vancouver, ISO, and other styles
7

Syvertson, D. I. "Storage Data Collection, Representation, and Analysis and How They Can Interface With Planning, Remedial Work, and Operations." In SPE Gas Technology Symposium. Society of Petroleum Engineers, 1989. http://dx.doi.org/10.2118/19087-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hauenstein, Jacob. "A Method to Compactly Store Scrambled Data Alongside Standard Unscrambled Disc Images of CD-ROMs." In 12th International Conference on Computer Science and Information Technology (CCSIT 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121318.

Full text
Abstract:
When archiving and preserving CD-ROM discs, data sectors are often read in a so-called “scrambled mode” before being unscrambled and further processed into a standard disc image. Processing of scrambled data into a standard disc image is potentially lossy, but standard disc images exhibit greater software compatibility and usability compared to scrambled data. Consequently, for preservation purposes, it is often advantageous to store both the scrambled data and the corresponding standard disc image, resulting in high storage demands. Here, a method that enables compact storage of scrambled data alongside the corresponding (unscrambled) standard CD-ROM disc image is introduced. The method produces a compact representation of the scrambled data that is derived from the standard disc image. It allows for storage of the standard unscrambled disc image in unmodified form, easy reconstruction of the scrambled data, and substantial space savings compared to standard data compression techniques.
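The abstract does not give the method's details, but the underlying intuition can be sketched. CD-ROM scrambling (ECMA-130, Annex B) XORs sector bytes with a fixed keystream from a 15-bit LFSR (polynomial x^15 + x + 1, seeded with 1), so scrambled data predicted from the standard image differs from the data actually read off the disc only where unscrambling was lossy, and that XOR residual compresses extremely well. The sketch applies the scrambler from byte 0 of a toy payload for simplicity (real sectors scramble bytes 12-2351):

```python
import zlib

def lfsr_keystream(n: int) -> bytes:
    """ECMA-130 Annex B scrambler keystream: 15-bit LFSR, x^15 + x + 1, seed 1."""
    reg, out = 1, bytearray()
    for _ in range(n):
        out.append(reg & 0xFF)
        for _ in range(8):
            hibit = ((reg & 1) ^ ((reg >> 1) & 1)) << 15
            reg = (hibit | reg) >> 1
    return bytes(out)

def scramble(sector: bytes) -> bytes:
    ks = lfsr_keystream(len(sector))
    return bytes(a ^ b for a, b in zip(sector, ks))

# Scrambling is an XOR with a fixed keystream, so it is its own inverse:
sector = bytes(range(256)) * 8                  # toy 2048-byte payload
assert scramble(scramble(sector)) == sector

# Compact idea: store only the XOR residual between the scrambled data
# read from disc and what scrambling the standard image predicts; for
# well-behaved sectors it is all zeros and compresses to almost nothing.
read_from_disc = scramble(sector)               # pretend this came off the disc
residual = bytes(a ^ b for a, b in zip(read_from_disc, scramble(sector)))
print(len(zlib.compress(residual)), "bytes to store instead of", len(residual))
```

This is only one plausible reading of "a compact representation derived from the standard disc image"; the paper's actual encoding may differ.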
APA, Harvard, Vancouver, ISO, and other styles
9

Viscaino - Quito, Andres, and Luis Serpa-Andrade. "Development of a data collection system in the cloud as a storage method." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002529.

Full text
Abstract:
Data, as quantitative or qualitative representations by which symbolic or numerical values are indicated, are used in many fields, and data collection processes are designed around the proposed objectives. We therefore developed a data collection process based on a mobile application and web services, which will be used, with the help of an expert in the area, to evaluate and analyze the graphomotor skills of children aged 6 to 8 years with motor disabilities. The results obtained at this stage will generate a large amount of information that will later be used to define play activities addressing the skills or deficiencies identified once the evaluation stage is completed. It will also make it possible to implement or improve a more complete and robust database that will serve not only graphomotor work but also other areas involving these children.
APA, Harvard, Vancouver, ISO, and other styles
10

Volovich, K., S. Denisov, and V. Kondrashev. "DATA PROCESSING NETWORK ARCHITECTURE FOR PARALLEL COMPUTING IN A HIGH-PERFORMANCE COMPLEX FOR MATERIALS SCIENCE PROBLEMS." In Mathematical modeling in materials science of electronic component. LCC MAKS Press, 2022. http://dx.doi.org/10.29003/m3061.mmmsec-2022/30-36.

Full text
Abstract:
The paper considers the architecture of building parallel data storage systems in high-performance computing systems. The features of data representation in parallel file systems for applied problems of materials science are analyzed.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Data storage representation"

1

Mehmood, Hamid, Surya Karthik Mukkavilli, Ingmar Weber, Atsushi Koshio, Chinaporn Meechaiya, Thanapon Piman, Kenneth Mubea, Cecilia Tortajada, Kimberly Mahadeo, and Danielle Liao. Strategic Foresight to Applications of Artificial Intelligence to Achieve Water-related Sustainable Development Goals. United Nations University Institute for Water, Environment and Health, April 2020. http://dx.doi.org/10.53328/lotc2968.

Full text
Abstract:
The report recommends that: 1) Policymakers should conduct holistic assessments of social, economic, and cultural factors before AI adoption in the water sector, as prospective applications of AI are case-specific. It is also important to conduct baseline studies to measure the implementation capacity, return on investment, and impact of intervention. 2) To ensure positive development outcomes, policies regarding the use of AI for water-related challenges should be coupled with capacity and infrastructure development policies. Capacity development policies need to address the AI and Information and Communications Technology (ICT) needs for the AI-related skill development of all water-related stakeholders. Infrastructure development policies should address the underlying requirements of computation, energy, data generation, and storage. The sequencing of these policies is critical. 3) To mitigate the predicted job displacement that will accompany AI-led innovation in the water sector, policies should direct investments towards enabling a skilled workforce by developing water sector-related education at all levels. This skilled workforce should be strategically placed to offset dependency on the private sector. 4) Water-related challenges are cross-cutting, running from grassroots to the global level, and require an understanding of the water ecosystem. It is important for countries connected by major rivers and watersheds to collaborate in developing policies that advance the use of AI to address common water-related challenges. 5) A council or agency with representation from all stakeholders should be constituted at the national level, to allow for the successful adoption of AI by water agencies. This council or agency should be tasked with the development of policies, guidelines, and codes of conduct for the adoption of AI in the water sector.
These key policy recommendations can be used as primary guidelines for the development of strategies and plans to use AI to help achieve water-related SDGs.
APA, Harvard, Vancouver, ISO, and other styles
2

McPhedran, R., K. Patel, B. Toombs, P. Menon, M. Patel, J. Disson, K. Porter, A. John, and A. Rayner. Food allergen communication in businesses feasibility trial. Food Standards Agency, March 2021. http://dx.doi.org/10.46756/sci.fsa.tpf160.

Full text
Abstract:
Background: Clear allergen communication in food business operators (FBOs) has been shown to have a positive impact on customers’ perceptions of businesses (Barnett et al., 2013). However, the precise size and nature of this effect is not known: there is a paucity of quantitative evidence in this area, particularly in the form of randomised controlled trials (RCTs). The Food Standards Agency (FSA), in collaboration with Kantar’s Behavioural Practice, conducted a feasibility trial to investigate whether a randomised cluster trial – involving the proactive communication of allergen information at the point of sale in FBOs – is feasible in the United Kingdom (UK). Objectives: The trial sought to establish: ease of recruitment of businesses into trials; customer response rates for in-store outcome surveys; fidelity of intervention delivery by FBO staff; sensitivity of outcome survey measures to change; and appropriateness of the chosen analytical approach. Method: Following a recruitment phase – in which one of fourteen multinational FBOs was successfully recruited – the execution of the feasibility trial involved a quasi-randomised matched-pairs clustered experiment. Each of the FBO’s ten participating branches underwent pair-wise matching, with similarity of branches judged according to four criteria: Food Hygiene Rating Scheme (FHRS) score, average weekly footfall, number of staff and customer satisfaction rating. The allocation ratio for this trial was 1:1: one branch in each pair was assigned to the treatment group by a representative from the FBO, while the other continued to operate in accordance with their standard operating procedure. As a business-based feasibility trial, customers at participating branches throughout the fieldwork period were automatically enrolled in the trial. The trial was single-blind: customers at treatment branches were not aware that they were receiving an intervention.
All customers who visited participating branches throughout the fieldwork period were asked to complete a short in-store survey on a tablet affixed in branches. This survey contained four outcome measures which operationalised customers’ perceptions of food safety in the FBO; trust in the FBO; self-reported confidence to ask for allergen information in future visits; and overall satisfaction with their visit. Results: Fieldwork was conducted from 3 to 20 March 2020, with cessation occurring prematurely due to the closure of outlets following the proliferation of COVID-19. n=177 participants took part in the trial across the ten branches; however, response rates (which ranged between 0.1% and 0.8%) were likely also adversely affected by COVID-19. Intervention fidelity was an issue in this study: while compliance with delivery of the intervention was relatively high in treatment branches (78.9%), erroneous delivery in control branches was also common (46.2%). Survey data were analysed using random-intercept multilevel linear regression models (due to the nesting of customers within branches). Despite the trial’s modest sample size, there was some evidence to suggest that the intervention had a positive effect for those suffering from allergies/intolerances for the ‘trust’ (β = 1.288, p<0.01) and ‘satisfaction’ (β = 0.945, p<0.01) outcome variables. Due to singularity within the fitted linear models, hierarchical Bayes models were used to corroborate the size of these interactions. Conclusions: The results of this trial suggest that a fully powered clustered RCT would likely be feasible in the UK. In this case, the primary challenge in the execution of the trial was the recruitment of FBOs: despite high levels of initial interest from four chains, only one took part. However, it is likely that the proliferation of COVID-19 adversely impacted chain participation – two other FBOs withdrew during branch eligibility assessment and selection, citing COVID-19 as a barrier.
COVID-19 also likely lowered the on-site survey response rate: a significant negative Pearson correlation was observed between daily survey completions and COVID-19 cases in the UK, highlighting a likely relationship between the two. Limitations: The trial was quasi-random: selection of branches, pair matching and allocation to treatment/control groups were not systematically conducted. These processes were undertaken by a representative from the FBO’s Safety and Quality Assurance team (with oversight from Kantar representatives on pair matching), as a result of the chain’s internal operational restrictions.
APA, Harvard, Vancouver, ISO, and other styles