A selection of scientific literature on the topic "Data Storage Representations"


Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Data Storage Representations".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the metadata.

Journal articles on the topic "Data Storage Representations":

1

Gutsche, Oliver, and Igor Mandrichenko. "Striped Data Analysis Framework." EPJ Web of Conferences 245 (2020): 06042. http://dx.doi.org/10.1051/epjconf/202024506042.

Abstract:
A columnar data representation is known to be an efficient way to store data, specifically in cases where the analysis is often based on only a small fragment of the available data structures. A data representation like Apache Parquet is a step forward from a columnar representation: it splits data horizontally to allow for easy parallelization of data analysis. Based on the general idea of columnar data storage, working on the [LDRD Project], we have developed a striped data representation which, we believe, is better suited to the needs of High Energy Physics data analysis. A traditional columnar approach allows for efficient data analysis of complex structures. While keeping all the benefits of columnar data representations, the striped mechanism goes further by enabling easy parallelization of computations without requiring special hardware. We will present an implementation and some performance characteristics of such a data representation mechanism using a distributed NoSQL database or a local file system, unified under the same API and data representation model. The representation is efficient yet simple, allowing a common data model and APIs for a wide range of underlying storage mechanisms such as distributed NoSQL databases and local file systems. Striped storage adopts NumPy arrays as its basic data representation format, which makes it easy and efficient to use in Python applications. The Striped Data Server is a web service which hides the server implementation details from the end user, easily exposes data to WAN users, and makes it possible to use well-known, mature data caching solutions to further increase data access efficiency. We are considering the Striped Data Server as the core of an enterprise-scale data analysis platform for High Energy Physics and similar areas of data processing.
We have been testing this architecture with a 2 TB dataset from a CMS dark matter search and plan to expand it to the multi-100 TB or even PB scale. We will present the striped format, the Striped Data Server architecture, and performance test results.
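The columnar and striped ideas in the abstract above can be sketched in a few lines of NumPy (the abstract notes that striped storage adopts NumPy arrays as its basic format); the field names and the two-stripe split below are purely illustrative, not the project's actual layout:

```python
import numpy as np

# Hypothetical event records: a row-oriented layout stores whole records
# together, while a columnar layout stores each field contiguously.
rows = [(1, 0.5, 12.0), (2, 0.7, 9.3), (3, 0.2, 15.1)]

# Columnar layout: one NumPy array per field.
event_id = np.array([r[0] for r in rows])
quality  = np.array([r[1] for r in rows])
energy   = np.array([r[2] for r in rows])

# An analysis touching only one field reads only that column...
mean_energy = energy.mean()

# ...and "striping" additionally splits each column horizontally into
# chunks that independent workers can process in parallel.
stripes = np.array_split(energy, 2)
partial_sums = [s.sum() for s in stripes]
total = sum(partial_sums)
```

The parallel reduction over stripes gives the same answer as the whole-column computation, which is the property that makes the horizontal split transparent to the analysis code.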
2

Lee, Sang Hun, and Kunwoo Lee. "Partial Entity Structure: A Compact Boundary Representation for Non-Manifold Geometric Modeling." Journal of Computing and Information Science in Engineering 1, no. 4 (November 1, 2001): 356–65. http://dx.doi.org/10.1115/1.1433486.

Abstract:
Non-manifold boundary representations have become very popular in recent years and various representation schemes have been proposed, as they represent a wider range of objects, for various applications, than conventional manifold representations. As these schemes mainly focus on describing sufficient adjacency relationships of topological entities, the models represented in these schemes occupy storage space redundantly, although they are very efficient in answering queries on topological adjacency relationships. To solve this problem, in this paper, we propose a compact as well as fast non-manifold boundary representation, called the partial entity structure. This representation reduces the storage size to half that of the radial edge structure, which is one of the most popular and efficient of existing data structures, while allowing full topological adjacency relationships to be derived without loss of efficiency. In order to verify the time and storage efficiency of the partial entity structure, the time complexity of basic query procedures and the storage requirement for typical geometric models are derived and compared with those of existing schemes.
3

Gertsiy, O. "COMPARATIVE ANALYSIS OF COMPACT METHODS REPRESENTATIONS OF GRAPHIC INFORMATION." Collection of scientific works of the State University of Infrastructure and Technologies series "Transport Systems and Technologies" 1, no. 37 (June 29, 2021): 130–43. http://dx.doi.org/10.32703/2617-9040-2021-37-13.

Abstract:
The main characteristics of lossy and lossless graphic information compression methods (RLE, LZW, Huffman coding, DEFLATE, JBIG, JPEG, JPEG 2000, Lossless JPEG, fractal, and wavelet) are analyzed in the article. Effective transmission and storage of images in railway communication systems is now an important task, because large images require large storage resources. This task has become particularly important in recent years, as the problems of transmitting information over the telecommunication channels of transport infrastructure have become urgent. There is also a great need for video conferencing, where the task is to compress video data effectively, since the greater the amount of data, the greater the cost of transmitting it. Therefore, using image compression methods that reduce file size solves this task. The study highlights the advantages and disadvantages of the compression methods. A comparative analysis of the basic capabilities of graphic information compression methods is carried out. The relevance lies in the efficient transfer and storage of graphical information, as big data requires large storage resources. The practical significance lies in solving the problem of effectively reducing data size by applying known compression methods.
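Of the lossless methods the article compares, run-length encoding (RLE) is the simplest to illustrate; the byte-level sketch below is a generic formulation, not the article's own implementation:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Run-length encoding: collapse runs of identical bytes into
    (value, count) pairs -- effective on images with large uniform areas."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Exact inverse of rle_encode: expand each (value, count) pair."""
    return b"".join(bytes([v]) * n for v, n in runs)

# 250 bytes of "image" with two uniform regions compress to three pairs.
raw = b"\x00" * 100 + b"\xff" * 50 + b"\x00" * 100
packed = rle_encode(raw)
```

The lossless property is exactly that `rle_decode(rle_encode(raw)) == raw`; on noisy data with few runs, RLE can instead expand the input, which is one of the trade-offs the article's comparison covers.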
4

Jackson, T. R., W. Cho, N. M. Patrikalakis, and E. M. Sachs. "Memory Analysis of Solid Model Representations for Heterogeneous Objects." Journal of Computing and Information Science in Engineering 2, no. 1 (March 1, 2002): 1–10. http://dx.doi.org/10.1115/1.1476380.

Abstract:
Methods to represent and exchange parts consisting of Functionally Graded Material (FGM) for Solid Freeform Fabrication (SFF) with Local Composition Control (LCC) are evaluated based on their memory requirements. Data structures for representing FGM objects as heterogeneous models are described and analyzed, including a voxel-based structure, a finite-element mesh-based approach, and the extension of the Radial-Edge and Cell-Tuple-Graph data structures with Material Domains representing spatially varying composition properties. The storage cost for each data structure is derived in terms of the number of instances of each of its fundamental classes required to represent an FGM object. In order to determine the optimal data structure, the storage cost associated with each data structure is calculated for several hypothetical models. Limitations of these representation schemes are discussed, and directions for future research are also recommended.
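A back-of-envelope sketch of the kind of storage analysis the paper performs: a dense voxel representation stores composition for every cell, so memory grows cubically with resolution, while boundary-based structures scale with surface complexity. The byte counts below are illustrative assumptions, not the paper's derived costs:

```python
def voxel_cost(resolution: int, bytes_per_voxel: int = 4) -> int:
    """Memory for a dense voxel grid at a given per-axis resolution."""
    return resolution ** 3 * bytes_per_voxel

def surface_cost(n_faces: int, bytes_per_face: int = 64) -> int:
    """Rough memory for a boundary representation with n_faces facets."""
    return n_faces * bytes_per_face

# Growing the per-axis resolution 8x multiplies voxel memory by 512, while
# (for a smooth part) surface memory grows only with facet count -- the gap
# this style of analysis quantifies.
small, large = voxel_cost(64), voxel_cost(512)
```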
5

Frenkel, Michael, Robert D. Chiroco, Vladimir Diky, Qian Dong, Kenneth N. Marsh, John H. Dymond, William A. Wakeham, Stephen E. Stein, Erich Königsberger, and Anthony R. H. Goodwin. "XML-based IUPAC standard for experimental, predicted, and critically evaluated thermodynamic property data storage and capture (ThermoML) (IUPAC Recommendations 2006)." Pure and Applied Chemistry 78, no. 3 (January 1, 2006): 541–612. http://dx.doi.org/10.1351/pac200678030541.

Abstract:
ThermoML is a new IUPAC standard, based on the Extensible Markup Language (XML), for storage and exchange of experimental, predicted, and critically evaluated thermophysical and thermochemical property data. The basic principles, scope, and description of all structural elements of ThermoML are discussed. ThermoML covers essentially all thermodynamic and transport property data (more than 120 properties) for pure compounds, multicomponent mixtures, and chemical reactions (including change-of-state and equilibrium reactions). Representations of all quantities related to the expression of uncertainty in ThermoML conform to the Guide to the Expression of Uncertainty in Measurement (GUM). The ThermoMLEquation schema for representation of fitted equations with ThermoML is also described and provided as supporting information, together with specific formulations for several equations commonly used in the representation of thermodynamic and thermophysical properties. The role of ThermoML in global data communication processes is discussed. The text of a variety of data files (use cases) illustrating the ThermoML format for pure compounds, mixtures, and chemical reactions, as well as the complete ThermoML schema text, is provided as supporting information.
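A toy illustration of an XML-based property record in the spirit of ThermoML, built with Python's standard library; the element names here are invented for the example and are not the actual ThermoML schema:

```python
import xml.etree.ElementTree as ET

# Illustrative property record: a compound, one property value, and a
# GUM-style uncertainty. Real ThermoML defines far richer structures.
root = ET.Element("PropertyDataFile")
compound = ET.SubElement(root, "Compound")
ET.SubElement(compound, "CommonName").text = "water"

point = ET.SubElement(root, "PropertyValue")
ET.SubElement(point, "PropertyName").text = "density"
ET.SubElement(point, "Value").text = "997.05"
ET.SubElement(point, "Uncertainty").text = "0.05"
ET.SubElement(point, "Unit").text = "kg/m3"

xml_text = ET.tostring(root, encoding="unicode")
```

The point of such a schema is that value, unit, and uncertainty travel together in one machine-readable record, which is what enables the global data exchange the abstract describes.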
6

ALHONIEMI, ESA. "Simplified time series representations for efficient analysis of industrial process data." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 17, no. 2 (May 2003): 103–14. http://dx.doi.org/10.1017/s0890060403172010.

Abstract:
The data storage capacities of modern process automation systems have grown rapidly. Nowadays, the systems are able to frequently carry out even hundreds of measurements in parallel and store them in databases. However, these data are still rarely used in the analysis of processes. In this article, preparation of the raw data for further analysis is considered using feature extraction from signals by piecewise linear modeling. Prior to modeling, a preprocessing phase that removes some artifacts from the data is suggested. Because optimal models are computationally infeasible, fast heuristic algorithms must be utilized. Outlines for the optimal and some fast heuristic algorithms with modifications required by the preprocessing are given. In order to illustrate utilization of the features, a process diagnostics framework is presented. Among a large number of signals, the procedure finds the ones that best explain the observed short-term fluctuations in one signal. In the experiments, the piecewise linear modeling algorithms are compared using a massive data set from an operational paper machine. The use of piecewise linear representations in the analysis of changes in one real process measurement signal is demonstrated.
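The fast heuristic piecewise linear algorithms discussed above can be given a flavour with a greedy segment-growing sketch (the article's own algorithms, including their preprocessing, are more elaborate):

```python
import numpy as np

def piecewise_linear(signal, max_error):
    """Greedy sketch: grow a segment until the linear fit's max deviation
    exceeds max_error, then start a new segment. A fast heuristic standing
    in for the optimal (but computationally infeasible) segmentation."""
    segments, start = [], 0
    n = len(signal)
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            x = np.arange(start, end + 2)
            y = signal[start:end + 2]
            slope, intercept = np.polyfit(x, y, 1)
            if np.max(np.abs(y - (slope * x + intercept))) > max_error:
                break
            end += 1
        segments.append((start, end))
        start = end
    return segments

# A triangular test signal collapses to a handful of linear segments,
# i.e. a few features extracted from 100 raw samples.
signal = np.concatenate([np.linspace(0.0, 1.0, 50), np.linspace(1.0, 0.0, 50)])
segments = piecewise_linear(signal, max_error=0.05)
```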
7

Hernaez, Mikel, Dmitri Pavlichin, Tsachy Weissman, and Idoia Ochoa. "Genomic Data Compression." Annual Review of Biomedical Data Science 2, no. 1 (July 20, 2019): 19–37. http://dx.doi.org/10.1146/annurev-biodatasci-072018-021229.

Abstract:
Recently, there has been growing interest in genome sequencing, driven by advances in sequencing technology, in terms of both efficiency and affordability. These developments have allowed many to envision whole-genome sequencing as an invaluable tool for both personalized medical care and public health. As a result, increasingly large and ubiquitous genomic data sets are being generated. This poses a significant challenge for the storage and transmission of these data. Already, it is more expensive to store genomic data for a decade than it is to obtain the data in the first place. This situation calls for efficient representations of genomic information. In this review, we emphasize the need for designing specialized compressors tailored to genomic data and describe the main solutions already proposed. We also give general guidelines for storing these data and conclude with our thoughts on the future of genomic formats and compressors.
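As a baseline for why specialised genomic representations pay off, the sketch below packs the four-letter DNA alphabet at 2 bits per base, a 4x saving over one-byte-per-character text even before any entropy coding (the compressors surveyed in the review go much further):

```python
# 2-bit code for the DNA alphabet; the assignment of codes is arbitrary.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(group))   # left-align a final partial group
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, length: int) -> str:
    """Inverse of pack; length trims padding bases from the last byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASES[(byte >> shift) & 0b11])
    return "".join(bases[:length])

seq = "GATTACA"
packed = pack(seq)   # 7 bases fit in 2 bytes instead of 7
```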
8

Rachkovskij, Dmitri A., and Ernst M. Kussul. "Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning." Neural Computation 13, no. 2 (February 2001): 411–52. http://dx.doi.org/10.1162/089976601300014592.

Abstract:
Distributed representations have often been criticized as inappropriate for encoding data with a complex structure. However, Plate's holographic reduced representations and Kanerva's binary spatter codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this article we consider procedures of context-dependent thinning developed for representation of complex hierarchical items in the architecture of associative-projective neural networks. These procedures provide binding of items represented by sparse binary codevectors (with a low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity of the distributed associative memory in which the codevectors may be stored. In contrast to known binding procedures, context-dependent thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Moreover, a bound codevector is similar not only to another one with similar component codevectors (as in other schemes) but also to the component codevectors themselves. This allows the similarity of structures to be estimated by the overlap of their codevectors, without retrieval of the component codevectors. It also allows easy retrieval of the component codevectors. Examples of algorithmic and neural network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments schemes, trees, and directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional artificial intelligence as well as to the localist and microfeature-based connectionist representations.
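A rough sketch of the binding-by-thinning idea: superimpose two sparse binary codevectors, then AND the result with the OR of fixed random permutations of itself, so the bound vector keeps roughly the density of a single component while remaining similar to both components. The dimension, density, and permutation count below are illustrative choices, not the paper's:

```python
import numpy as np

N, DENSITY, K = 10_000, 0.01, 34   # dimension, codevector density, permutations
rng = np.random.default_rng(0)

def random_codevector():
    """Sparse binary codevector with a low probability of 1s."""
    return (rng.random(N) < DENSITY).astype(np.uint8)

def thin(z, k=K):
    """Context-dependent thinning sketch: AND the superposition z with the
    OR of k fixed random permutations of itself. The permutations are fixed
    (seeded) so the same context always thins the same way."""
    mask = np.zeros(N, dtype=np.uint8)
    for seed in range(k):
        perm = np.random.default_rng(seed).permutation(N)
        mask |= z[perm]
    return z & mask

a, b = random_codevector(), random_codevector()
z = a | b              # superposition roughly doubles the density
bound = thin(z)        # ...and thinning brings it back down
```

Because the surviving 1s are a subset of `z`, the bound vector overlaps both `a` and `b`, which is the property that lets structure similarity be estimated directly from codevector overlap.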
9

De Masi, A. "DIGITAL DOCUMENTATION’S ONTOLOGY: CONTEMPORARY DIGITAL REPRESENTATIONS AS EXPRESS AND SHARED MODELS OF REGENERATION AND RESILIENCE IN THE PLATFORM BIM/CONTAMINATED HYBRID REPRESENTATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-1-2021 (August 28, 2021): 189–97. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-1-2021-189-2021.

Abstract:
The study illustrates a university research project on a "Digital Documentation's Ontology", to be undertaken with other universities, of a Platform (P) – Building Information Modeling (BIM) articulated on a Contaminated Hybrid Representation (a diversification of graphic models); the latter, able to provide categories of Multi-Representations that interact with each other to favour several representations adapted to different information densities in digital multi-scale production, is intended as a platform (a grid of data and information at different scales, a semantic structure from web content, a data and information storage database, archive, model and form of knowledge, and a shared ontological representation) of: inclusive digital ecosystem development; digital regenerative synergies of representation with adaptable and resilient content in hybrid or semi-hybrid Cloud environments; phenomenological reading of the changing complexity of environmental reality; a hub solution for knowledge and simultaneous description of Cultural Heritage (CH) information; multimedia itineraries to enhance participatory and attractive processes for the community; and a factor of cohesion and sociality, an engine of local development. The methodology of P-BIM/CHR is articulated on the following ontologies: Interpretation and Codification, Morphology, Lexicon, Syntax, Metamorphosis, Metadata in the participatory system, Regeneration, Interaction and Sharing. As for results and conclusions, the study highlighted: a) digital regenerative synergies of representation; b) a Smart CH Model for an interconnection of systems and services within a complex set of relationships.
10

Tan, Xiaojing, Ming Zou, and Xiqin He. "Target Recognition in SAR Images Based on Multiresolution Representations with 2D Canonical Correlation Analysis." Scientific Programming 2020 (February 24, 2020): 1–9. http://dx.doi.org/10.1155/2020/7380790.

Abstract:
This study proposes a synthetic aperture radar (SAR) target-recognition method based on features fused from multiresolution representations by 2D canonical correlation analysis (2DCCA). The multiresolution representations were demonstrated to be more discriminative than the original image alone, so joint classification of the multiresolution representations benefits SAR target recognition performance. 2DCCA is capable of exploiting the inner correlations of the multiresolution representations while significantly reducing redundancy. Therefore, the fused features can effectively convey the discrimination capability of the multiresolution representations while relieving the storage and computational burdens caused by the original high dimension. In the classification stage, sparse representation-based classification (SRC) is employed to classify the fused features. SRC is an effective and robust classifier, which has been extensively validated in previous works. The moving and stationary target acquisition and recognition (MSTAR) data set is employed to evaluate the proposed method. According to the experimental results, the proposed method achieves a high recognition rate of 97.63% for the 10 classes of targets under the standard operating condition (SOC). Under extended operating conditions (EOC) such as configuration variance and depression angle variance, the robustness of the proposed method is also quantitatively validated. In comparison with some other SAR target recognition methods, the superiority of the proposed method is effectively demonstrated.
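The multiresolution representations that feed the method's feature fusion can be sketched by simple 2x2 block averaging; the 2DCCA fusion and SRC classification stages are omitted here, since they require training data, and block averaging is only a generic stand-in for whatever decomposition the paper uses:

```python
import numpy as np

def multiresolution(image: np.ndarray, levels: int = 3):
    """Build coarser views of an image by repeated 2x2 block averaging,
    yielding a list [full-res, half-res, quarter-res, ...]."""
    reps = [image]
    for _ in range(levels - 1):
        h, w = reps[-1].shape
        crop = reps[-1][:h - h % 2, :w - w % 2]       # even-size crop
        coarse = crop.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        reps.append(coarse)
    return reps

img = np.arange(64, dtype=float).reshape(8, 8)
pyramid = multiresolution(img)   # shapes (8,8), (4,4), (2,2)
```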

Dissertations on the topic "Data Storage Representations":

1

Munalula, Themba. "Measuring the applicability of Open Data Standards to a single distributed organisation: an application to the COMESA Secretariat." Thesis, University of Cape Town, 2008. http://pubs.cs.uct.ac.za/archive/00000461/.

Abstract:
Open data standardization has many known benefits, including the availability of tools for standard encoding formats, interoperability among systems and long term preservation of data. Mark-up languages and their use on the World Wide Web have implied further ease for data sharing. The Extensible Markup Language (XML), in particular, has succeeded due to its simplicity and ease of use. Its primary purpose is to facilitate the sharing of data across different information systems, particularly systems connected via the Internet. Whether open and standardized or not, organizations generate data daily. Offline exchange of documents and data is undertaken using existing formats that are typically defined by the organizations that generate the data in the documents. With the Internet, the realization of data exchange has had a direct implication on the need for interoperability and comparability. As much as standardization is the accepted approach for online data exchange, little is understood about how a specific organization’s data “fits” a given data standard. This dissertation develops data metrics that represent the extent to which data standards can be applied to an organization’s data. The research identified key issues that affect data interoperability or the feasibility of a move towards interoperability. This research tested the unwritten rule that organizational setups tend to regard and design data requirements more from internal needs than interoperability needs. Essentially, by generating metrics that affect a number of data attributes, the research quantified the extent of the gap that exists between organizational data and data standards. Key data attributes, i.e. completeness, concise representation, relevance and complexity, were selected and used as the basis for metric generation. Additional to the generation of attribute-based metrics, hybrid metrics representing a measure of the “goodness of fit” of the source data to standard data were generated. 
Regarding the completeness attribute, it was found that most Common Market for Eastern and Southern Africa (COMESA) head office data clusters had lower than desired metrics to match the gap highlighted above. The same applied to the concise representation attribute. Most data clusters had more concise representation for the COMESA data than the data standard. The complexity metrics generated confirmed the fact that the number of data elements is a key determinant in any move towards the adoption of data standards. This fact was also borne out by the magnitude of the hybrid metrics which to some extent depended on the complexity metrics. An additional contribution of the research was the inclusion of expert users’ weights to the data elements and recalculation of all metrics. A comparison with the unweighted metrics yielded a mixed picture. Among the completeness metrics and for the data retention rate in particular, increases were recorded for data clusters for which greater weight was allocated to mapped elements than to those that were not mapped. The same applied to the relative elements ratio. The complexity metrics showed general declines when user-weighted elements were used in the computation as opposed to the unweighted elements. This again was due to the fact that these metrics are dependent on the number of elements. Hence for the former case, the weights were evenly distributed while for the latter case, some elements were given lower weights by the expert users, hence leading to an overall decline in the metric. A number of implications emerged for COMESA. COMESA would have to determine the extent to which its source data rely on data sources for which international standards are being promoted. Secondly, an inventory of users and collectors of the data COMESA uses is necessary in order to determine who would be the beneficiary of a standards-based information system. 
Thirdly, and from an organizational perspective, COMESA needs to designate a team to guide the process of creating such a standards-based information system. Lastly, there is a need for involvement in the consortia responsible for these data standards, which has implications for organizational resources. In totality, this research provided a methodology for determining the feasibility of a move towards standardization, and hence makes it possible to answer the critical first-stage questions that such a move raises.
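One of the attribute-based metrics described, completeness, can be sketched as the fraction of a standard's elements to which an organization's data cluster maps, together with the expert-weighted variant the dissertation adds; the element names and weights below are invented for illustration:

```python
# Hypothetical element sets: what a data standard requires vs. what one
# organizational data cluster actually provides.
standard_elements = {"country", "year", "indicator", "value", "unit", "source"}
org_elements      = {"country", "year", "indicator", "value"}

mapped = standard_elements & org_elements
completeness = len(mapped) / len(standard_elements)   # unweighted metric

# Weighted variant: expert users assign importance weights to elements, so
# missing a low-weight element hurts the score less.
weights = {"country": 3, "year": 3, "indicator": 2,
           "value": 3, "unit": 1, "source": 1}
weighted_completeness = (sum(weights[e] for e in mapped)
                         / sum(weights.values()))
```

Here the cluster maps 4 of 6 required elements, but because the two missing elements carry low expert weights, the weighted score is noticeably higher than the unweighted one, mirroring the mixed picture the weighted recalculation produced.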
2

Munyaradzi, Ngoni. "Transcription of the Bleek and Lloyd Collection using the Bossa Volunteer Thinking Framework." Thesis, University of Cape Town, 2013. http://pubs.cs.uct.ac.za/archive/00000913/.

Abstract:
The digital Bleek and Lloyd Collection is a rare collection that contains artwork, notebooks and dictionaries of the earliest habitants of Southern Africa. Previous attempts have been made to recognize the complex text in the notebooks using machine learning techniques, but due to the complexity of the manuscripts the recognition accuracy was low. In this research, a crowdsourcing based method is proposed to transcribe the historical handwritten manuscripts, where volunteers transcribe the notebooks online. An online crowdsourcing transcription tool was developed and deployed. Experiments were conducted to determine the quality of transcriptions and accuracy of the volunteers compared with a gold standard. The results show that volunteers are able to produce reliable transcriptions of high quality. The inter-transcriber agreement is 80% for |Xam text and 95% for English text. When the |Xam text transcriptions produced by the volunteers are compared with the gold standard, the volunteers achieve an average accuracy of 69.69%. Findings show that there exists a positive linear correlation between the inter-transcriber agreement and the accuracy of transcriptions. The user survey revealed that volunteers found the transcription process enjoyable, though it was difficult. Results indicate that volunteer thinking can be used to crowdsource intellectually-intensive tasks in digital libraries like transcription of handwritten manuscripts. Volunteer thinking outperforms machine learning techniques at the task of transcribing notebooks from the Bleek and Lloyd Collection.
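The inter-transcriber agreement figures quoted above can be illustrated with a simple position-wise measure over aligned transcriptions; the study's exact definition is not given in the abstract, so this is only a stand-in:

```python
def agreement(t1: str, t2: str) -> float:
    """Position-wise agreement between two transcriptions of equal length:
    the fraction of aligned characters on which the transcribers agree."""
    if len(t1) != len(t2):
        raise ValueError("this simplified measure assumes aligned, equal-length text")
    matches = sum(a == b for a, b in zip(t1, t2))
    return matches / len(t1)

# Two volunteers disagree on a single character out of thirteen.
score = agreement("the quick fox", "the quick fax")
```

Real transcriptions differ in length and need alignment first (e.g. edit-distance alignment) before a character-level agreement like this can be computed.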
3

Ugail, Hassan, and Eyad Elyan. "Efficient 3D data representation for biometric applications." IOS Press, 2007. http://hdl.handle.net/10454/2683.

Abstract:
An important issue in many of today's biometric applications is the development of efficient and accurate techniques for representing related 3D data. Such data is often available through the digitization of complex geometric objects of importance to biometric applications. For example, in the area of 3D face recognition a digital point cloud of data corresponding to a given face is usually provided by a 3D digital scanner. For efficient data storage, and for identification/authentication in a timely fashion, such data needs to be represented using a few meaningful parameters or variables. Here we show how mathematical techniques based on Partial Differential Equations (PDEs) can be utilized to represent complex 3D data in an efficiently parameterized way. For example, in the case of a 3D face we show how it can be represented using PDEs whereby a handful of key facial parameters can be identified for efficient storage and verification.
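A minimal sketch of the PDE idea: the interior of a surface patch is filled in from its boundary curves by relaxing an elliptic equation (plain Laplace here; the authors use richer elliptic PDEs), so the whole patch is described by a handful of boundary parameters rather than a dense point cloud. The boundary curves below are invented for the example:

```python
import numpy as np

n = 20
z = np.zeros((n, n))                  # height field of one surface patch
x = np.linspace(0, 1, n)

# Four boundary curves (illustrative): one sinusoidal edge, three flat.
z[0, :]  = np.sin(np.pi * x)
z[-1, :] = 0.0
z[:, 0]  = 0.0
z[:, -1] = 0.0

# Jacobi relaxation of Laplace's equation on the interior points: each
# interior value converges to the average of its four neighbours.
for _ in range(2000):
    z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] +
                            z[1:-1, :-2] + z[1:-1, 2:])
```

After relaxation the interior varies smoothly between the boundary curves, so storing the patch reduces to storing the boundary data plus the choice of PDE.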
4

Folmer, Brennan Thomas. "Metadata storage for file management systems data storage and representation techniques for a file management system /." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001141.

5

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234245.

Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition.
In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems in the form of the Resolution Bound using Particle Cells, with an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (a constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both regarding the APR's structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data, using both synthetic and LSFM exemplar data. It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and address bottlenecks in processing LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data.
Further, we propose that the simple structure of the general APR could provide benefits in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
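A one-dimensional cartoon of the APR's adapt-where-the-function-varies idea, with a greedy point selection standing in for the Pulling Scheme (the actual scheme, the Local Intensity Scale, and the Resolution Bound are all far richer than this sketch):

```python
import numpy as np

def adaptive_samples(f, xs, E):
    """Keep few samples where f is flat and many where it varies: drop a
    sample if linear reconstruction from its neighbours stays within E."""
    keep = [0]
    for i in range(1, len(xs) - 1):
        # Linearly reconstruct f(xs[i]) from the last kept sample and the
        # next candidate point, and keep xs[i] only if the error is large.
        x0, x1 = xs[keep[-1]], xs[i + 1]
        interp = f(x0) + (f(x1) - f(x0)) * (xs[i] - x0) / (x1 - x0)
        if abs(interp - f(xs[i])) > E:
            keep.append(i)
    keep.append(len(xs) - 1)
    return keep

xs = np.linspace(0, 1, 200)
f = lambda x: np.tanh(20 * (x - 0.5))    # sharp transition near x = 0.5
kept = adaptive_samples(f, xs, E=0.05)
# Far fewer samples than "pixels", concentrated around the transition.
```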
6

VanCalcar, Jenny E. (Jenny Elizabeth). "Collection and representation of GIS data to aid household water treatment and safe storage technology implementation in the northern region of Ghana." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34583.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2006.
Includes bibliographical references (leaves 46-51).
In 2005, a start-up social business called Pure Home Water (PHW) was begun in Ghana to promote and sell household water treatment and safe storage (HWTS) technologies. The original aim of the company was to offer a variety of products, allowing customers to choose the technology which best fit their individual needs. This differed from the typical implementation of HWTS promoters to date, in which an organization often distributes a single technology for the population to use. Instead, Pure Home Water wanted to give users a choice. PHW is also unique because they are attempting to sell their products without any subsidy. The goal is to create a sustainable business that will both bring better quality water to the population and be financially self-supporting. Because the company is new, a need existed to gather data on the demographic, health, and water and sanitation infrastructure within the region. Due to the geographic nature of the project, it was decided that a Geographic Information System (GIS) would be the best tool to store, analyze and represent the data.
(cont.) The system could be used to help plan relevant business strategies, and maps could be created to visually communicate important information among the Pure Home Water team and other interested parties. The final database did achieve the goal of collecting and bringing together important regional information in a form hopefully useful to PHW, future MIT teams and others. However, the use of the database for long-term planning is currently too advanced for the small company.
by Jenny E. VanCalcar.
M.Eng.
7

Elyan, Eyad, and Hassan Ugail. "Reconstruction of 3D human facial images using partial differential equations." Academy Publisher, 2007. http://hdl.handle.net/10454/2644.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality, facial expression simulation and computer-based plastic surgery. This paper addresses a method for the construction of 3D geometry of human faces based on the use of Elliptic Partial Differential Equations (PDE). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained using a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we have shown the efficiency of the PDE-based method for 3D facial surface reconstruction using scan data. In addition, we show that our approach provides an efficient way of representing faces using a small set of parameters that could be utilized for efficient facial data storage and verification purposes.
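The patch-from-four-boundary-curves idea can be sketched numerically. The cited method solves a fourth-order elliptic PDE; the sketch below solves Laplace's equation instead, purely to keep the code short, and all names are illustrative:

```python
# Sketch: fill a surface patch from four boundary curves by relaxing
# the interior of a height field toward a solution of Laplace's
# equation u_xx + u_yy = 0 (Gauss-Seidel sweeps). The cited work uses
# a fourth-order elliptic PDE; second order is used only for brevity.
import math

def pde_patch(n, top, bottom, left, right, iters=2000):
    """Height field on an n x n grid; the four lists give boundary heights."""
    u = [[0.0] * n for _ in range(n)]
    for j in range(n):              # impose the four boundary curves
        u[0][j], u[n - 1][j] = top[j], bottom[j]
    for i in range(n):
        u[i][0], u[i][n - 1] = left[i], right[i]
    for _ in range(iters):          # relax the interior
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return u

n = 17
curve = [math.sin(math.pi * j / (n - 1)) for j in range(n)]
patch = pde_patch(n, curve, curve, [0.0] * n, [0.0] * n)
```

By the maximum principle, every interior height lies strictly between the extreme boundary values, which is what makes a small set of boundary parameters sufficient to describe the whole patch.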
8

Fang, Cheng-Hung. "Application for data mining in manufacturing databases." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178653424.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
9

Sello, Mpho Constance. "Individual Document Management Techniques: an Explorative Study." Thesis, 2007. http://pubs.cs.uct.ac.za/archive/00000399/.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Individuals are generating, storing and accessing more information than ever before. The information comes from a variety of sources such as the World Wide Web, email and books. Storage media is becoming larger and cheaper. This makes accumulation of information easy. When information is kept in large volumes, retrieving it becomes a problem unless there is a system in place for managing this. This study examined the techniques that users have devised to make retrieval of their documents easy and timely. A survey of user document management techniques was done through interviews. The uncovered techniques were then used to build an expert system that provides assistance with document management decision-making. The system provides recommendations on file naming and organization, document backup and archiving as well as suitable storage media. The system poses a series of questions to the user and offers recommendations on the basis of the responses given. The system was evaluated by two categories of users: those who had been interviewed during data collection and those who had not been interviewed. Both categories of users found the recommendations made by the system to be reasonable and indicated that the system was easy to use. Some users thought the system could be of great benefit to people new to computers.
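A minimal sketch of the kind of rule-based recommendation the expert system performs; the rules and thresholds below are invented for illustration, not those elicited in the study's interviews:

```python
# Toy rule-based recommender in the spirit of the system described
# above: map questionnaire answers to document-management advice.
# Rules and thresholds are invented, not the study's actual rules.
def recommend(answers):
    """answers: dict of user responses; returns a list of recommendations."""
    tips = []
    if answers.get("documents_per_month", 0) > 100:
        tips.append("Organize files into dated, topic-named folders.")
    if not answers.get("backs_up", False):
        tips.append("Schedule regular backups to external media.")
    if answers.get("keeps_old_versions", False):
        tips.append("Archive superseded versions with a version suffix.")
    return tips

print(recommend({"documents_per_month": 250, "backs_up": False}))
```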
10

Kovacevic, Vlado S. "The impact of bus stop micro-locations on pedestrian safety in areas of main attraction." 2005. http://arrow.unisa.edu.au:8081/1959.8/28389.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
From the safety point of view, the bus stop is perhaps the most important part of the Bus Public Transport System, as it represents the point where bus passengers may interact directly with other road users and create conflicting situations leading to traffic accidents. For example, travellers could be struck walking to/from or boarding/alighting a bus. At these locations, passengers become pedestrians who, at some stage, cross busy arterial roads near the bus stop in areas or at objects of main attraction, usually outside designated pedestrian facilities such as signal-controlled intersections and zebra and pelican crossings. Pedestrian exposure to risk, or risk-taking, occurs when people cross the road in front of the stopped bus, at the rear of the bus or between buses, particularly where bus stops are located on two-way roads (i.e. within the mid-block of the road with side streets, at non-signalised cross-sections). However, a better understanding of pedestrian road-crossing risk exposure (pedestrian crossing distraction, obscurity and behaviour) within bus stop zones is needed so that it can be incorporated into new designs, bus stop placement, and the evaluation of traffic management schemes in which bus stop locations will play an increasingly important role. A full range of possible incidental interactions is presented in a tabular model that covers the most common interacting traffic movements within bus stop zones. The thesis focuses on pedestrian safety, discusses the theoretical foundations of bus stops, and determines the types of accident risks between bus travellers as pedestrians and motor vehicles within bus stop zones.
Thus, the objectives of this thesis can be summarized as follows: (I) classification of bus stops, particularly according to objects of main attraction (pedestrian-generating activities); (II) analysis of traffic movements and interactions as accident/risk exposure in the zone of bus stops with respect to that structure; (III) categorization of traffic accidents in the vicinity of bus stops, and analysis of the interactions (interacting movements) that occur within bus stop zones in order to discover the nature of the problems; (IV) formulation of tabular (pedestrian traffic accident prediction) models/forms (based on the traffic interactions that create the potential for accident conflicts) for practical statistical analysis of accidents related to bus stops; and (V) safety aspects related to the micro-location of bus stops, to assist in micro-location design, the operation of bus stop safety facilities and safer pedestrian crossing for access between the bus stop and nearby objects of attraction. The scope of this thesis focuses on the theoretical foundation of bus stop micro-location in areas of main attraction or at objects of main attraction, and on the types of traffic accident risk that occur between travellers as pedestrians and vehicle flow in the zone of the bus stop. Knowledge of the possible interactions leads to the identification of potential conflict situations between motor vehicles and pedestrians. The problems discussed for each conflict situation have great potential to increase the knowledge needed to prevent accidents and minimise pedestrian-vehicle conflict in this area, and to aid in the development and planning of safer bus stops.

Books on the topic "Data Storage Representations":

1

Thompson, Rodney James. Towards a rigorous logic for spatial data representation. Delft: Netherlands Geodetic Commission, 2007.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
2

Kutyniok, Gitta. Shearlets: Multiscale Analysis for Multivariate Data. Boston: Birkhäuser Boston, 2012.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
3

Emilio, Maurizio Di Paolo. Data Acquisition Systems: From Fundamentals to Applied Design. New York, NY: Springer New York, 2013.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
4

Office, General Accounting. Data management: DOD should redirect its efforts to automate technical data repositories : report to the chairman, Committee on Government Operations, House of Representatives. Washington, D.C: The Office, 1986.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
5

Hameurlain, Abdelkader. Transactions on Large-Scale Data- and Knowledge-Centered Systems V. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
6

David, Hutchison. Transactions on Computational Science V: Special Issue on Cognitive Knowledge Representation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
7

Furht, Borko. Handbook of Data Intensive Computing. New York, NY: Springer Science+Business Media, LLC, 2011.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
8

Office, General Accounting. Navy supply systems: Status of two projects for improving stock point operations : fact sheet for the chairman, Subcommittee on Defense, Committee on Appropriations, House of Representatives. Washington, D.C: The Office, 1986.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles
10

Hameurlain, Abdelkader. Transactions on Large-Scale Data- and Knowledge-Centered Systems VII. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find the full text of the source
APA, Harvard, Vancouver, ISO and other styles

Book chapters on the topic "Data Storage Representations":

1

Paradies, Marcus, and Hannes Voigt. "Graph Representations and Storage." In Encyclopedia of Big Data Technologies, 898–904. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-77525-8_211.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
2

Paradies, Marcus, and Hannes Voigt. "Graph Representations and Storage." In Encyclopedia of Big Data Technologies, 1–7. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_211-1.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
3

Smith, William A. P. "3D Data Representation, Storage and Processing." In 3D Imaging, Analysis and Applications, 265–316. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44070-1_6.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
4

Brazma, Alvis, Ugis Sarkans, Alan Robinson, Jaak Vilo, Martin Vingron, Jörg Hoheisel, and Kurt Fellenberg. "Microarray Data Representation, Annotation and Storage." In Advances in Biochemical Engineering/Biotechnology, 113–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45713-5_7.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
5

Terletskyi, Dmytro. "Object-Oriented Knowledge Representation and Data Storage Using Inhomogeneous Classes." In Communications in Computer and Information Science, 48–61. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67642-5_5.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
6

Tolovski, Ilin, Sašo Džeroski, and Panče Panov. "Semantic Annotation of Predictive Modelling Experiments." In Discovery Science, 124–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_9.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Abstract In this paper, we address the task of representation, semantic annotation, storage, and querying of predictive modelling experiments. We introduce OntoExp, an OntoDM module which gives a more granular representation of a predictive modeling experiment and enables annotation of the experiment's provenance, algorithm implementations, parameter settings and output metrics. This module is incorporated in SemanticHub, an online system that allows execution, annotation, storage and querying of predictive modeling experiments. The system offers two different user scenarios: users can either define their own experiment and execute it, or browse the repository of completed experimental workflows across different predictive modelling tasks. Here, we showcase the capabilities of the system by executing a multi-target regression experiment on a water quality prediction dataset using the Clus software. The system and the created repositories are evaluated against the FAIR data stewardship guidelines. The evaluation shows that OntoExp and SemanticHub provide the infrastructure needed for semantic annotation, execution, storage, and querying of the experiments.
7

Hoque, Abu Sayed M. Latiful. "Storage and Querying of High Dimensional Sparsely Populated Data in Compressed Representation." In Lecture Notes in Computer Science, 418–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36087-5_49.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
8

Purbey, Suniti, and Brijesh Khandelwal. "Analyzing Frameworks for IoT Data Storage, Representation and Analysis: A Statistical Perspective." In Lecture Notes in Networks and Systems, 472–88. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84760-9_41.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
9

Paul, Razan, and Abu Sayed Md Latiful Hoque. "Optimized Column-Oriented Model: A Storage and Search Efficient Representation of Medical Data." In Information Technology in Bio- and Medical Informatics, ITBAM 2010, 118–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15020-3_12.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
10

Wycislik, Lukasz. "Storage Efficiency of LOB Structures for Free RDBMSs on Example of PostgreSQL and Oracle Platforms." In Beyond Databases, Architectures and Structures. Towards Efficient Solutions for Data Analysis and Knowledge Representation, 212–23. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58274-0_18.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles

Conference papers on the topic "Data Storage Representations":

1

Klisura, Đorže. "Embedding Non-planar Graphs: Storage and Representation." In 7th Student Computer Science Research Conference. University of Maribor Press, 2021. http://dx.doi.org/10.18690/978-961-286-516-0.13.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
In this paper, we propose a convention for representing non-planar graphs and their least-crossing embeddings in a canonical way. We achieve this by using state-of-the-art tools such as canonical labelling of graphs, Nauty's Graph6 string and combinatorial representations for planar graphs. To the best of our knowledge, this has not been done before. In addition, we implement the mentioned procedure in the SageMath language and compute embeddings for certain classes of cubic, vertex-transitive and general graphs. Our main contribution is an extension of one of the graph data sets hosted on MathDataHub, and a step towards extending the SageMath codebase.
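For readers unfamiliar with Nauty's Graph6 strings, here is a minimal encoder for small graphs (n ≤ 62) written from the published graph6 format; canonical labelling itself, which the paper relies on, is Nauty's job and is not reproduced here:

```python
# Minimal graph6 encoder (n <= 62): one byte for the vertex count,
# then the upper-triangle adjacency bits, column by column, packed
# 6 bits per printable character with an offset of 63.
def to_graph6(n, edges):
    """Encode an undirected graph on vertices 0..n-1 as a graph6 string."""
    assert 0 <= n <= 62, "larger n needs the multi-byte size prefix"
    edge_set = {frozenset(e) for e in edges}
    bits = []
    for j in range(1, n):               # upper triangle, column by column
        for i in range(j):
            bits.append(1 if frozenset((i, j)) in edge_set else 0)
    while len(bits) % 6:                # pad to a multiple of 6 bits
        bits.append(0)
    out = [chr(n + 63)]                 # one-byte vertex count
    for k in range(0, len(bits), 6):    # 6 bits per printable character
        val = int("".join(map(str, bits[k:k + 6])), 2)
        out.append(chr(val + 63))
    return "".join(out)

# The triangle K3 encodes to "Bw".
print(to_graph6(3, [(0, 1), (0, 2), (1, 2)]))
```

Applied after canonical labelling, two isomorphic graphs yield the same string, which is what makes graph6 suitable as a storage key for graph data sets.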
2

Bohm, Matt R., Robert B. Stone, and Simon Szykman. "Enhancing Virtual Product Representations for Advanced Design Repository Systems." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/cie-48239.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
This paper describes the transformation of an existing set of heterogeneous product knowledge into a coherent design repository that supports product information archival, storage and reuse. Existing product information was analyzed and compared against desired outputs to ascertain what information management structure was needed to produce design resources pertinent to the design process. Several test products were cataloged to determine what information was essential without being redundant in representation. This set allowed for the creation of a novel single application point of entry for product information that maintains data consistency and allows information to be easily exported. The exported information takes on many forms that are valuable to the design process, such as a bill of materials and a component function matrix. Enabling technologies include commercial software, XML (eXtensible Markup Language) data, XSL (eXtensible Stylesheet Language) transformation sheets and HTML (HyperText Markup Language). Through this process, researchers at the University of Missouri – Rolla (UMR) have been able to dramatically improve the way in which artifact data is gathered, recorded and used.
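The XML-based export described above can be sketched with the standard library; the element and attribute names below are invented for illustration and are not the repository's actual schema:

```python
# Sketch of exporting product data as XML, in the spirit of the design
# repository above. Element/attribute names are invented; the real
# repository's schema is not given in this abstract.
import xml.etree.ElementTree as ET

def bill_of_materials(product, components):
    """components: list of (name, quantity, function) tuples."""
    root = ET.Element("product", name=product)
    bom = ET.SubElement(root, "billOfMaterials")
    for name, qty, function in components:
        ET.SubElement(bom, "component", name=name,
                      quantity=str(qty), function=function)
    return ET.tostring(root, encoding="unicode")

xml_text = bill_of_materials("power drill", [
    ("motor", 1, "convert electrical energy"),
    ("chuck", 1, "secure solid"),
])
print(xml_text)
```

A single structured source like this is what makes the further XSL-to-HTML transformations mentioned in the abstract straightforward.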
3

Liang, Xiaoyuan, Martin Renqiang Min, Hongyu Guo, and Guiling Wang. "Learning K-way D-dimensional Discrete Embedding for Hierarchical Data Visualization and Retrieval." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/411.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Traditional embedding approaches associate a real-valued embedding vector with each symbol or data point, which is equivalent to applying a linear transformation to the "one-hot" encoding of discrete symbols or data objects. Despite their simplicity, these methods generate storage-inefficient representations and fail to effectively encode the internal semantic structure of data, especially when the number of symbols or data points and the dimensionality of the real-valued embedding vectors are large. In this paper, we propose a regularized autoencoder framework to learn compact Hierarchical K-way D-dimensional (HKD) discrete embeddings of symbols or data points, aiming at capturing essential semantic structures of data. Experimental results on synthetic and real-world datasets show that our proposed HKD embedding can effectively reveal the semantic structure of data via hierarchical data visualization and greatly reduce the search space of nearest neighbor retrieval while preserving high accuracy.
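The storage argument behind K-way D-dimensional codes can be made concrete: D digits drawn from K values address K^D symbols in roughly D·log2(K) bits, versus a dense real vector per symbol. The sketch below shows only this discrete-code bookkeeping, not the paper's learned hierarchical embedding:

```python
# K-way D-dimensional discrete code: each symbol id becomes D digits
# in base K, so K**D symbols fit in D*log2(K) bits per symbol. The
# learned, hierarchical part of HKD embeddings is not reproduced here.
import math

def encode(symbol_id, K, D):
    """Map an integer id to its D base-K digits (most significant first)."""
    assert 0 <= symbol_id < K ** D
    digits = []
    for _ in range(D):
        symbol_id, r = divmod(symbol_id, K)
        digits.append(r)
    return digits[::-1]

def decode(digits, K):
    code = 0
    for d in digits:
        code = code * K + d
    return code

K, D = 16, 4                      # 16**4 = 65536 symbols in 16 bits
cid = encode(54321, K, D)
assert decode(cid, K) == 54321
print(cid, math.ceil(D * math.log2(K)), "bits")
```

In the learned setting, the digits are chosen so that symbols sharing a prefix are semantically related, which is what enables the hierarchical visualization the abstract mentions.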
4

Menon, Jai, Ranjit Desai, and Jay Buckey. "Constraint-Based Reverse Engineering From Ultrasound Cross-Sections." In ASME 1997 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/detc97/dfm-4365.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Abstract This paper extends the “cross-sectional” approach for reverse engineering, used abundantly in biomedical applications, to the mechanical domain. We propose a combination of “projective” and cross-sectional algorithms for handling physical artifacts with complex topology and geometry. In addition, the paper introduces the concept of constraint-based reverse engineering, where the constraint parameters could include one or more of the following: time, storage (memory, disk space), network bandwidth, Quality of Service (output resolution), and so forth. We describe a specific reverse-engineering application which uses ultrasound (tilt-echo) imaging to reverse engineer spatial enumeration (volume) representations from cross-sectional data. The constraint here is time, and we summarize how our implementation can satisfy real-time reconstruction for distribution of the volume data on the internet. We present results that show volume representations computed from static objects. Since the algorithms are tuned to satisfy time constraints, the method is extendable to reverse engineer temporally-varying (elastic) objects. The current reverse engineering processing time is constrained by the data-acquisition (tilt-echo imaging) process, and the entire reverse engineering pipeline has been optimized to compute incremental volume representations on the order of 3 seconds on a network of four processors.
5

Malmqvist, Johan. "A Design System for Parametric Design of Complex Products." In ASME 1990 Design Technical Conferences. American Society of Mechanical Engineers, 1990. http://dx.doi.org/10.1115/detc1990-0003.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Abstract In this report a design system for parametric design of complex products is presented. The system is based on product models for complex products and components. These models were implemented as general framework models using the data base management system TORNADO. The user creates models for specific products by filling out forms. Consequently, all products can be represented in a uniform way. This gives the advantage that all service software (for storage in the product data base, retrieval of component data etc.) associated to a certain product model can be shared. The only lines of code the designer writes when creating a new component family are the design rules of the component. In association with the models a design system for complex products is presented. Benefitting from the uniform representations of the products, the system lets the user control the design process, while a system kernel handles the constraint management. The product models and the design system have been applied to hydraulic cylinder design.
6

Paul, Razan, and Abu Sayed Md Latiful Hoque. "A storage & search efficient representation of medical data." In 2010 International Conference on Bioinformatics and Biomedical Technology. IEEE, 2010. http://dx.doi.org/10.1109/icbbt.2010.5478926.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
7

Tian, Yuan, Scott Klasky, Weikuan Yu, Bin Wang, Hasan Abbasi, Norbert Podhorszki, and Ray Grout. "DynaM: Dynamic Multiresolution Data Representation for Large-Scale Scientific Analysis." In 2013 IEEE 8th International Conference on Networking, Architecture, and Storage (NAS). IEEE, 2013. http://dx.doi.org/10.1109/nas.2013.21.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
8

Kern, Daniel, and Anna Thornton. "Structured Indexing of Process Capability Data." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/dfm-34180.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Process capability data can aid design by ensuring that part tolerances are achievable with current manufacturing capability. Many companies want to store process capability data in databases to make it available to all engineers. The success of a process capability database is highly dependent on the design of its structure and on a method to index data for ease of input and retrieval. In this paper, the authors describe a new method of representing characteristics of a manufactured component using the attributes of feature, geometry, material, and process. This representation enables better storage and retrieval of process capability data. In addition, the authors describe a method for rapidly and robustly indexing components' characteristics for entry into a process capability database.
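The four-attribute indexing scheme can be sketched as a tuple-keyed lookup table; the field values and Cpk numbers below are invented for illustration:

```python
# Sketch of indexing process capability records by the four attributes
# named above (feature, geometry, material, process). Values and Cpk
# numbers are invented for illustration.
capability_db = {}

def add_record(feature, geometry, material, process, cpk):
    """File a capability observation under its four-attribute key."""
    key = (feature, geometry, material, process)
    capability_db.setdefault(key, []).append(cpk)

def lookup(feature, geometry, material, process):
    """Return all capability observations for the given key."""
    return capability_db.get((feature, geometry, material, process), [])

add_record("hole", "cylindrical", "aluminum", "drilling", 1.33)
add_record("hole", "cylindrical", "aluminum", "drilling", 1.41)
print(lookup("hole", "cylindrical", "aluminum", "drilling"))
```

Keying on the full attribute tuple keeps input and retrieval symmetric: the same four answers that file a record also find it.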
9

McKenna, Ann F., Wei Chen, and Timothy W. Simpson. "Exploring the Impact of Virtual and Physical Dissection Activities on Students’ Understanding of Engineering Design Principles." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49783.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Product dissection has become a popular pedagogy for actively engaging engineering students in the classroom through practical hands-on experiences. Despite its numerous advantages, dissection of physical products has many drawbacks, including not only the costs required to start-up and maintain such activities but also the workspace and storage space needed for the products and tools used to dissect them. This paper presents results from on-going research that is investigating the extent to which dissection of virtual representations of products — what we refer to as virtual dissection — can be used in lieu of physical product dissection in the classroom. In particular, we found positive learning gains in students’ ability to identify and describe the function and production method of components contained in a hand-held power drill, for both physical and virtual dissection groups. However, the data also reveal differences in the overall maximum level attained as well as differences in the range and types of components identified between the groups. While we recognize that virtual dissection will never provide the same hands-on experiences as physical dissection, we contend that virtual dissection can be used effectively in the classroom to increase students’ understanding of engineering design principles. By substantiating this impact, we can help establish cost-effective sets of computer-based dissection activities that do not require extensive workspace and storage spaces and can be easily scaled to any size classroom.
10

Cevallos, Yesenia, Luis Tello-Oquendo, Deysi Inca, Nicolay Samaniego, Ivone Santillán, Amin Zadeh Shirazi, and Guillermo A. Gomez. "On the efficient digital code representation in DNA-based data storage." In NANOCOM '20: The Seventh Annual ACM International Conference on Nanoscale Computing and Communication. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3411295.3411314.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles

Organization reports on the topic "Data Storage Representations":

1

McPhedran, R., K. Patel, B. Toombs, P. Menon, M. Patel, J. Disson, K. Porter, A. John, and A. Rayner. Food allergen communication in businesses feasibility trial. Food Standards Agency, March 2021. http://dx.doi.org/10.46756/sci.fsa.tpf160.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
Abstract:
Background: Clear allergen communication in food business operators (FBOs) has been shown to have a positive impact on customers’ perceptions of businesses (Barnett et al., 2013). However, the precise size and nature of this effect is not known: there is a paucity of quantitative evidence in this area, particularly in the form of randomised controlled trials (RCTs). The Food Standards Agency (FSA), in collaboration with Kantar’s Behavioural Practice, conducted a feasibility trial to investigate whether a randomised cluster trial – involving the proactive communication of allergen information at the point of sale in FBOs – is feasible in the United Kingdom (UK). Objectives: The trial sought to establish: ease of recruitment of businesses into trials; customer response rates for in-store outcome surveys; fidelity of intervention delivery by FBO staff; sensitivity of outcome survey measures to change; and appropriateness of the chosen analytical approach. Method: Following a recruitment phase – in which one of fourteen multinational FBOs was successfully recruited – the execution of the feasibility trial involved a quasi-randomised matched-pairs clustered experiment. Each of the FBO’s ten participating branches underwent pair-wise matching, with similarity of branches judged according to four criteria: Food Hygiene Rating Scheme (FHRS) score, average weekly footfall, number of staff and customer satisfaction rating. The allocation ratio for this trial was 1:1: one branch in each pair was assigned to the treatment group by a representative from the FBO, while the other continued to operate in accordance with its standard operating procedure. Because this was a business-based feasibility trial, customers at participating branches throughout the fieldwork period were automatically enrolled in the trial. The trial was single-blind: customers at treatment branches were not aware that they were receiving an intervention.
All customers who visited participating branches throughout the fieldwork period were asked to complete a short in-store survey on a tablet affixed in branches. This survey contained four outcome measures which operationalised customers’: perceptions of food safety in the FBO; trust in the FBO; self-reported confidence to ask for allergen information in future visits; and overall satisfaction with their visit. Results: Fieldwork was conducted from the 3 – 20 March 2020, with cessation occurring prematurely due to the closure of outlets following the proliferation of COVID-19. n=177 participants took part in the trial across the ten branches; however, response rates (which ranged between 0.1 - 0.8%) were likely also adversely affected by COVID-19. Intervention fidelity was an issue in this study: while compliance with delivery of the intervention was relatively high in treatment branches (78.9%), erroneous delivery in control branches was also common (46.2%). Survey data were analysed using random-intercept multilevel linear regression models (due to the nesting of customers within branches). Despite the trial’s modest sample size, there was some evidence to suggest that the intervention had a positive effect for those suffering from allergies/intolerances for the ‘trust’ (β = 1.288, p<0.01) and ‘satisfaction’ (β = 0.945, p<0.01) outcome variables. Due to singularity within the fitted linear models, hierarchical Bayes models were used to corroborate the size of these interactions. Conclusions: The results of this trial suggest that a fully powered clustered RCT would likely be feasible in the UK. In this case, the primary challenge in the execution of the trial was the recruitment of FBOs: despite high levels of initial interest from four chains, only one took part. However, it is likely that the proliferation of COVID-19 adversely impacted chain participation – two other FBOs withdrew during branch eligibility assessment and selection, citing COVID-19 as a barrier. 
COVID-19 also likely lowered the on-site survey response rate: a significant negative Pearson correlation was observed between daily survey completions and COVID-19 cases in the UK, highlighting a likely relationship between the two. Limitations: The trial was quasi-random: selection of branches, pair matching and allocation to treatment/control groups were not systematically conducted. These processes were undertaken by a representative from the FBO’s Safety and Quality Assurance team (with oversight from Kantar representatives on pair matching), as a result of the chain’s internal operational restrictions.
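The pair-wise matching step can be sketched as greedy nearest-neighbour matching on the four stated criteria, range-normalised so no criterion dominates; in the trial itself matching was done manually by an FBO representative, and the branch data below are invented:

```python
# Sketch of pair-wise matching of branches on the four criteria named
# above (FHRS score, weekly footfall, staff count, satisfaction).
# Greedy nearest-neighbour matching on range-normalised criteria;
# assumes an even number of branches. Branch data are invented.
def match_pairs(branches):
    """branches: dict name -> (fhrs, footfall, staff, satisfaction)."""
    names = list(branches)
    dims = list(zip(*branches.values()))
    span = [max(d) - min(d) or 1.0 for d in dims]  # avoid divide-by-zero

    def dist(a, b):
        return sum(((x - y) / s) ** 2
                   for x, y, s in zip(branches[a], branches[b], span))

    pairs, used = [], set()
    for a in names:
        if a in used:
            continue
        used.add(a)
        b = min((x for x in names if x not in used), key=lambda x: dist(a, x))
        used.add(b)
        pairs.append((a, b))
    return pairs

pairs = match_pairs({
    "A": (5, 1200, 10, 4.5), "B": (5, 1150, 11, 4.4),
    "C": (3, 400, 5, 3.9),  "D": (3, 450, 6, 4.0),
})
print(pairs)
```

One branch per matched pair would then be allocated to treatment, giving the 1:1 ratio described in the method.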
