Academic literature on the topic 'Flexible indexing technique'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Flexible indexing technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Flexible indexing technique"

1

BARRANCO, CARLOS D., JESÚS R. CAMPAÑA, and JUAN M. MEDINA. "INDEXING FUZZY NUMERICAL DATA WITH A B+ TREE FOR FAST RETRIEVAL USING NECESSITY-MEASURED FLEXIBLE CONDITIONS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 17, supp01 (August 2009): 1–23. http://dx.doi.org/10.1142/s0218488509006005.

Full text
Abstract:
This paper proposes an indexing procedure for improving the performance of query processing on a fuzzy database. It focuses on the case when a necessity-measured atomic flexible condition is imposed on the values of a fuzzy numerical attribute. The proposal is to apply a classical indexing structure for numerical crisp data, a B+-tree, combined with a Hilbert curve. The use of such a common indexing technique makes its incorporation into current systems straightforward. The efficiency of the proposal is compared with that of another indexing procedure for similar fuzzy data and flexible query types. Experimental results reveal that the performance of the proposed method is comparable to, and more stable than, that of its competitor.
APA, Harvard, Vancouver, ISO, and other styles
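The abstract above combines possibility-theoretic necessity measures with an ordinary ordered index. Below is a minimal sketch of that idea only, not the authors' B+-tree/Hilbert-curve algorithm: trapezoidal fuzzy values (invented toy data) are keyed by the lower bound of their core so a sorted structure can prune candidates before the necessity degree is checked exactly on a grid.

```python
# Illustrative sketch: necessity-measured flexible conditions over trapezoidal
# fuzzy data, with a sorted list standing in for an ordered index (e.g. a B+-tree).
import bisect
import numpy as np

def trapezoid(a, b, c, d):
    """Membership function of a trapezoidal fuzzy number with a <= b <= c <= d."""
    def mu(x):
        x = np.asarray(x, dtype=float)
        rise = np.clip((x - a) / (b - a), 0.0, 1.0) if b > a else (x >= a).astype(float)
        fall = np.clip((d - x) / (d - c), 0.0, 1.0) if d > c else (x <= d).astype(float)
        return np.minimum(rise, fall)
    return mu

def necessity(condition_mu, datum_mu, grid):
    """Standard possibility-theory definition: N = inf_x max(condition(x), 1 - datum(x))."""
    return float(np.min(np.maximum(condition_mu(grid), 1.0 - datum_mu(grid))))

# Toy records keyed by the lower bound of each fuzzy value's core (a crisp number).
records = sorted(
    [(12.0, trapezoid(10, 12, 14, 16)),
     (32.0, trapezoid(30, 32, 33, 35)),
     (20.0, trapezoid(18, 20, 22, 24))],
    key=lambda r: r[0],
)
keys = [r[0] for r in records]

condition = trapezoid(15, 18, 40, 45)   # flexible condition: "roughly between 18 and 40"
grid = np.linspace(0, 50, 2001)

# A record whose core starts below the condition's support (15) has necessity 0,
# so the ordered index can prune it before the exact (grid-based) check.
start = bisect.bisect_left(keys, 15.0)
hits = [r for r in records[start:] if necessity(condition, r[1], grid) >= 0.8]
print(len(hits))                        # 2 records satisfy the condition with necessity >= 0.8
```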
2

Sengupta, Arijit, and Ramesh Venkataraman. "DocBase." Journal of Database Management 22, no. 4 (October 2011): 30–56. http://dx.doi.org/10.4018/jdm.2011100102.

Full text
Abstract:
This article introduces a complete storage and retrieval architecture for a database environment for XML documents. DocBase, a prototype system based on this architecture, uses a flexible storage and indexing technique to allow highly expressive queries without the necessity of mapping documents to other database formats. DocBase is an integration of several techniques that include (i) a formal model called Heterogeneous Nested Relations (HNR), (ii) a conceptual model XER (Extensible Entity Relationship), (iii) formal query languages (Document Algebra and Calculus), (iv) a practical query language (Document SQL or DSQL), (v) a visual query formulation method with QBT (Query By Templates), and (vi) the DocBase query processing architecture. This paper focuses on the overall architecture of DocBase including implementation details, describes the details of the query-processing framework, and presents results from various performance tests. The paper summarizes experimental and usability analyses to demonstrate its feasibility as a general architecture for native as well as embedded document manipulation methods.
APA, Harvard, Vancouver, ISO, and other styles
3

Ramzan, Bajwa, Kazmi, and Amna. "Challenges in NoSQL-Based Distributed Data Storage: A Systematic Literature Review." Electronics 8, no. 5 (April 30, 2019): 488. http://dx.doi.org/10.3390/electronics8050488.

Full text
Abstract:
Key-Value stores (KVSs) are the most flexible and simplest model of NoSQL databases, which have become highly popular over the last few years due to their salient features such as availability, portability, reliability, and low operational cost. From the perspective of software engineering, the chief obstacle for KVSs is to achieve software quality attributes (consistency, throughput, latency, security, performance, load balancing, and query processing) to ensure quality. The presented research is a Systematic Literature Review (SLR) to find the state-of-the-art research in the KVS domain and, through doing so, determine the major challenges and solutions. This work reviews 45 papers published between 2010 and 2018 that were found to be closely relevant to our study area. The results show that performance is addressed in 31% of the studies, consistency in 20%, latency and throughput in 16%, query processing in 13%, security in 11%, and load balancing in 9%. Different models are used for execution. The indexing technique was used in 20% of the studies, the hashing technique in 13%, the caching and security techniques together in 9%, the batching technique in 5%, the encoding and Paxos techniques together in 4%, and other techniques in the remaining 36%. This systematic review will enable researchers to design key-value stores as efficient storage. Regarding future work, trust and privacy are quality attributes that remain to be addressed; KVSs are an emerging technology whose widespread popularity opens the way to deploying them with proper protection.
APA, Harvard, Vancouver, ISO, and other styles
4

Otepka, J., G. Mandlburger, M. Schütz, N. Pfeifer, and M. Wimmer. "EFFICIENT LOADING AND VISUALIZATION OF MASSIVE FEATURE-RICH POINT CLOUDS WITHOUT HIERARCHICAL ACCELERATION STRUCTURES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 293–300. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-293-2020.

Full text
Abstract:
Nowadays, point clouds are the standard product when capturing reality, independent of scale and measurement technique. In particular, Dense Image Matching (DIM) and Laser Scanning (LS) are state-of-the-art capturing methods for a great variety of applications, producing detailed point clouds of up to billions of points. In-depth analysis of such huge point clouds typically requires sophisticated spatial indexing structures to support potentially long-lasting automated non-interactive processing tasks like feature extraction, semantic labelling, surface generation, and the like. Nevertheless, a visual inspection of the point data is often necessary to obtain an impression of the scene and to roughly check the completeness, quality, and outlier rates of the captured data in advance. Intermediate processing results, containing additional per-point computed attributes, may also require visual analysis to draw conclusions or to parameterize further processing. Over the last decades a variety of commercial, free, and open source viewers have been developed that can visualise huge point clouds and colorize them based on available attributes. However, they have either poor loading and navigation performance, visualize only a subset of the points, or require the creation of spatial indexing structures in advance. In this paper, we evaluate a progressive method that is capable of rendering any point cloud that fits in GPU memory in real time, without the need for time-consuming hierarchical acceleration structure generation. In combination with our multi-threaded LAS and LAZ loaders, we achieve load performance of up to 20 million points per second, display points already while loading, support flexible switching between different attributes, and render up to one billion points with visually appealing navigation behaviour. Furthermore, loading times of different data sets for different open source and commercial software packages are analysed.
APA, Harvard, Vancouver, ISO, and other styles
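The loading strategy described above lends itself to a simple illustration. The sketch below is not the paper's loader: it assumes a plain binary file of packed float32 XYZ triplets (a stand-in for LAS/LAZ parsing) and shows how chunked, multi-threaded reads can hand point batches to a renderer while the rest of the file is still being read.

```python
# Illustrative progressive loader: chunked, multi-threaded reads of a flat
# binary point file, yielding batches as soon as each worker finishes.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

POINT_SIZE = 3 * 4                      # three float32 coordinates per point

def read_chunk(path, offset_points, count_points):
    """Read `count_points` points starting at `offset_points` into an (N, 3) array."""
    with open(path, "rb") as f:
        f.seek(offset_points * POINT_SIZE)
        raw = f.read(count_points * POINT_SIZE)
    return np.frombuffer(raw, dtype=np.float32).reshape(-1, 3)

def load_progressively(path, total_points, chunk_points=1_000_000, workers=4):
    """Yield point batches while later chunks are still being read."""
    offsets = range(0, total_points, chunk_points)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(read_chunk, path, off,
                               min(chunk_points, total_points - off))
                   for off in offsets]
        for fut in futures:
            yield fut.result()          # hand each batch to the renderer here

# Example (assumes 'points.bin' holds n packed XYZ float32 records):
# for batch in load_progressively("points.bin", n):
#     upload_to_gpu(batch)             # hypothetical rendering hook
```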
5

You, Jane, Qin Li, and Jinghua Wang. "On Hierarchical Content-Based Image Retrieval by Dynamic Indexing and Guided Search." International Journal of Cognitive Informatics and Natural Intelligence 4, no. 4 (October 2010): 18–36. http://dx.doi.org/10.4018/jcini.2010100102.

Full text
Abstract:
This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing. It also provides an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features. Experimental results confirm that the new approach is feasible for content-based image retrieval.
APA, Harvard, Vancouver, ISO, and other styles
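As a rough illustration of the wavelet-based feature idea mentioned above (a generic sketch, not the authors' pipeline; images and names are synthetic), the snippet below computes a single-level Haar-style decomposition directly in NumPy and uses the sub-band energies as a compact feature vector for nearest-neighbour matching.

```python
# Illustrative wavelet-style features for image indexing: Haar-style sub-band
# energies as a small descriptor, compared with a Euclidean nearest neighbour.
import numpy as np

def haar_features(img):
    """Return energies of the LL/LH/HL/HH sub-bands of one Haar-style level."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]; c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return np.array([np.mean(band ** 2) for band in (ll, lh, hl, hh)])

rng = np.random.default_rng(2)
database = {
    "smooth": np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)),
    "stripes": np.tile([0.0, 1.0], (64, 32)),
    "noise": rng.random((64, 64)),
}
features = {name: haar_features(img) for name, img in database.items()}

# Query: a slightly perturbed copy of one database image.
query = haar_features(database["stripes"] + 0.01 * rng.random((64, 64)))
best = min(features, key=lambda name: np.linalg.norm(features[name] - query))
print(best)   # expected: 'stripes'
```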
6

Kim, Sung-Hwan, Ki-Joune Li, and Hwan-Gue Cho. "A Flexible Framework for Covering and Partitioning Problems in Indoor Spaces." ISPRS International Journal of Geo-Information 9, no. 11 (October 23, 2020): 618. http://dx.doi.org/10.3390/ijgi9110618.

Full text
Abstract:
Utilizing indoor spaces has become important with the progress of localization and positioning technologies. Covering and partitioning problems play an important role in managing, indexing, and analyzing spatial data. In this paper, we propose a multi-stage framework for indoor space partitioning, each stage of which can be flexibly adjusted according to the target application. One of the main features of our framework is the parameterized constraint, which characterizes the properties and restrictions of the unit geometries used for the covering and partitioning tasks formulated as binary linear programs. It enables us to apply the proposed method to various problems by simply changing the constraint parameter. We present basic constraints that are widely used in many covering and partitioning problems in indoor space applications, along with several techniques that simplify the computation process. We apply the framework to two particular applications, device placement and route planning, to give examples of how to design a constraint and how to use the resulting partitions. We also demonstrate its effectiveness with experimental results compared against baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
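The binary linear programs mentioned above can be illustrated with a tiny set-cover formulation. This is a hedged sketch, not the paper's parameterized constraint system: the cells, candidate geometries, and the choice of the PuLP modelling library are all assumptions made for the example.

```python
# Illustrative binary linear program: choose the fewest candidate unit
# geometries so that every indoor cell is covered at least once.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

cells = ["c1", "c2", "c3", "c4", "c5"]          # toy indoor cells to cover
candidates = {                                   # candidate geometry -> cells it covers
    "g1": {"c1", "c2"},
    "g2": {"c2", "c3", "c4"},
    "g3": {"c4", "c5"},
    "g4": {"c1", "c3", "c5"},
}

x = {g: LpVariable(f"use_{g}", cat=LpBinary) for g in candidates}

prob = LpProblem("indoor_cover", LpMinimize)
prob += lpSum(x.values())                        # objective: fewest geometries
for c in cells:                                  # constraint: every cell covered
    prob += lpSum(x[g] for g, covered in candidates.items() if c in covered) >= 1

prob.solve()
chosen = [g for g in candidates if x[g].value() > 0.5]
print(LpStatus[prob.status], chosen)             # e.g. Optimal ['g2', 'g4']
```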
7

Smith, David G. "A New Form of Water Quality Index for Rivers and Streams." Water Science and Technology 21, no. 2 (February 1, 1989): 123–27. http://dx.doi.org/10.2166/wst.1989.0038.

Full text
Abstract:
To assist in the dissemination of water quality information to lay-people in particular, four suitability-for-use water quality indexes have been developed. The water uses are: General, Bathing, Supply, and Fish Spawning, although in the Bathing and Supply Indexes protection of aquatic life is also considered. To ensure that they tell us something useful and do not 'hide' important information as current indexing systems tend to do, the Minimum Operator has been employed as the sub-index aggregation mechanism. This is a robust, sensible, and flexible method and seems more appropriate for this type of index than the more commonly used techniques (e.g. additive and multiplicative). Index development has been keyed into proposed New Zealand water legislation, although this is not a prerequisite for the indexes' use.
APA, Harvard, Vancouver, ISO, and other styles
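A tiny numerical illustration of the aggregation point made in the abstract (the sub-index names and values are invented): with the minimum operator, one poor determinand drives the overall index, whereas an additive (mean) aggregation can mask it.

```python
# Minimum-operator vs additive aggregation of water quality sub-indices.
sub_indices = {"dissolved_oxygen": 92, "ammonia": 35, "clarity": 88, "faecal_coliforms": 90}

minimum_index = min(sub_indices.values())                       # 35 -> flags the problem
additive_index = sum(sub_indices.values()) / len(sub_indices)   # 76.25 -> looks acceptable

print(minimum_index, round(additive_index, 2))
```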
8

Zdravkovski, Zoran. "Macedonian Journal of Chemistry and Chemical Engineering: Open Journal Systems - Editor's Perspective." PRILOZI 35, no. 3 (December 1, 2014): 51–55. http://dx.doi.org/10.1515/prilozi-2015-0008.

Full text
Abstract:
The development and availability of personal computers and software as well as printing techniques in the last twenty years have made a profound change in the publication of scientific journals. Additionally, the Internet in the last decade has revolutionized the publication process to the point of changing the basic paradigm of printed journals. The Macedonian Journal of Chemistry and Chemical Engineering in its 40-year history has adopted and adapted to all these transformations. In order to keep up with the inevitable changes, as editor-in-chief I felt my responsibility was to introduce electronic editorial management of the journal. The choice was between commercial and open source platforms, and because of the limited funding of the journal we chose the latter. We decided on Open Journal Systems, which provided online submission and management of all content, had flexible configuration (requirements, sections, review process, etc.), had options for comprehensive indexing, offered various reading tools, had email notification and commenting ability for readers, had an option for thesis abstracts, and was installed locally. However, since support is limited, it requires moderate computer knowledge and effort to set up. Overall, it is an excellent editorial platform and a convenient solution for journals with a low budget or journals that do not want to spend their resources on commercial platforms or that simply support the idea of open source software.
APA, Harvard, Vancouver, ISO, and other styles
9

Alkadi, Ihssan. "Data Mining." Review of Business Information Systems (RBIS) 12, no. 1 (January 1, 2008): 17–24. http://dx.doi.org/10.19030/rbis.v12i1.4394.

Full text
Abstract:
Recently, data mining has become more popular in the information industry, due to the availability of huge amounts of data. Industry needs to turn such data into useful information and knowledge. This information and knowledge can be used in many applications ranging from business management, production control, and market analysis, to engineering design and science exploration. Database and information technology have been evolving systematically from primitive file processing systems to sophisticated and powerful database systems. The research and development in database systems has led to the development of relational database systems, data modeling tools, and indexing and data organization techniques. In relational database systems data are stored in relational tables. In addition, users can get convenient and flexible access to data through query languages, optimized query processing, user interfaces, transaction management, and optimized methods for On-Line Transaction Processing (OLTP). This abundance of data, without powerful analysis tools, has been described as a data-rich but information-poor situation: a fast-growing, tremendous amount of data is collected and stored in large and numerous databases, far more than humans can analyze. Without powerful tools to analyze it, data collected in large databases become data tombs, archives that are seldom visited. As a result, important decisions are often based not on the information-rich data stored in databases but on a decision maker's intuition, because the decision maker does not have the tools to extract the valuable knowledge embedded in the vast amounts of data. Data mining tools that perform data analysis can uncover important data patterns, contributing greatly to business strategies, knowledge bases, and scientific and medical research, turning data tombs into golden nuggets of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
10

Attiogbé, Christian, Flavio Ferrarotti, and Sofian Maabout. "Advances and Challenges for Model and Data Engineering." JUCS - Journal of Universal Computer Science 27, no. 7 (July 28, 2021): 646–49. http://dx.doi.org/10.3897/jucs.70972.

Full text
Abstract:
Following the stimulating discussions in the workshops held during the 9th International Conference on Model and Data Engineering (MEDI 2019), we proposed to edit a special issue compiling the fruitful research resulting from those discussions. This special issue on current research in model and data engineering of the Journal of Universal Computer Science is the outcome of that proposal. As such, it contains thoroughly revised and significantly extended versions of key papers discussed at MEDI 2019 workshops. The main objective of MEDI is to provide a forum for the dissemination of research accomplishments and to promote the interaction and collaboration between the models and data research communities. MEDI provides an international platform for the presentation of research on models and data theory, the development of advanced technologies related to models and data, and their advanced applications. This international scientific event, initiated by researchers from Euro-Mediterranean countries in 2011, also aims at promoting the creation of north-south scientific networks, projects and faculty/student exchanges. The following seven accepted papers nicely reflect the wide range of topics covered by MEDI conferences. In their paper “Enhancing GDPR Compliance Through Data Sensitivity and Data Hiding Tools”, Xabier Larrucea, Micha Moffie and Dan Mor consider the problem of fulfilling the rules set by the General Data Protection Regulation (GDPR) of the EU within the framework of the reference architectural model industry 4.0 for the healthcare sector. This is challenging due to the highly sensitive data managed by this sector and the need to share this data between different national healthcare providers within the EU. The authors propose and implement a series of valuable tools to enhance security and privacy in this context as well as compliance with the GDPR. They also illustrate through a case study the use of the proposed tools for sharing health records and their integration within the reference framework. In their paper “BSO-MV: An Optimized Multiview Clustering Approach for Items Recommendation in Social Networks”, Lamia Berkani, Lylia Betit and Louiza Belarif present a new approach to improve the accuracy and coverage of clustering-based recommendation systems for social networks. The approach is based on improving the results of multiview clustering by combining it with a bees swarm optimization algorithm. Through extensive experimentation with two real-world datasets, they are able to demonstrate the effectiveness of the proposed approach to significantly improve accuracy, outperforming other clustering-based approaches. In their paper “A Formal Model for Configurable Business Process with Optimal Cloud Resource Allocation”, Abderrahim Ait Wakrime, Souha Boubaker, Slim Kallel, Emna Guermazi and Walid Gaaloul propose a formal approach to analyse and verify configurable business process models as well as to optimize the cost of their implementation in the Cloud. The mechanism consists of transforming the problem into an equivalent Boolean satisfiability problem (SAT), which is then fed to a solver. This transformation is done by means of translation rules from configurable business processes to SAT. The model formalizes the different configurable process behaviors including control-flow and cloud resource allocations, enabling the derivation of correct configuration variants.
Weighted partial SAT formulae are integrated in the model in order to optimize the global cloud resource allocation cost. In their paper “Towards a Semantic Graph-based Recommender System: A Case Study of Cultural Heritage”, Sara Qassimi and El Hassan Abdelwahed present a semantic graph-based recommender system of cultural heritage places. Their approach consists of first constructing an emergent description that semantically augments the information about the places of interest, and then modelling through graphs the semantic relationships between similar cultural heritage places and their associated tags. Note that the unsupervised nature of folksonomy tags semantically weakens the description of resources, which in turn hinders their indexing and decreases the quality of their classification and clustering. The semantic augmentation produced by the proposed method in the case study of cultural heritage places in Marrakesh city proves to be an effective tool to fight information overload and to produce better recommendations in this context. As such, the paper presents a valuable contribution that can be used to improve the quality of recommender systems in general. In their paper “Assembling the Web of Things and Microservices for the Management of Cyber-Physical Systems”, Manel Mena, Javier Criado, Luis Iribarne and Antonio Corral face the challenge of facilitating communication between the diverse devices and protocols used by Cyber-Physical Systems (CPS) and the Internet of Things (IoT). They propose an approach based on the concept of digital dice (an abstraction of various objects). The digital dice builds on the web of things standard. It is based on microservices and capable of handling the interaction and virtualization of IoT devices. This work introduces a technique to build, transform and compose digital dices from descriptions of “things”. A full transformation flow is presented and a case study is used to illustrate its implementation. The proposal is shown to be effective and flexible, improving the state of the art. In their paper “Model-Driven Engineering for End-Users in the Loop in Smart Ambient Systems”, Sylvie Trouilhet, Jean-Paul Arcangeli, Jean-Michel Bruel and Maroun Koussaifi present a Model-Driven Engineering (MDE) approach to involve the user in the process of constructing, at run time, component-based applications adapted to a situation and user needs, in the context of ambient systems. The proposed solution relies on several domain-specific languages and a transformation process, based on established MDE tools (Gemoc Studio, Eclipse Modeling Framework, EcoreTools, Sirius and Acceleo). In this context, the authors describe an innovative way of reinforcing the place of the user in the engineering loop. The authors propose an editor that allows the end user to be aware of the emerging applications resulting from this process, to understand their function and use, and to modify them if desired. From these actions, feedback data are extracted to improve the process. In their paper “An Approach for Testing False Data Injection Attack on Data Dependent Industrial Devices”, Mathieu Briland and Fabrice Bouquet present a domain-specific language (DSL) for generating test data for IoT devices/environments. The DSL is proposed for testing and simulating false data injection attacks (FDIA). First, the paper outlines a generic approach for FDIA and presents a list of possible sensor types and a categorization schema for data obtained from sensors.
Then, the application of the DSL is illustrated using two examples: a simple one altering the data obtained from a temperature sensor, and a more complex one concurrently altering the data obtained from three particle sensors. The authors show that their approach works well in the case study of the Flowbird parking meter system and discuss how it can be adapted to different application domains. We are grateful to all authors of journal articles in this issue, who contributed to a fine collection of research in model and data engineering. We would like to express our greatest thanks to all reviewers, who put in a lot of time reading the articles and making substantial suggestions for improvement, which in the end led to the high quality of the published articles. We also would like to thank the J.UCS evaluation committee for the opportunity to publish this collection of research articles as a special issue of the Journal of Universal Computer Science, and in particular the publishing managers Dana Kaiser and Johanna Zeisberg for their tireless assistance during the whole process. Last but not least, we would like to acknowledge our host institutions, the University of Nantes and the Software Competence Center Hagenberg (SCCH), for their support and sponsorship of this special issue. In particular, Prof. Yamine Ait-Ameur and his host institute IRIT/INP-ENSEEIHT have significantly collaborated with this special issue in the framework of the COMET scientific partnership agreement with SCCH, and have also supported the MEDI conference from which it originated. Christian Attiogbé, Flavio Ferrarotti and Sofian Maabout (July, 2021)
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Flexible indexing technique"

1

Tjondronegoro, Dian W. "Content-based video indexing for sports applications using integrated multi-modal approach." Deakin University. School of Information Technology, 2005. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20051110.122059.

Full text
Abstract:
This thesis presents a research work based on an integrated multi-modal approach for sports video indexing and retrieval. By combining specific features extractable from multiple (audio-visual) modalities, generic structure and specific events can be detected and classified. During browsing and retrieval, users will benefit from the integration of high-level semantic and some descriptive mid-level features such as whistle and close-up view of player(s). The main objective is to contribute to the three major components of sports video indexing systems. The first component is a set of powerful techniques to extract audio-visual features and semantic contents automatically. The main purposes are to reduce manual annotations and to summarize the lengthy contents into a compact, meaningful and more enjoyable presentation. The second component is an expressive and flexible indexing technique that supports gradual index construction. Indexing scheme is essential to determine the methods by which users can access a video database. The third and last component is a query language that can generate dynamic video summaries for smart browsing and support user-oriented retrievals.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Flexible indexing technique"

1

Yazıcı, Adnan, Çagrı İnce, and Murat Koyuncu. "An Indexing Technique for Similarity-Based Fuzzy Object-Oriented Data Model." In Flexible Query Answering Systems, 334–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-25957-2_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Borin, Lars, and Dimitrios Kokkinakis. "Literary Onomastics and Language Technology." In Literary Education and Digital Learning, 53–78. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-932-8.ch003.

Full text
Abstract:
In this chapter, the authors describe the development and application of language technology for intelligent information access to the content of digitized cultural heritage collections in the form of Swedish classical literary works. This technology offers sophisticated and flexible support functions to literary scholars and researchers. The authors focus on one kind of text processing technology (named entity recognition) and one research field (literary onomastics), but argue that the techniques involved are quite general and can be further developed in a number of directions. In this way, the authors aim at supporting the users of digitized literature collections with tools that enable semantic search, browsing and indexing of texts. In this sense, the authors offer new ways of exploring the large volumes of literary texts being made available through national cultural heritage digitization projects. Keywords: Language technology; Computational linguistics; Natural language processing; Literary onomastics; Named entity recognition; Corpus linguistics; Corpus annotation; Digital resources; Text technology; Cultural heritage.
APA, Harvard, Vancouver, ISO, and other styles
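To make the indexing point concrete, here is a toy, gazetteer-based sketch, not the chapter's NER system (the names and sentence are invented), showing how recognized character names can be turned into a searchable index of their positions in a text.

```python
# Illustrative gazetteer lookup: build an index from character name to the
# character offsets where it occurs, the kind of index that supports search
# and browsing of a literary text.
import re
from collections import defaultdict

gazetteer = {"Gösta Berling", "Elizabeth Bennet", "Mr. Darcy"}
text = "Elizabeth Bennet first meets Mr. Darcy at the ball; Mr. Darcy says little."

name_index = defaultdict(list)
for name in gazetteer:
    for match in re.finditer(re.escape(name), text):
        name_index[name].append(match.start())

print(dict(name_index))   # e.g. {'Elizabeth Bennet': [0], 'Mr. Darcy': [29, 52]}
```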

Conference papers on the topic "Flexible indexing technique"

1

Sajjad, Farasdaq, Jemi Jaenudin, Steven Chandra, Alvin Wirawan, Annisa Prawesti, M. Gemareksha Muksin, Wisnu Agus Nugroho, Ecep Muhammad Mujib, and Savinatun Naja. "Data-Driven Multi-Asset Optimisation Under Uncertainty: A Case Study Using the New Indonesia's Fiscal Policy." In International Petroleum Technology Conference. IPTC, 2021. http://dx.doi.org/10.2523/iptc-21425-ms.

Full text
Abstract:
Optimizing multiple assets under uncertain techno-economic conditions and tight government policies is challenging. The operator needs to establish flexible Plans of Development (PODs) and prioritize the development of multiple fields. The complexity of production and the profit margin should be evaluated simultaneously. In this work, we present a new workflow to perform such a rigorous optimization under uncertainty, using the case study of PHE ONWJ, Indonesia. We begin the workflow by identifying the uncertain parameters and their prior distributions. We classify the parameters into three main groups: operations-related (geological complexity, reserves, current recovery, surface facilities, and technologies), company-policy-related (future exploration plan, margin of profit, and oil/gas price), and government-related (taxes, incentives, and fiscal policies). A unique indexing technique is developed to allow numerical quantification and adapt to dynamic input. We then start the optimization process by constructing a time-dependent surrogate model through training with Monte Carlo sampling. We then perform optimization under uncertainty with multiple scenarios. The objective function is the overall Net Present Value (NPV) obtained by developing multiple fields. This work emphasizes the importance of the time-dependent surrogate approach for accounting for risk in the optimization process. The approach revises the prior distribution into a narrow-variance distribution to support reliable decisions. A Global Sensitivity Analysis (GSA) with Sobol decomposition on the posterior distribution and surrogate provides a ranking of the parameters and a list of heavy hitters. The first output from this workflow is the narrow-variance posterior distribution. This result helps to locate the sweet spots. By analyzing them, the operator can address specific sectors that are critical to the NPV. PHE ONWJ, as the biggest operator in Indonesia, has geologically scattered assets; therefore, this first output is essential. The second output is the list of heavy hitters from the GSA. This list is a tool to cluster promising fields for future development and prioritize their development based on their impact on NPV. Since all risks are carried by the operator under the current Gross Split Contract, this result is advantageous for the decision-making process. We introduce a new approach to perform time-dependent, multi-asset optimization under uncertainty. This new workflow helps operators create robust decisions after considering the associated risks.
APA, Harvard, Vancouver, ISO, and other styles
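The workflow's first step, sampling uncertain inputs and propagating them to an NPV distribution, can be sketched as below. Everything here is illustrative: the distributions, parameter names, and single-field cash-flow model are assumptions made for the example, not PHE ONWJ figures, and the time-dependent surrogate and Sobol GSA stages are omitted.

```python
# Illustrative Monte Carlo propagation of uncertain techno-economic inputs
# to a distribution of Net Present Value (NPV).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
years = np.arange(1, 11)

oil_price = rng.normal(65.0, 10.0, size=n)                        # USD/bbl
reserves = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n)    # MMbbl
opex_frac = rng.uniform(0.25, 0.40, size=n)                       # share of gross revenue
discount = 0.10

# Simple declining-production cash-flow model (purely illustrative).
production = reserves[:, None] * 0.15 * 0.9 ** (years[None, :] - 1)   # MMbbl/yr
revenue = production * oil_price[:, None]
cash_flow = revenue * (1.0 - opex_frac[:, None])
npv = (cash_flow / (1.0 + discount) ** years[None, :]).sum(axis=1)    # MMUSD

print(f"P10/P50/P90 NPV: {np.percentile(npv, [10, 50, 90]).round(1)}")
```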
2

Morrison, James, David Christie, Charles Greenwood, Ruairi Maciver, and Arne Vogler. "Software Analysis Tools for Wave Sensors." In ASME 2015 34th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/omae2015-41852.

Full text
Abstract:
This paper presents a set of software tools for interrogating and processing time series data. The functionality of this toolset is demonstrated using data from a specific deployment involving multiple sensors deployed for a specific time period. The approach was developed initially for Datawell Waverider MKII/MKII buoys [1] and expanded to include data from acoustic devices, in this case Nortek AWACs. Tools of this nature are important to address a specific lack of features in the sensor manufacturers' own tools. They also help to develop standard approaches for dealing with anomalous data from sensors. These software tools build upon an effective modern interpreted programming language, in this case Python, which has access to high-performance low-level libraries. This paper demonstrates the use of these tools applied to a sensor network based on the North West coast of Scotland as described in [2,3]. Examples show computationally complex quantities, such as monthly averages, being calculated easily. Analysis down to a wave-by-wave basis is also demonstrated from the same source dataset. The tools make use of a flexible data structure called a DataFrame, which supports mixed data types, hierarchical and time indexing, and is integrated with modern plotting libraries. This allows sub-second querying and the ability to dynamically plot large datasets. By using modern compression techniques and file formats it is possible to process datasets which are larger than memory without the need for a traditional relational database. The software library should be of use to a wide variety of industries involved in offshore engineering, as well as to scientists interested in the coastal environment.
APA, Harvard, Vancouver, ISO, and other styles
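A minimal sketch of the DataFrame-centred workflow the paper describes, under stated assumptions: the column names and synthetic half-hourly records stand in for real buoy/AWAC output, and the monthly averaging shows the kind of computation such a time-indexed structure makes easy.

```python
# Illustrative time-indexed wave records and a monthly average of
# significant wave height using pandas.
import numpy as np
import pandas as pd

# Synthetic half-hourly records standing in for buoy/AWAC output.
index = pd.date_range("2014-01-01", "2014-12-31 23:30", freq="30min")
rng = np.random.default_rng(1)
df = pd.DataFrame(
    {
        "hs": rng.gamma(shape=2.0, scale=1.2, size=len(index)),   # significant wave height (m)
        "tp": rng.normal(9.0, 2.0, size=len(index)),              # peak period (s)
    },
    index=index,
)

monthly = df.resample("MS").mean()        # monthly means straight from the time index
print(monthly.head(3).round(2))

# monthly["hs"].plot()                    # plotting integrates directly with the DataFrame
```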