Dissertations on the topic "Query Executor"

To view other types of publications on this topic, follow the link: Query Executor.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles.


Consult the top 36 dissertations for your research on the topic "Query Executor".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Zeuch, Steffen. "Query Execution on Modern CPUs." Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19296.

Full text of the source
Abstract:
Over the last decades, database systems have been migrated from disk to memory architectures such as RAM, Flash, or NVRAM. Research has shown that this migration fundamentally shifts the performance bottleneck upwards in the memory hierarchy. Whereas disk-based database systems were largely dominated by disk bandwidth and latency, in-memory database systems mainly depend on the efficiency of faster memory components, e.g., RAM, caches, and registers. To meet these challenges and unlock the full potential of the processing power of modern CPUs for database systems, this thesis proposes four approaches to reduce the impact of the Memory Wall. First, SIMD instructions increase cache line utilization and decrease the number of executed instructions if they operate on an appropriate data layout. Thus, we adapt tree structures for processing with SIMD instructions so that the demands on the memory bus and the processing units are reduced. Second, by modeling and executing queries following a unified model, we are able to achieve high resource utilization. Therefore, we propose a unified model that enables us to utilize knowledge about the query plan and the underlying hardware to optimize query execution. Third, we need fundamental knowledge about the individual database operators and their behavior and requirements to optimally distribute the resources among the available computing units. We conduct an in-depth analysis of different workloads using performance counters to create these insights. Fourth, we propose a non-invasive progressive optimization approach based on in-depth knowledge of individual operators that is able to optimize query execution at run time. In sum, using additional run-time statistics gathered by performance counters, a unified model, and SIMD instructions, this thesis improves query execution on modern CPUs.
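The first approach adapts tree structures so that searching a node can use data-parallel comparisons instead of a scalar binary search. The sketch below is only an illustration of that idea, assuming numpy as a stand-in for SIMD intrinsics; it is not code from the thesis.

```python
# A minimal sketch (not from the thesis) of SIMD-style tree search: compare a
# probe key against all separator keys of a node in one data-parallel step.
import numpy as np

class Node:
    def __init__(self, keys, children=None, values=None):
        self.keys = np.asarray(keys, dtype=np.int64)  # sorted separators
        self.children = children                      # inner node: list of Nodes
        self.values = values                          # leaf node: payloads

def lookup(node, key):
    while node.children is not None:
        # One vectorized comparison over the whole node (the "SIMD" step):
        # the number of separators <= key is the index of the child to follow.
        idx = int(np.count_nonzero(node.keys <= key))
        node = node.children[idx]
    # Leaf: again a single data-parallel comparison instead of a scalar scan.
    hits = np.nonzero(node.keys == key)[0]
    return node.values[hits[0]] if hits.size else None

# Tiny example tree: a root with two leaves.
leaf0 = Node([1, 3, 5], values=["a", "b", "c"])
leaf1 = Node([7, 9, 11], values=["d", "e", "f"])
root = Node([7], children=[leaf0, leaf1])
print(lookup(root, 9))   # -> "e"
print(lookup(root, 4))   # -> None
```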
APA, Harvard, Vancouver, ISO, and other styles
2

AYRES, FAUSTO VERAS MARANHAO. "QEEF: AN EXTENSIBLE QUERY EXECUTION ENGINE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5110@1.

Full text of the source
Abstract:
CENTRO FEDERAL DE EDUCAÇÃO TECNOLÓGICA CELSO SUCKOW FONSECA
Query processing in traditional Database Management Systems (DBMSs) has been extensively studied in the literature and adopted in industry. Such success is, in part, due to the performance of their Query Execution Engines (QEEs) in supporting the traditional query execution model. The advent of new query scenarios, mainly due to the web computational model, has motivated research on new execution models, such as adaptive and continuous execution, and on semistructured data models, such as XML, both not natively supported by traditional query engines. This thesis proposes the development of an extensible QEE adapted to the new execution and data models. To achieve this goal, we use a software design approach based on the framework technique to produce the Query Execution Engine Framework (QEEF). Moreover, we address the question of the orthogonality between execution and data models, which allows for executing query execution plans (QEPs) with fragments in different models. The extensibility of our solution is specified by an execution meta-model named QUEM (QUery Execution Meta-model), used to express different models in a meta-QEP. During query evaluation, the latter is pre-processed by the QEEF, producing a final QEP to be evaluated by the running QEE. As part of the validation of this proposal, the QEEF is instantiated for different execution and data models.
APA, Harvard, Vancouver, ISO, and other styles
3

Lundquist, Andreas. "Combining Result Size Calculation and Query Execution for the GraphQL Query Language." Thesis, Linköpings universitet, Databas och informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167086.

Full text of the source
Abstract:
GraphQL is an open-source framework for creating adaptable API layers and was developed and released by Facebook in 2015. GraphQL includes both a query language and an execution engine and has quickly gained popularity among developers. However, GraphQL suffers from a problem: certain types of queries can generate huge response objects that grow exponentially in size. A recent research paper proposed a solution to this problem in the form of an algorithm that calculates the result size of a query without executing the query itself. The algorithm makes it possible to decide whether a query should be executed based on the size of its response. In an implementation and evaluation of this algorithm, it was found that for simple queries, running the algorithm takes approximately the same time as executing the query, which means that the total query processing time is doubled. This thesis proposes a way to solve that problem by introducing an extended algorithm that combines the result size calculation with query execution. An implementation of the extended algorithm was evaluated and shows that it is a viable option that has only a minor impact on the total query processing time for most queries.
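To illustrate the general idea of combining size calculation with execution, here is a highly simplified sketch: one recursive pass over a toy in-memory object graph both counts response nodes and builds the response, aborting once a size budget is exceeded. The data shape, counting rule, and budget mechanism are assumptions for illustration, not the algorithm from the thesis.

```python
# Simplified sketch: combined result-size calculation and execution over a
# toy data graph (no GraphQL types, arguments, or schema).

class SizeBudgetExceeded(Exception):
    pass

def execute(data, selection, budget):
    """selection: {field: sub-selection or None}; returns (size, result)."""
    result, size = {}, 0
    for field, sub in selection.items():
        value = data.get(field)
        size += 1                                  # count the field itself
        if sub is None or value is None:
            result[field] = value
            size += 1                              # count the scalar / null
        else:
            children = value if isinstance(value, list) else [value]
            out = []
            for child in children:
                s, r = execute(child, sub, budget - size)
                size += s
                out.append(r)
                if size > budget:
                    raise SizeBudgetExceeded(f"response larger than {budget}")
            result[field] = out if isinstance(value, list) else out[0]
    if size > budget:
        raise SizeBudgetExceeded(f"response larger than {budget}")
    return size, result

author = {"name": "Ada", "posts": [{"title": "p1"}, {"title": "p2"}]}
query = {"name": None, "posts": {"title": None}}
print(execute(author, query, budget=100))
```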
APA, Harvard, Vancouver, ISO, and other styles
4

Abadi, Daniel J. "Query execution in column-oriented database systems." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43043.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 145-148).
There are two obvious ways to map a two-dimensional relational database table onto a one-dimensional storage interface: store the table row-by-row, or store the table column-by-column. Historically, database system implementations and research have focused on the row-by-row data layout, since it performs best on the most common application for database systems: business transactional data processing. However, there is a set of emerging applications for database systems for which the row-by-row layout performs poorly. These applications are more analytical in nature; their goal is to read through the data to gain new insight and use it to drive decision making and planning. In this dissertation, we study the problem of poor performance of the row-by-row data layout for these emerging applications, and evaluate the column-by-column data layout as a solution to this problem. There have been a variety of proposals in the literature for how to build a database system on top of a column-by-column layout. These proposals have different levels of implementation effort, and have different performance characteristics. If one wanted to build a new database system that utilizes the column-by-column data layout, it is unclear which proposal to follow. This dissertation provides (to the best of our knowledge) the only detailed study of multiple implementation approaches of such systems, categorizing the different approaches into three broad categories, and evaluating the tradeoffs between approaches. We conclude that building a query executor specifically designed for the column-by-column data layout is essential to achieve good performance. Consequently, we describe the implementation of C-Store, a new database system with a storage layer and query executor built for the column-by-column data layout. We introduce three new query execution techniques that significantly improve performance. First, we look at the problem of integrating compression and execution so that the query executor is capable of directly operating on compressed data. This improves performance by improving I/O (less data needs to be read off disk) and CPU (the data need not be decompressed). We describe our solution to the problem of executor extensibility: how can new compression techniques be added to the system without having to rewrite the operator code? Second, we analyze the problem of tuple construction (stitching together attributes from multiple columns into a row-oriented "tuple"). Tuple construction is required when operators need to access multiple attributes from the same tuple; however, if done at the wrong point in a query plan, a significant performance penalty is paid. We introduce an analytical model and some heuristics that help decide when in a query plan tuple construction should occur. Third, we introduce a new join technique, the "invisible join", that improves the performance of a specific type of join that is common in the applications for which the column-by-column data layout is a good idea. Finally, we benchmark the performance of the complete C-Store database system against other column-oriented database system implementation approaches, and against row-oriented databases. We benchmark two applications. The first application is a typical analytical application for which the column-by-column data layout is known to outperform the row-by-row data layout. The second application is another emerging application, the Semantic Web, for which column-oriented database systems are not currently used. We find that on the first application, the complete C-Store system performed 10 to 18 times faster than alternative column-store implementation approaches, and 6 to 12 times faster than a commercial database system that uses a row-by-row data layout. On the Semantic Web application, we find that C-Store outperforms other state-of-the-art data management techniques by an order of magnitude, and outperforms other common data management techniques by almost two orders of magnitude. Benchmark queries, which used to take multiple minutes to execute, can now be answered in several seconds.
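One of the techniques mentioned above, operating directly on compressed data, can be illustrated with a toy run-length-encoded column: both a selection and an aggregate touch each run once rather than each row once. This is only a sketch under those assumptions, not C-Store operator code.

```python
# Minimal sketch: the column is run-length encoded as (value, run_length)
# pairs, and selection and sum are evaluated per run without decompressing.

def rle_select(runs, predicate):
    """Yield (start_row, end_row) ranges whose value satisfies the predicate."""
    row = 0
    for value, length in runs:
        if predicate(value):
            yield (row, row + length)      # a whole run qualifies at once
        row += length

def rle_sum(runs):
    # The aggregate needs one multiply-add per run, never per row.
    return sum(value * length for value, length in runs)

# Column: 1000 rows of 'DE', 500 rows of 'US', 2000 rows of 'FR'
country = [("DE", 1000), ("US", 500), ("FR", 2000)]
print(list(rle_select(country, lambda v: v == "FR")))   # [(1500, 3500)]

quantity = [(5, 1000), (7, 500), (2, 2000)]
print(rle_sum(quantity))                                 # 12500
```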
by Daniel J. Abadi.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
5

Fomkin, Ruslan. "Optimization and Execution of Complex Scientific Queries." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9514.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Feilong. "Accelerating Analytical Query Processing with Data Placement Conscious Optimization and RDMA-aware Query Execution." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543532295915722.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Ferreira, Miguel C. (Miguel Cacela Rosa Lopes Ferreira). "Compression and query execution within column oriented databases." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33150.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 65-66).
Compression is a known technique used by many database management systems ("DBMS") to increase performance [4, 5, 14]. However, not much research has been done on how compression can be used within column-oriented architectures. Storing data in columns increases the similarity between adjacent records, thus increasing the compressibility of the data. In addition, compression schemes not traditionally used in row-oriented DBMSs can be applied to column-oriented systems. This thesis presents a column-oriented query executor designed to operate directly on compressed data. We show that operating directly on compressed data can improve query performance. Additionally, the choice of compression scheme depends on the expected query workload, suggesting that for ad-hoc queries we may wish to store a column redundantly under different coding schemes. Furthermore, the executor is designed to be extensible so that the addition of new compression schemes does not impact operator implementation. The executor is part of a larger database system, known as CStore [10].
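The extensibility point described above can be sketched as operators written against an abstract compressed-block interface, so that a new coding scheme is one new class rather than a change to every operator. Class and method names below are invented for illustration; they are not the CStore API.

```python
# Hypothetical sketch of an extensible compressed-block interface.
from abc import ABC, abstractmethod

class CompressedBlock(ABC):
    @abstractmethod
    def runs(self):
        """Yield (value, count) pairs; count may be 1 for uncompressed data."""

class UncompressedBlock(CompressedBlock):
    def __init__(self, values):
        self.values = values
    def runs(self):
        for v in self.values:
            yield v, 1

class RLEBlock(CompressedBlock):
    def __init__(self, runs_):
        self._runs = runs_
    def runs(self):
        yield from self._runs

def count_matching(block, predicate):
    """An operator written once: it never needs to know the coding scheme."""
    return sum(count for value, count in block.runs() if predicate(value))

print(count_matching(UncompressedBlock(["a", "b", "a"]), lambda v: v == "a"))  # 2
print(count_matching(RLEBlock([("a", 10), ("b", 3)]), lambda v: v == "a"))     # 10
```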
by Miguel C. Ferreira.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
8

Gupta, Ankush M. "Cross-engine query execution in federated database systems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106013.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 47-48).
Duggan et al. have created a reference implementation of the BigDAWG system: a new architecture for future Big Data applications, guided by the philosophy that "one size does not fit all." Such applications not only call for large-scale analytics, but also for real-time streaming support, smaller analytics at interactive speeds, data visualization, and cross-storage-system queries. The importance and effectiveness of such a system has been demonstrated in a hospital application using data from an intensive care unit (ICU). In this report, we implement and evaluate a concrete version of a cross-system Query Executor and its interface with a cross-system Query Planner. In particular, we focus on cross-engine shuffle joins within the BigDAWG system.
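As a rough illustration of what a shuffle join between two engines involves, the sketch below hash-partitions both inputs on the join key, "ships" co-located partitions to the same worker, and joins each partition pair independently. All names and the single-process simulation are assumptions for illustration; this is not the BigDAWG implementation.

```python
# Simplified single-process simulation of a cross-engine shuffle join.
from collections import defaultdict

def shuffle(rows, key, n_workers):
    parts = defaultdict(list)
    for row in rows:
        parts[hash(row[key]) % n_workers].append(row)   # "send" row to a worker
    return parts

def partition_join(left, right, key):
    index = defaultdict(list)
    for r in right:
        index[r[key]].append(r)
    return [{**l, **r} for l in left for r in index[l[key]]]

def shuffle_join(left_rows, right_rows, key, n_workers=4):
    lparts = shuffle(left_rows, key, n_workers)
    rparts = shuffle(right_rows, key, n_workers)
    out = []
    for w in range(n_workers):           # conceptually, each worker runs in parallel
        out.extend(partition_join(lparts[w], rparts[w], key))
    return out

patients = [{"pid": 1, "name": "A"}, {"pid": 2, "name": "B"}]                 # rows from one engine
vitals = [{"pid": 1, "hr": 80}, {"pid": 1, "hr": 95}, {"pid": 2, "hr": 70}]   # rows from another
print(shuffle_join(patients, vitals, "pid"))
```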
by Ankush M. Gupta.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
9

Neumann, Thomas. "Efficient generation and execution of DAG-structured query graphs." [S.l.] : [s.n.], 2005. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11947805.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Narula, Neha. "Distributed query execution on a replicated and partitioned database." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62436.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 63-64).
Web application developers partition and replicate their data amongst a set of SQL databases to achieve higher throughput. Given multiple copies of tables partitioned in different ways, developers must manually select different replicas in their application code. This work presents Dixie, a query planner and executor which automatically executes queries over replicas of partitioned data stored in a set of relational databases, and optimizes for high throughput. The challenge in choosing a good query plan lies in predicting query cost, which Dixie does by balancing row retrieval costs with the overhead of contacting many servers to execute a query. For web workloads, per-query overhead in the servers is a large part of the overall cost of execution. Dixie's cost calculation tends to minimize the number of servers used to satisfy a query, which is essential for minimizing this query overhead and obtaining high throughput; this is in direct contrast to optimizers over large data sets that try to maximize parallelism by parallelizing the execution of a query over all the servers. Dixie automatically takes advantage of the addition or removal of replicas without requiring changes in the application code. We show that Dixie sometimes chooses plans that existing parallel database query optimizers might not consider. For certain queries, Dixie chooses a plan that gives a 2.3x improvement in overall system throughput over a plan which does not take into account per-server query overhead costs. Using table replicas, Dixie provides a throughput improvement of 35% over a naive execution without replicas on an artificial workload generated by Pinax, an open-source social web site.
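The cost trade-off described above can be illustrated with a tiny cost function: a plan that touches fewer servers pays less fixed per-query overhead, even if it reads somewhat more rows. The constants and plan shapes below are invented for illustration and are not Dixie's actual model.

```python
# Hedged sketch of a per-server-overhead-aware cost model.

PER_SERVER_OVERHEAD = 1.0   # fixed cost of contacting one server
PER_ROW_COST = 0.01         # cost of retrieving one row

def plan_cost(servers_contacted, rows_retrieved):
    return servers_contacted * PER_SERVER_OVERHEAD + rows_retrieved * PER_ROW_COST

def choose_plan(plans):
    """plans: list of (name, servers, rows); return the cheapest plan."""
    return min(plans, key=lambda p: plan_cost(p[1], p[2]))

candidates = [
    ("scatter over all partitions", 16, 200),              # parallel, but 16x per-query overhead
    ("single replica partitioned on the filter key", 1, 350),
]
print(choose_plan(candidates))   # the single-server plan wins for web workloads
```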
by Neha Narula.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
11

Ling, Daniel Hiak Ong. "Query execution and temporal support in a distributed database system." Thesis, University of Ulster, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328221.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
12

Du, Chu-Ming. "A practical approach to set orientated query execution in semistructured databases." Thesis, University of Birmingham, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412622.

Full text of the source
Abstract:
The amount of semistructured data is growing rapidly as the World Wide Web has developed into a central means for sharing and disseminating information. The structure of tree-like semistructured data is not rigid. The most common instance of this type of data is XML. Applications endeavouring to access components of semistructured data are naturally inclined towards a recursive approach to navigating data on trees. However, conventional wisdom indicates that a set-oriented mechanism is necessary for database management systems to obtain good performance in the presence of large amounts of data. Our main objective in this thesis is to develop a set-oriented query execution scheme for XML data. We propose a system, called "Equate" (Execution of Queries Using an Automata Theoretic Engine), which intelligently utilises an automata rewriting scheme to transform a query language into an internal query plan with relational-like operators scheduled in a single process for set-oriented execution. Our approach contains two phases. The first phase, set-oriented execution, performs queries on edges and binds any variables required. The second phase, reachability analysis, refines the result, filtering out any false matches, and collects sets of variable bindings into a final result structure. A novel aspect of our approach is that our set-oriented execution, even for complex queries, requires only variants of the relational select, project, and union operators, but no joins.
APA, Harvard, Vancouver, ISO, and other styles
13

Apaydin, Tan. "Query Support for Multi-Dimensional and Dynamic Databases." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1221842826.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Zeuch, Steffen [Verfasser], Johann-Christoph [Gutachter] Freytag, Wolfgang [Gutachter] Lehner, and Stefan [Gutachter] Manegold. "Query Execution on Modern CPUs / Steffen Zeuch ; Gutachter: Johann-Christoph Freytag, Wolfgang Lehner, Stefan Manegold." Berlin : Humboldt-Universität zu Berlin, 2018. http://d-nb.info/1182541461/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Raghavan, Venkatesh. "VAMANA -- A high performance, scalable and cost driven XPath engine." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0505104-185545/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

Ceran, Erhan. "A C++ Distributed Database Select-project-join Queryprocessor On A Hpc Cluster." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614311/index.pdf.

Full text of the source
Abstract:
High performance computer clusters have become popular as they are more scalable, affordable and reliable than their centralized counterparts. Database management systems are particularly suitable for distributed architectures; however, distributed DBMSs are still not widely used because of the design difficulties. In this study, we aim to help overcome these difficulties by implementing a simulation testbed for a distributed query plan processor. This testbed works on our departmental HPC cluster machine and is able to perform select, project and join operations. A data generation module has also been implemented which preserves the foreign key and primary key constraints in the database schema. The testbed has the capability to measure, simulate and estimate the response time of a given query execution plan using specified communication network parameters. Extensive experimental work is performed to show the correctness of the produced results. The estimated execution time costs are also compared with the actual run-times obtained from the testbed to verify the proposed estimation functions. Thus, we make sure that these estimation functions can be used in distributed database query optimization and distributed database design tools.
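A response-time estimation function of the kind described above might combine transfer times derived from network parameters with per-operator processing times. The formula and constants below are illustrative assumptions only, not the thesis's calibrated cost model.

```python
# Rough sketch of estimating the response time of a distributed plan.

def transfer_time(rows, row_bytes, bandwidth_mbps=100.0, latency_s=0.001):
    bits = rows * row_bytes * 8
    return latency_s + bits / (bandwidth_mbps * 1e6)

def join_time(left_rows, right_rows, per_tuple_s=2e-7):
    # hash join: build on one input, probe with the other
    return (left_rows + right_rows) * per_tuple_s

def estimate_plan(steps):
    """steps: list of ('ship', rows, row_bytes) or ('join', left_rows, right_rows)."""
    total = 0.0
    for op, a, b in steps:
        total += transfer_time(a, b) if op == "ship" else join_time(a, b)
    return total

plan = [("ship", 50_000, 64),       # send a fragment of R to the join site
        ("join", 50_000, 200_000),  # join it with the local fragment of S
        ("ship", 10_000, 32)]       # send the result to the coordinator
print(f"estimated response time: {estimate_plan(plan):.3f} s")
```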
APA, Harvard, Vancouver, ISO, and other styles
17

Hahn, Florian [Verfasser], and J. [Akademischer Betreuer] Müller-Quade. "Practical yet Provably Secure: Complex Database Query Execution over Encrypted Data / Florian Hahn ; Betreuer: J. Müller-Quade." Karlsruhe : KIT-Bibliothek, 2019. http://d-nb.info/117852809X/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Onder, Ibrahim Seckin. "Execution Of Distributed Database Queries On A Hpc System." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/2/12611524/index.pdf.

Full text of the source
Abstract:
The increasing performance of computers and the ability to connect computers with high-speed communication networks make distributed database systems an attractive research area. In this study, we evaluate the communication and data processing capabilities of an HPC machine. We calculate accurate cost formulas for high-volume data communication between processing nodes and experimentally measure sorting times. A left-deep query plan executor has been implemented and experimentally used for executing plans generated by two different genetic algorithms for a distributed database environment using the message passing paradigm, to prove that a parallel system can provide scalable performance by increasing the number of nodes used for storing database relations and the number of processing nodes. We compare the performance of plans generated by the genetic algorithms with optimal plans generated by an exhaustive search algorithm. Our results have verified that optimal plans are better than those of the genetic algorithms, as expected.
APA, Harvard, Vancouver, ISO, and other styles
19

Muller, Leslie. "'n Ondersoek na en bydraes tot navraaghantering en -optimering deur databasisbestuurstelsels / L. Muller." Thesis, North-West University, 2006. http://hdl.handle.net/10394/1181.

Full text of the source
Abstract:
The problems associated with the effective design and use of databases are increasing. The information contained in a database is becoming more complex and the size of the data is causing space problems. Technology must continually develop to accommodate this growing need. An inquiry was conducted in order to find effective guidelines that could support queries in general in terms of performance and productivity. Two database management systems were researched to compare the theoretical aspects with the techniques implemented in practice. Microsoft SQL Server and MySQL were chosen as the candidates and both were put under close scrutiny. The systems were researched to uncover the methods employed by each to manage queries. The query optimizer forms the basis for each of these systems and manages the parsing and execution of any query. The methods employed by each system for storing data were researched. The way that each system manages table joins, uses indices and chooses optimal execution plans was researched. Adjusted algorithms were introduced for various index processes like B+ trees and hash indexes. Guidelines were compiled that are independent of the database management systems and help to optimize relational databases. Practical implementations of queries were used to acquire and analyse the execution plan for both MySQL and SQL Server. This plan, along with a few other variables such as execution time, is discussed for each system. A model is used for both database management systems in this experiment.
Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2007.
APA, Harvard, Vancouver, ISO, and other styles
20

Idris, Muhammad. "Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates." Doctoral thesis, Universite Libre de Bruxelles, 2019. https://dipot.ulb.ac.be/dspace/bitstream/2013/284705/5/contratMI.pdf.

Full text of the source
Abstract:
Responsive analytics are rapidly taking over the traditional data analytics dominated by the post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system to react to updates occurring at high speed and detect patterns, trends, and anomalies. These kinds of solutions find applications in Financial Systems, Industrial Control Systems, Business Intelligence and on-line Machine Learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results or their basic elements in a query language, where the main task then is to maintain query results under frequent updates efficiently. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis where the data is refreshed periodically and in batches, and stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation of queries that are based on the relational incremental view maintenance model and mostly focus on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g. timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded sizes. Temporal comparisons constitute inequality constraints while non-temporal comparisons can either be equality or inequality constraints. Hence these systems mostly process inequality joins. As a starting point for our research, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just like in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and the update processing cost. To this end, the existing approaches of dynamic evaluation in both kinds of systems present a trade-off between memory footprint and the update processing cost. More specifically, systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur high memory footprint. We are interested in investigating the possibility to build a model that can address this trade-off. In particular, we overcome this trade-off by investigating the possibility of a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and present a main-memory data representation that allows enumerating query (sub)results without materialization and can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR). We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with the above-described properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present the dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. Then, we present the generalization of the DYN algorithm to the class of acyclic queries featuring multi-way Theta-joins with projections and call it Generalized DYN (GDYN). We devise DCLRs with the above properties for acyclic conjunctive queries, and the working of DYN and GDYN over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantee the above-described properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring Theta-joins for acyclicity and to generate GJTs for such queries. We extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way Theta-joins with projections for acyclicity. We further extend the GYO algorithm to generate GJTs for queries that are acyclic. GDYN is hence a unified framework based on DCLRs that enables processing of queries that appear in streaming systems as well as in BI systems in a unified main-memory model and addresses the space-time trade-off. We instantiate GDYN to the particular case where all Theta-joins involve only equalities and inequalities and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language and provide all the necessary data structures and their maintenance and enumeration methods in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks. We show that DYN and IEDYN outperform the existing systems by over an order of magnitude in both memory footprint and update processing time.
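The basic building block behind this kind of dynamic evaluation is the delta rule for a single equi-join: an update to one input is joined against an index on the other input instead of recomputing the query from scratch. The sketch below shows only that elementary rule under invented names; it is not the DYN/GDYN algorithm.

```python
# Minimal sketch of incremental (delta) maintenance for one equi-join.
from collections import defaultdict

class IncrementalJoin:
    def __init__(self):
        self.r_index = defaultdict(list)   # join key -> R tuples
        self.s_index = defaultdict(list)   # join key -> S tuples

    def insert_r(self, key, tup):
        self.r_index[key].append(tup)
        # delta(R) joined with S: only the new tuple is probed
        return [(tup, s) for s in self.s_index[key]]

    def insert_s(self, key, tup):
        self.s_index[key].append(tup)
        return [(r, tup) for r in self.r_index[key]]

j = IncrementalJoin()
print(j.insert_r(1, ("r1",)))        # [] -- no matches yet
print(j.insert_s(1, ("s1",)))        # [(('r1',), ('s1',))] -- new result produced
print(j.insert_s(2, ("s2",)))        # []
print(j.insert_r(2, ("r2",)))        # [(('r2',), ('s2',))]
```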
Doctorate in Engineering Sciences and Technology
APA, Harvard, Vancouver, ISO, and other styles
21

Huber, Frank. "Anfragebearbeitung auf Mehrkern-Rechnerarchitekturen." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16512.

Full text of the source
Abstract:
The upcoming generation of many-core architectures poses several new challenges for software development: software design and software implementation have to change from sequential execution to highly parallel execution, such that they take full advantage of the steadily growing number of cores on a single processor. With this thesis, we investigate such highly parallel program execution in the context of relational database management systems (RDBMSs). We consider the complete process of query processing and identify four problem areas which are crucial for efficient parallel query processing on many-core architectures. These four areas are: hardware, the physical data model, query execution, and query optimization. Furthermore, we present a framework which covers all four parts, one after another. First, we give a detailed survey of computer hardware with a special focus on memory and processors. Based on this survey we propose a hardware model. Our abstraction aims to simplify the task of software development on many-core hardware. Based on the hardware model, we investigate physical data models and evaluate how the physical data model may support optimal query execution by providing efficient and parallelizable data structures. Additionally, we design a new index structure that utilizes data-parallel execution by using SIMD operations. The next layer within our framework is query execution, for which we present a new task-based query execution model. Our query execution model allows for lightweight parallelism. Finally, we cover query optimization by explaining approaches for optimizing resource utilization from a query-local as well as a query-global point of view.
APA, Harvard, Vancouver, ISO, and other styles
22

Kamat, Niranjan Ganesh. "Sampling-based Techniques for Interactive Exploration of Large Datasets." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1523552932728325.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Larsson, Markus, and David Ångström. "A Performance Comparison of Auto-Generated GraphQL Server Implementations." Thesis, Linköpings universitet, Tekniska fakulteten, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170825.

Full text of the source
Abstract:
As databases and traffic over the internet are becoming larger by the day, the performance of sending information has become a target of great importance. In past years, other software architectural styles such as REST have been used, as REST is a reliable framework and works well when one has a dependable internet connection. In 2015, the querying language GraphQL was released by Facebook to the public as an alternative to REST. GraphQL made improvements in fetching data by, for example, removing the possibility of under- and over-fetching. This means that a client only gets the data which they have requested, nothing more, nothing less. Creating a GraphQL schema and server implementation requires time, effort and knowledge, yet it is a requirement for running GraphQL over a current legacy database. For this reason, multiple server implementation tools have been created by vendors to reduce development time by instead auto-generating a GraphQL schema and server implementation from an already existing database. This bachelor thesis will pick, run and compare the benchmarks of two different server implementation tools, Hasura and PostGraphile. This is done using a benchmark methodology based on technical difficulties (choke points). The result of our benchmark suggests that the throughput is larger for Hasura compared to PostGraphile, whilst the query execution time as well as the query response time are similar. PostGraphile is better at paging without offset as well as at ordering, but in all other cases Hasura outperforms PostGraphile or shows similar results.
Linköping GraphQL Benchmark (LinGBM)
APA, Harvard, Vancouver, ISO, and other styles
24

Verlaine, Lionel. "Optimisation des requêtes dans une machine bases de données." Paris 6, 1986. http://www.theses.fr/1986PA066532.

Full text of the source
Abstract:
This thesis proposes solutions for optimizing query evaluation and the join operation. These proposals are studied and implemented on the Sabrina DBMS, which originated from the SABRE project, on Carrousel hardware at SAGEM. Query evaluation makes it possible to optimize the logical level of query processing. The most relevant decomposition is established using simple heuristics. The proposed join algorithm uses mechanisms that minimize both the number of disk I/Os and the number of comparisons. Its execution time is proportional to the number of tuples. Join ordering is addressed by an original multi-relation join algorithm and by an associated scheduling method that allows a high degree of parallelism.
APA, Harvard, Vancouver, ISO, and other styles
25

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text of the source
Abstract:
The work is divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, of which software is used to carry them out, and of how to protect oneself (using the devices generically referred to as firewalls). The second macro-area analyzes an intrusion that occurred from the outside against sensitive servers of a LAN. This analysis is carried out on the files captured by two network interfaces configured in promiscuous mode on a probe located in the LAN. Two interfaces are used in order to connect to two LAN segments with two different subnet masks. The attack is analyzed with various software tools. A third part of the work can thus be identified: the part where the files captured by the two interfaces are analyzed, first with software that handles full-content data, such as Wireshark, then with software that handles session data, processed with Argus, and finally the statistical data, processed with Ntop. The penultimate chapter, the one before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
APA, Harvard, Vancouver, ISO, and other styles
26

Tabbara, Hiba. "Native Language OLAP Query Execution." Thesis, 2012. http://spectrum.library.concordia.ca/974715/1/Tabbara_PhD__F2012.pdf.

Full text of the source
Abstract:
Online Analytical Processing (OLAP) applications are widely used in the components of contemporary Decision Support systems. However, existing OLAP query languages are neither efficient nor intuitive for developers. In particular, Microsoft’s Multidimensional Expressions language (MDX), the de-facto standard for OLAP, is essentially a string-based extension to SQL that hinders code refactoring, limits compile-time checking, and provides no object-oriented functionality whatsoever. In this thesis, we present Native language OLAP query eXecution, or NOX, a framework that provides responsive and intuitive query facilities. To this end, we exploit the underlying OLAP conceptual data model and provide a clean integration between the server and the client language. NOX queries are object-oriented and support inheritance, refactoring and compile-time checking. Underlying this functionality is a domain specific algebra and language grammar that are used to transparently convert client side queries written in the native development language into algebraic operations understood by the server. In our prototype of NOX, JAVA is used as the native language. We provide client side libraries that define an API for programmers to use for writing OLAP queries. We investigate the design of NOX through a series of real world query examples. Specifically, we explore the following: fundamental SELECTION and PROJECTION, set operations, hierarchies, parametrization and query inheritance. We compare NOX queries to MDX and show the intuitiveness and robustness of NOX. We also investigate NOX expressiveness with respect to MDX from an algebraic point of view by demonstrating the correspondence of the two approaches in terms of SELECTION and PROJECTION operations. We believe the practical benefit of NOX-style query processing is significant. In short, it largely reduces OLAP database access to the manipulation of client side, in-memory data objects
APA, Harvard, Vancouver, ISO, and other styles
27

"Rxqee - relational-xml query execution engine." Tese, MAXWELL, 2004. http://www.maxwell.lambda.ele.puc-rio.br/cgi-bin/db2www/PRG_0991.D2W/SHOW?Cont=5925:pt&Mat=&Sys=&Nr=&Fun=&CdLinPrg=pt.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
28

Taleb, Ahmad. "Query Optimization and Execution for Multi-Dimensional OLAP." Thesis, 2011. http://spectrum.library.concordia.ca/7388/1/Taleb_PhD_S2011.pdf.

Full text of the source
Abstract:
Online Analytical Processing (OLAP) is a database paradigm that supports the rich analysis of multi-dimensional data. While current OLAP tools are primarily constructed as extensions to conventional relational databases, the unique modeling and processing requirements of OLAP systems often make for a relatively awkward fit with RDBMSs in general, and with their embedded string-based query languages in particular. In this thesis, we discuss the design, implementation, and evaluation of a robust multi-dimensional OLAP server. In fact, we focus on several distinct but related themes. To begin, we investigate the integration of an open source embedded storage engine with our own OLAP-specific indexing and access methods. We then present a comprehensive OLAP query algebra that ultimately allows developers to create expressive OLAP queries in native client languages such as Java. By utilizing a formal algebraic model, we are able to support an intuitive Object Oriented query API, as well as a powerful query optimization and execution engine. The thesis describes both the optimization methodology and the related algorithms for the efficient execution of the associated query plans. The end result of our research is a comprehensive OLAP DBMS prototype that clearly demonstrates new opportunities for improving the accessibility, functionality, and performance of current OLAP database management systems.
APA, Harvard, Vancouver, ISO, and other styles
29

Yu, Chia-Hao, and 余家豪. "A Distributed Query Execution Protocol in Sensor Networks." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/20156514028609322851.

Full text of the source
Abstract:
Master's thesis
National Central University
Institute of Computer Science and Information Engineering
92
In wireless sensor networks, query execution over a specific geographical region is an essential function for collecting sensed data or detecting unusual events. However, sensor nodes deployed in the network have limited battery power. Hence, how to find a minimum number of connected sensor nodes that are sufficient to cover the queried region is an important issue in sensor networks. This paper proposes an efficient distributed protocol to find a subset of connected sensor nodes to cover the queried region. Each sensor node in the network determines whether to sense the queried region according to its priority value, which is determined by its remaining power or its sensing area within the queried region. The proposed protocol can efficiently construct a subset of connected sensing nodes and quickly respond to query requests in the sensed region. Simulation results show that the proposed protocol is more efficient and consumes less communication overhead than other existing protocols.
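The priority idea can be illustrated with a single-process sketch: each node gets a priority from its remaining power and its sensed area inside the queried region, and a node is skipped when a higher-priority node already covers its spot. The real protocol is distributed and message-based; the priority formula, geometry, and numbers below are illustrative assumptions only.

```python
# Simplified, centralized sketch of priority-based selection of sensing nodes.
import math

def coverage_area(node, region):
    """Rough proxy for how much of the queried region the node can sense."""
    inside = region["x0"] <= node["x"] <= region["x1"] and region["y0"] <= node["y"] <= region["y1"]
    return math.pi * node["range"] ** 2 if inside else 0.0

def priority(node, region):
    # Higher remaining power and larger sensed area -> higher priority.
    return node["power"] * coverage_area(node, region)

def select_sensing_nodes(nodes, region):
    chosen = []
    for n in sorted(nodes, key=lambda n: priority(n, region), reverse=True):
        if coverage_area(n, region) == 0:
            continue                      # node cannot sense the queried region
        # Crude redundancy check: skip the node if an already selected,
        # higher-priority node covers its location.
        if any(math.hypot(n["x"] - c["x"], n["y"] - c["y"]) < c["range"] for c in chosen):
            continue
        chosen.append(n)
    return [c["id"] for c in chosen]

nodes = [
    {"id": "a", "x": 2, "y": 2, "range": 2.0, "power": 0.9},
    {"id": "b", "x": 3, "y": 3, "range": 1.0, "power": 0.5},   # redundant with "a"
    {"id": "c", "x": 9, "y": 9, "range": 1.0, "power": 1.0},   # outside the region
]
region = {"x0": 0, "y0": 0, "x1": 5, "y1": 5}
print(select_sensing_nodes(nodes, region))   # ['a']
```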
APA, Harvard, Vancouver, ISO, and other styles
30

Juliana, Hsieh. "An Optimization Strategy for Efficient Query Execution over Streaming Sources." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-1303200709285431.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Polychroniou, Orestis. "Analytical Query Execution Optimized for all Layers of Modern Hardware." Thesis, 2018. https://doi.org/10.7916/D8Q25G8H.

Full text of the source
Abstract:
Analytical database queries are at the core of business intelligence and decision support. To analyze the vast amounts of data available today, query execution needs to be orders of magnitude faster. Hardware advances have made a profound impact on database design and implementation. The large main memory capacity allows queries to execute exclusively in memory and shifts the bottleneck from disk access to memory bandwidth. In the new setting, to optimize query performance, databases must be aware of an unprecedented multitude of complicated hardware features. This thesis focuses on the design and implementation of highly efficient database systems by optimizing analytical query execution for all layers of modern hardware. The hardware layers include the network across multiple machines, main memory and the NUMA interconnection across multiple processors, the multiple levels of caches across multiple processor cores, and the execution pipeline within each core. For the network layer, we introduce a distributed join algorithm that minimizes the network traffic. For the memory hierarchy, we describe partitioning variants aware to the dynamics of the CPU caches and the NUMA interconnection. To improve the memory access rate of linear scans, we optimize lightweight compression variants and evaluate their trade-offs. To accelerate query execution within the core pipeline, we introduce advanced SIMD vectorization techniques generalizable across multiple operators. We evaluate our algorithms and techniques on both mainstream hardware and on many-integrated-core platforms, and combine our techniques in a new query engine design that can better utilize the features of many-core CPUs. In the era of hardware becoming increasingly parallel and datasets consistently growing in size, this thesis can serve as a compass for developing hardware-conscious databases with truly high-performance analytical query execution.
APA, Harvard, Vancouver, ISO, and other styles
32

Hsieh, Juliana, and 薛佩如. "An Optimization Strategy for Efficient Query Execution over Streaming Sources." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/70766095466901570073.

Full text of the source
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
94
Continuous queries over data streams, particularly joins of streams, have gained popularity as the scope of their applications has increased in the past years. Applications range from network monitoring to sensor processing for environmental monitoring or inventory tracking. The cost of evaluating such queries over streaming sources may vary according to the order in which the joins of streams are processed. In order to lower the cost of executing a query, the query optimizer needs to generate an execution plan that better fits the current conditions of the environment. Existing optimizers try to resolve the above problem by finding a better probing order for multi-way join operators or choosing a better sequence for the binary join operators. However, there are cases where the performance of a hybrid plan (a query plan containing both types of operators) exceeds the performance of query plans composed of a single multi-way operator or of trees of binary join operators. We address the problem of finding a low-cost execution plan for executing continuous multi-way join queries over infinite data streams. The search space encompasses plans consisting of a single multi-way operator, plans composed of binary join operators, and hybrid plans. We propose heuristics with a partial cost-based optimization technique to address the three main components of a query optimizer, namely the search space, the cost model and the search strategy. The cost model is used to evaluate all feasible query plans and the heuristics are used to prune the candidates that cannot lead us to good plans. In our work, we evaluate the performance of the proposed approach by comparing the time needed to produce the low-cost query execution plan and the quality of our result with the optimal solution and with the single multi-way operator with probing order. The result shows that our methodology can find a better plan for the current environment, one that is close to the optimal query plan.
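Cost-based selection among alternative stream-join plans can be illustrated, in a much reduced form, by enumerating left-deep orders for three streams under a simple unit-time cost model. The rates, selectivities, and cost formula below are invented for illustration; the thesis's search space is far larger, since it also covers multi-way and hybrid operators.

```python
# Illustrative sketch of cost-based join-order selection for a 3-way stream join.
from itertools import permutations

rates = {"A": 100, "B": 10, "C": 50}            # tuple arrival rates per second
sel = {frozenset("AB"): 0.02, frozenset("AC"): 0.5, frozenset("BC"): 0.1}

def pair_sel(joined, new_stream):
    # selectivity between the already-joined streams and the new stream
    return min(sel[frozenset((a, new_stream))] for a in joined)

def plan_cost(order):
    cost, joined, out_rate = 0.0, [order[0]], rates[order[0]]
    for s in order[1:]:
        probe = out_rate * rates[s]             # candidate pairs formed per unit time
        cost += probe
        out_rate = probe * pair_sel(joined, s)  # surviving join results
        joined.append(s)
    return cost

best = min(permutations(rates), key=plan_cost)
print(best, plan_cost(best))                    # e.g. ('A', 'B', 'C') 2000.0
```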
APA, Harvard, Vancouver, ISO, and other styles
33

Meng, Yabin. "SQL Query Disassembler: An Approach to Managing the Execution of Large SQL Queries." Thesis, 2007. http://hdl.handle.net/1974/701.

Full text of the source
Abstract:
In this thesis, we present an approach to managing the execution of large queries that involves the decomposition of large queries into an equivalent set of smaller queries and then scheduling the smaller queries so that the work is accomplished with less impact on other queries. We describe a prototype implementation of our approach for IBM DB2™ and present a set of experiments to evaluate the effectiveness of the approach.
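A very small illustration of the decomposition idea described above: a large aggregate scan is broken into key-range pieces that are executed one by one, so other work can interleave between them. The splitting rule, the throttling pause, and the use of sqlite3 are assumptions for illustration; the thesis's disassembler operates on SQL query plans inside IBM DB2, which is not reproduced here.

```python
# Hedged sketch: split one large query into an equivalent set of smaller ones.
import sqlite3, time

def disassemble(table, key, key_min, key_max, chunk):
    """Yield smaller range queries that together cover [key_min, key_max]."""
    lo = key_min
    while lo <= key_max:
        hi = min(lo + chunk - 1, key_max)
        yield f"SELECT SUM(amount) FROM {table} WHERE {key} BETWEEN {lo} AND {hi}"
        lo = hi + 1

def run_throttled(conn, queries, pause_s=0.0):
    """Run the pieces one by one, optionally pausing so competing queries can run."""
    total = 0
    for sql in queries:
        total += conn.execute(sql).fetchone()[0] or 0
        time.sleep(pause_s)          # yield the system to other work
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [(i, i % 7) for i in range(1, 10001)])
pieces = disassemble("sales", "id", 1, 10000, chunk=1000)
print(run_throttled(conn, pieces))   # same answer as the single large query
```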
Thesis (Master, Computing) -- Queen's University, 2007-09-17 22:05:05.304
APA, Harvard, Vancouver, ISO, and other styles
34

Neumann, Thomas [Verfasser]. "Efficient generation and execution of DAG-structured query graphs / vorgelegt von Thomas Neumann." 2005. http://d-nb.info/975790420/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Abdul, Khalek Shadi. "Systematic testing using test summaries : effective and efficient testing of relational applications." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4574.

Full text of the source
Abstract:
This dissertation presents a novel methodology based on test summaries, which characterize desired tests as constraints written in a mixed imperative and declarative notation, for automated systematic testing of relational applications, such as relational database engines. The methodology has at its basis two novel techniques for effective and efficient testing: (1) mixed-constraint solving, which provides systematic generation of inputs characterized by mixed-constraints using translations among different data domains; and (2) clustered test execution, which optimizes execution of test suites by leveraging similarities in execution traces of different tests using abstract-level undo operations, which allow common segments of partial traces to be executed only once and the execution results to be shared across those tests. A prototype embodiment of the methodology enables a novel approach for systematic testing of commonly used database engines, where test summaries describe (1) input SQL queries, (2) input database tables, and (3) expected output of query execution. An experimental evaluation using the prototype demonstrates its efficacy in systematic testing of relational applications, including Oracle 11g, and finding bugs in them.
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Kevin H. "Skew characteristics and their effects on parallel relational query processing." Thesis, 1997. https://vuir.vu.edu.au/30101/.

Full text of the source
Abstract:
As queries grow increasingly complex and large data sets become prevalent, database sizes grow dramatically, particularly in Decision Support Systems (DSS) and OnLine Analytical Processing (OLAP) systems, which have recently emerged as important database applications. In these systems, performance is a critical issue and speeding up the system has always been an objective, but the processing power of individual processors can only handle a small fraction of current applications. As a result, parallel processing is exploited to improve database system performance. In this thesis we focus on relational database systems and study skew characteristics and their effects on parallel query processing.
APA, Harvard, Vancouver, ISO, and other styles