Ready-made bibliography on the topic "Query Executor"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

Choose a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Query Executor".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a ".pdf" file and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Query Executor"

1

Huang, Silu, Erkang Zhu, Surajit Chaudhuri, and Leonhard Spiegelberg. "T-Rex: Optimizing Pattern Search on Time Series". Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–26. http://dx.doi.org/10.1145/3589275.

Full text of the source
Abstract:
Pattern search is an important class of queries for time series data. Time series patterns often match variable-length segments with a large search space, thereby posing a significant performance challenge. The existing pattern search systems, for example, SQL query engines supporting MATCH_RECOGNIZE, are ineffective in pruning the large search space of variable-length segments. In many cases, the issue is due to the use of a restrictive query language modeled on time series points and a computational model that limits search space pruning. We built T-ReX to address this problem using two main building blocks: first, a MATCH_RECOGNIZE language extension that exposes the notion of segment variable and adds new operators, lending itself to better optimization; second, an executor capable of pruning the search space of matches and minimizing total query time using an optimizer. We conducted experiments using 5 real-world datasets and 11 query templates, including those from existing works. T-ReX outperformed an optimized NFA-based pattern search executor by 6x in median query time and an optimized tree-based executor by 19X.
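To give a flavor of the search-space problem the abstract describes, here is a small, self-contained sketch (invented example, not T-ReX code) of a variable-length pattern over a time series: find segments that fall for at least two consecutive points and then rise back above their starting value.

```python
def find_fall_then_rise(series, min_fall_len=2):
    """Naive variable-length pattern search: a falling run of at least
    `min_fall_len` points followed by a rise back above the start value.
    Illustrates why the space of candidate segments is large."""
    matches = []
    n = len(series)
    for start in range(n - min_fall_len - 1):
        i = start
        # falling phase
        while i + 1 < n and series[i + 1] < series[i]:
            i += 1
        if i - start < min_fall_len:
            continue  # discard: the falling run is too short
        # rising phase: stop at the first point that climbs back above the start value
        j = i
        while j + 1 < n and series[j + 1] > series[j]:
            j += 1
            if series[j] > series[start]:
                matches.append((start, j))
                break
    return matches

print(find_fall_then_rise([5, 4, 3, 2, 6, 7, 3, 2, 1, 4, 8]))
# [(0, 4), (1, 4), (5, 10), (6, 9)]
```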
2

Yogatama, Bobbi W., Weiwei Gong, and Xiangyao Yu. "Orchestrating data placement and query execution in heterogeneous CPU-GPU DBMS". Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2491–503. http://dx.doi.org/10.14778/3551793.3551809.

Full text of the source
Abstract:
There has been a growing interest in using GPU to accelerate data analytics due to its massive parallelism and high memory bandwidth. The main constraint of using GPU for data analytics is the limited capacity of GPU memory. Heterogeneous CPU-GPU query execution is a compelling approach to mitigate the limited GPU memory capacity and PCIe bandwidth. However, the design space of heterogeneous CPU-GPU query execution has not been fully explored. We aim to improve state-of-the-art CPU-GPU data analytics engine by optimizing data placement and heterogeneous query execution. First, we introduce a semantic-aware fine-grained caching policy which takes into account various aspects of the workload such as query semantics, data correlation, and query frequency when determining data placement between CPU and GPU. Second, we introduce a heterogeneous query executor which can fully exploit data in both CPU and GPU and coordinate query execution at a fine granularity. We integrate both solutions in Mordred, our novel hybrid CPU-GPU data analytics engine. Evaluation on the Star Schema Benchmark shows that the semantic-aware caching policy can outperform the best traditional caching policy by up to 3x. Compared to existing GPU DBMSs, Mordred can outperform by an order of magnitude.
3

Barish, G., and C. A. Knoblock. "An Expressive Language and Efficient Execution System for Software Agents". Journal of Artificial Intelligence Research 23 (June 1, 2005): 625–66. http://dx.doi.org/10.1613/jair.1548.

Full text of the source
Abstract:
Software agents can be used to automate many of the tedious, time-consuming information processing tasks that humans currently have to complete manually. However, to do so, agent plans must be capable of representing the myriad of actions and control flows required to perform those tasks. In addition, since these tasks can require integrating multiple sources of remote information (typically a slow, I/O-bound process), it is desirable to make execution as efficient as possible. To address both of these needs, we present a flexible software agent plan language and a highly parallel execution system that enable the efficient execution of expressive agent plans. The plan language allows complex tasks to be more easily expressed by providing a variety of operators for flexibly processing the data as well as supporting subplans (for modularity) and recursion (for indeterminate looping). The executor is based on a streaming dataflow model of execution to maximize the amount of operator and data parallelism possible at runtime. We have implemented both the language and executor in a system called THESEUS. Our results from testing THESEUS show that streaming dataflow execution can yield significant speedups over both traditional serial (von Neumann) as well as non-streaming dataflow-style execution that existing software and robot agent execution systems currently support. In addition, we show how plans written in the language we present can represent certain types of subtasks that cannot be accomplished using the languages supported by network query engines. Finally, we demonstrate that the increased expressivity of our plan language does not hamper performance; specifically, we show how data can be integrated from multiple remote sources just as efficiently using our architecture as is possible with a state-of-the-art streaming-dataflow network query engine.
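The streaming dataflow style of execution described above can be imitated with Python generators; this is an illustrative sketch of the general idea, not THESEUS itself: each operator pulls tuples from its input and emits results incrementally, so downstream operators start working before upstream operators finish.

```python
def scan(rows):
    """Source operator: emits rows one at a time (simulating a slow remote fetch)."""
    for row in rows:
        yield row

def select(pred, upstream):
    """Filter operator: streams through only the rows that satisfy pred."""
    for row in upstream:
        if pred(row):
            yield row

def project(cols, upstream):
    """Projection operator: keeps only the requested columns."""
    for row in upstream:
        yield {c: row[c] for c in cols}

rows = [{"id": 1, "price": 10}, {"id": 2, "price": 25}, {"id": 3, "price": 7}]
plan = project(["id"], select(lambda r: r["price"] > 9, scan(rows)))
for out in plan:   # tuples flow through the pipeline as they are produced
    print(out)     # {'id': 1} then {'id': 2}
```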
4

Yang, Yifei, Matt Youill, Matthew Woicik, Yizhou Liu, Xiangyao Yu, Marco Serafini, Ashraf Aboulnaga, and Michael Stonebraker. "FlexPushdownDB". Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 2101–13. http://dx.doi.org/10.14778/3476249.3476265.

Full text of the source
Abstract:
Modern cloud databases adopt a storage-disaggregation architecture that separates the management of computation and storage. A major bottleneck in such an architecture is the network connecting the computation and storage layers. Two solutions have been explored to mitigate the bottleneck: caching and computation pushdown. While both techniques can significantly reduce network traffic, existing DBMSs consider them as orthogonal techniques and support only one or the other, leaving potential performance benefits unexploited. In this paper we present FlexPushdownDB (FPDB) , an OLAP cloud DBMS prototype that supports fine-grained hybrid query execution to combine the benefits of caching and computation pushdown in a storage-disaggregation architecture. We build a hybrid query executor based on a new concept called separable operators to combine the data from the cache and results from the pushdown processing. We also propose a novel Weighted-LFU cache replacement policy that takes into account the cost of pushdown computation. Our experimental evaluation on the Star Schema Benchmark shows that the hybrid execution outperforms both the conventional caching-only architecture and pushdown-only architecture by 2.2X. In the hybrid architecture, our experiments show that Weighted-LFU can outperform the baseline LFU by 37%.
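A minimal sketch of the Weighted-LFU idea (invented names and numbers; the actual FPDB policy is more elaborate): rather than evicting the least frequently used cache segment, weight each segment's access frequency by the estimated cost of re-obtaining it via pushdown, so segments that are cheap to push down are evicted first.

```python
def choose_victim(cache):
    """cache maps segment_id -> (access_frequency, estimated_pushdown_cost).
    Plain LFU evicts the minimum frequency; Weighted-LFU evicts the segment
    with the lowest frequency * pushdown_cost score, so segments that are
    cheap to recompute via pushdown leave the cache first."""
    return min(cache, key=lambda seg: cache[seg][0] * cache[seg][1])

cache = {
    "lineorder.col_a[0:1M]": (50, 1.0),   # hot, but cheap to push down
    "lineorder.col_b[0:1M]": (20, 8.0),   # cooler, but expensive to push down
}
# Weighted-LFU evicts col_a (score 50); plain LFU would have evicted col_b (frequency 20).
print(choose_victim(cache))
```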
5

Das, Ariyam, and Carlo Zaniolo. "A Case for Stale Synchronous Distributed Model for Declarative Recursive Computation". Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 1056–72. http://dx.doi.org/10.1017/s1471068419000358.

Full text of the source
Abstract:
A large class of traditional graph and data mining algorithms can be concisely expressed in Datalog, and other Logic-based languages, once aggregates are allowed in recursion. In fact, for most BigData algorithms, the difficult semantic issues raised by the use of non-monotonic aggregates in recursion are solved by Pre-Mappability (PreM), a property that assures that for a program with aggregates in recursion there is an equivalent aggregate-stratified program. In this paper we show that, by bringing together the formal abstract semantics of stratified programs with the efficient operational one of unstratified programs, PreM can also facilitate and improve their parallel execution. We prove that PreM-optimized lock-free and decomposable parallel semi-naive evaluations produce the same results as the single-executor programs. Therefore, PreM can be assimilated into the data-parallel computation plans of different distributed systems, irrespective of whether these follow bulk synchronous parallel (BSP) or asynchronous computing models. In addition, we show that non-linear recursive queries can be evaluated using a hybrid stale synchronous parallel (SSP) model on distributed environments. After providing a formal correctness proof for the recursive query evaluation with PreM under this relaxed synchronization model, we present experimental evidence of its benefits.
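For readers unfamiliar with the evaluation strategy mentioned above, this is a minimal semi-naive evaluation of a recursive query (transitive closure), the kind of fixpoint iteration the paper parallelizes; the sketch is ours and does not implement PreM itself.

```python
def transitive_closure(edges):
    """Semi-naive evaluation: each round joins only the newly derived
    facts (the delta) with the base relation, instead of recomputing everything."""
    total = set(edges)
    delta = set(edges)
    while delta:
        new = {(a, d) for (a, b) in delta for (c, d) in edges if b == c}
        delta = new - total          # keep only genuinely new facts
        total |= delta
    return total

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```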
6

Paudel, Nawaraj, and Jagdish Bhatta. "Cost-Based Query Optimization in Centralized Relational Databases". Journal of Institute of Science and Technology 24, no. 1 (June 26, 2019): 42–46. http://dx.doi.org/10.3126/jist.v24i1.24627.

Full text of the source
Abstract:
Query optimization is the most significant factor for any centralized relational database management system (RDBMS) that reduces the total execution time of a query. Query optimization is the process of executing a SQL (Structured Query Language) query in relational databases to determine the most efficient way to execute a given query by considering the possible query plans. The goal of query optimization is to optimize the given query for the sake of efficiency. Cost-based query optimization compares different strategies based on relative costs (amount of time that the query needs to run) and selects and executes one that minimizes the cost. The cost of a strategy is just an estimate based on how many estimated CPU and I/O resources that the query will use. In this paper, cost is considered by counting number of disk accesses for each query plan because disk access tends to be the dominant cost in query processing for centralized relational databases.
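A toy illustration of the cost model sketched in this abstract, with invented numbers and helper names: estimate each candidate plan's cost as a count of disk-page accesses and pick the cheapest plan.

```python
def full_scan_cost(table_pages):
    # a sequential scan reads every page of the table
    return table_pages

def index_lookup_cost(index_depth, matching_rows, rows_per_page):
    # descend the index, then fetch the pages holding the matching rows
    return index_depth + -(-matching_rows // rows_per_page)  # ceiling division

plans = {
    "seq_scan":   full_scan_cost(table_pages=10_000),
    "index_scan": index_lookup_cost(index_depth=3, matching_rows=200, rows_per_page=50),
}
best = min(plans, key=plans.get)
print(best, plans[best])   # index_scan 7 -> chosen because it touches far fewer pages
```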
7

Wang, Chenxiao, Zach Arani, Le Gruenwald, Laurent d'Orazio, and Eleazar Leal. "Re-optimization for Multi-objective Cloud Database Query Processing using Machine Learning". International Journal of Database Management Systems 13, no. 1 (February 28, 2021): 21–40. http://dx.doi.org/10.5121/ijdms.2021.13102.

Full text of the source
Abstract:
In cloud environments, hardware configurations, data usage, and workload allocations are continuously changing. These changes make it difficult for the query optimizer of a cloud database management system (DBMS) to select an optimal query execution plan (QEP). In order to optimize a query with a more accurate cost estimation, performing query re-optimizations during the query execution has been proposed in the literature. However, some of the re-optimizations may not provide any performance gain in terms of query response time or monetary costs, which are the two optimization objectives for cloud databases, and may also have negative impacts on the performance due to their overheads. This raises the question of how to determine when a re-optimization is beneficial. In this paper, we present a technique called ReOptML that uses machine learning to enable effective re-optimizations. This technique executes a query in stages, employs a machine learning model to predict whether a query re-optimization is beneficial after a stage is executed, and invokes the query optimizer to perform the re-optimization automatically. The experiments comparing ReOptML with existing query re-optimization algorithms show that ReOptML improves query response time from 13% to 35% for skewed data and from 13% to 21% for uniform data, and improves the monetary cost paid to cloud service providers from 17% to 35% on skewed data.
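The per-stage decision that ReOptML learns can be imitated by a hand-written heuristic; the sketch below is such a stand-in (invented thresholds, not the paper's model): re-optimize only when the cardinality estimate turned out to be far off and enough work remains to pay back the re-optimization overhead.

```python
def should_reoptimize(estimated_rows, actual_rows, reopt_overhead_s, est_remaining_s):
    """Stand-in for a learned re-optimization classifier (invented heuristic):
    re-optimize only when the cardinality estimate was badly off and the
    remaining work is large enough for a better plan to pay back the overhead."""
    misestimate = max(actual_rows, 1) / max(estimated_rows, 1)
    badly_off = misestimate > 4 or misestimate < 0.25
    return badly_off and est_remaining_s > 10 * reopt_overhead_s

print(should_reoptimize(estimated_rows=1_000, actual_rows=80_000,
                        reopt_overhead_s=0.2, est_remaining_s=30))   # True
```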
8

Sen, Rathijit, Abhishek Roy, Alekh Jindal, Rui Fang, Jeff Zheng, Xiaolei Liu, and Ruiping Li. "AutoExecutor". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2855–58. http://dx.doi.org/10.14778/3476311.3476362.

Full text of the source
Abstract:
Right-sizing resources for query execution is important for cost-efficient performance, but estimating upfront, before query execution, how performance is affected by resource allocations is difficult. We demonstrate AutoExecutor, a predictive system that uses machine learning models to predict query run times as a function of the number of allocated executors, which limits the maximum allowed parallelism, for Spark SQL queries running on Azure Synapse.
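A sketch of how such run-time predictions can drive right-sizing (the Amdahl-style model below is a stand-in; AutoExecutor trains real predictors from past executions): choose the smallest executor count whose predicted run time is within a tolerance of the best prediction.

```python
def pick_executor_count(predict_runtime, candidates, tolerance=1.10):
    """predict_runtime(n) -> predicted seconds with n executors.
    Returns the smallest n whose prediction is within `tolerance` of the best."""
    predictions = {n: predict_runtime(n) for n in candidates}
    best = min(predictions.values())
    return min(n for n, t in predictions.items() if t <= tolerance * best)

# Toy stand-in model: 60 s of serial work plus 300 s of perfectly parallel work.
model = lambda n: 60 + 300 / n
print(pick_executor_count(model, candidates=[1, 2, 4, 8, 16, 32, 64]))
# 32 -- beyond this point, extra executors buy less than a 10% improvement
```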
9

Beedkar, Kaustubh, David Brekardin, Jorge-Arnulfo Quiané-Ruiz, and Volker Markl. "Compliant geo-distributed data processing in action". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2843–46. http://dx.doi.org/10.14778/3476311.3476359.

Full text of the source
Abstract:
In this paper we present our work on compliant geo-distributed data processing. Our work focuses on the new dimension of dataflow constraints that regulate the movement of data across geographical or institutional borders. For example, European directives may regulate transferring only certain information fields (such as non personal information) or aggregated data. Thus, it is crucial for distributed data processing frameworks to consider compliance with respect to dataflow constraints derived from these regulations. We have developed a compliance-based data processing framework, which (i) allows for the declarative specification of dataflow constraints, (ii) determines if a query can be translated into a compliant distributed query execution plan, and (iii) executes the compliant plan over distributed SQL databases. We demonstrate our framework using a geo-distributed adaptation of the TPC-H benchmark data. Our framework provides an interactive dashboard, which allows users to specify dataflow constraints, and analyze and execute compliant distributed query execution plans.
10

Azhir, Elham, Mehdi Hosseinzadeh, Faheem Khan, and Amir Mosavi. "Performance Evaluation of Query Plan Recommendation with Apache Hadoop and Apache Spark". Mathematics 10, no. 19 (September 26, 2022): 3517. http://dx.doi.org/10.3390/math10193517.

Full text of the source
Abstract:
Access plan recommendation is a query optimization approach that executes new queries using previously created query execution plans (QEPs). In this method, the query optimizer divides the query space into clusters. However, traditional clustering algorithms take a significant amount of execution time for clustering such large datasets. The MapReduce distributed computing model provides efficient solutions for storing and processing vast quantities of data. Apache Spark and Apache Hadoop frameworks are used in the present investigation to cluster different sizes of query datasets in the MapReduce-based access plan recommendation method. The performance evaluation is performed based on execution time. The results of the experiments demonstrated the effectiveness of parallel query clustering in achieving high scalability. Furthermore, Apache Spark achieved better performance than Apache Hadoop, reaching an average speedup of 2x.

Doctoral dissertations on the topic "Query Executor"

1

Zeuch, Steffen. "Query Execution on Modern CPUs". Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19296.

Full text of the source
Abstract:
Over the last decades, database systems have been migrated from disk to memory architectures such as RAM, Flash, or NVRAM. Research has shown that this migration fundamentally shifts the performance bottleneck upwards in the memory hierarchy. Whereas disk-based database systems were largely dominated by disk bandwidth and latency, in-memory database systems mainly depend on the efficiency of faster memory components, e.g., RAM, caches, and registers. To address these challenges and enable the full potential of the available processing power of modern CPUs for database systems, this thesis proposes four approaches to reduce the impact of the Memory Wall. First, SIMD instructions increase the cache line utilization and decrease the number of executed instructions if they operate on an appropriate data layout. Thus, we adapt tree structures for processing with SIMD instructions so that the demands on the memory bus and on the processing units are decreased. Second, by modeling and executing queries following a unified model, we are able to achieve high resource utilization. Therefore, we propose a unified model that enables us to utilize knowledge about the query plan and the underlying hardware to optimize query execution. Third, we need fundamental knowledge about the individual database operators and their behavior and requirements to optimally distribute the resources among available computing units. We conduct an in-depth analysis of different workloads using performance counters to create these insights. Fourth, we propose a non-invasive progressive optimization approach based on in-depth knowledge of individual operators that is able to optimize query execution during run-time. In sum, using additional run-time statistics gathered by performance counters, a unified model, and SIMD instructions, this thesis improves query execution on modern CPUs.
2

Ayres, Fausto Veras Maranhao. "QEEF: An Extensible Query Execution Engine". Pontifícia Universidade Católica do Rio de Janeiro, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5110@1.

Full text of the source
Abstract:
Query processing in traditional Database Management Systems (DBMS) has been extensively studied in the literature and adopted in industry. Such success is, in part, due to the performance of their Query Execution Engines (QEE) for supporting the traditional query execution model. The advent of new query scenarios, mainly due to the web computational model, has motivated research on new execution models, such as the adaptive and continuous models, and on semistructured data models, such as XML, both not natively supported by traditional query engines. This thesis proposes the development of an extensible QEE adapted to the new execution and data models. To achieve this goal, we use a software design approach based on the framework technique to produce the Query Execution Engine Framework (QEEF). Moreover, we address the question of the orthogonality between execution and data models, which allows for executing query execution plans (QEP) with fragments in different models. The extensibility of our solution is specified in a QEP by an execution meta-model named QUEM (QUery Execution Meta-model) used to express different models in a meta-QEP. During query evaluation, the latter is pre-processed by the QEEF, producing a final QEP to be evaluated by the running QEE. The QEEF is instantiated for different execution and data models as part of the validation of this proposal.
3

Lundquist, Andreas. "Combining Result Size Calculation and Query Execution for the GraphQL Query Language". Thesis, Linköpings universitet, Databas och informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167086.

Full text of the source
Abstract:
GraphQL is an open source framework for creating adaptable API layers and was developed and released by Facebook in 2015. GraphQL includes both a query language and an execution engine and has quickly gained popularity among developers. However, GraphQL suffers from a problem: certain types of queries can generate huge response objects that grow exponentially in size. A recent research paper proposed a solution to this problem in the form of an algorithm that calculates the result size of a query without executing the query itself. The algorithm makes it possible to decide whether a query should be executed based on the query response size. In an implementation and evaluation of this algorithm, it was found that for simple queries, running the algorithm takes approximately the same time as executing the query, which means that the total query processing time is doubled. This thesis proposes a way to solve that problem by introducing an extended algorithm that combines the result size calculation with query execution. An implementation of the extended algorithm was evaluated and shows that it is a viable option that only has a minor impact on the total query processing time for most queries.
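To make the combined approach concrete, here is a small sketch (ours, not the thesis implementation) that executes a nested, GraphQL-like selection while tracking the size of the response it is building, so execution can stop as soon as a byte budget is exceeded.

```python
import json

class ResponseTooLarge(Exception):
    pass

def execute(query, obj, budget):
    """Resolve a nested field selection against `obj` while counting the size of
    the serialized result; abort as soon as the running total exceeds `budget`."""
    result, used = {}, [0]

    def resolve(selection, node, out):
        for field, sub in selection.items():
            value = node[field]
            if sub is None:                       # leaf field
                out[field] = value
                used[0] += len(json.dumps({field: value}))
            else:                                 # nested selection over a list
                out[field] = []
                for child in value:
                    child_out = {}
                    resolve(sub, child, child_out)
                    out[field].append(child_out)
            if used[0] > budget:
                raise ResponseTooLarge(f"response exceeded {budget} bytes")

    resolve(query, obj, result)
    return result

data = {"title": "Q", "reviews": [{"text": "ok"}, {"text": "great"}]}
print(execute({"title": None, "reviews": {"text": None}}, data, budget=200))
```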
4

Abadi, Daniel J. "Query execution in column-oriented database systems". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43043.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 145-148).
There are two obvious ways to map a two-dimensional relational database table onto a one-dimensional storage interface: store the table row-by-row, or store the table column-by-column. Historically, database system implementations and research have focused on the row-by-row data layout, since it performs best on the most common application for database systems: business transactional data processing. However, there are a set of emerging applications for database systems for which the row-by-row layout performs poorly. These applications are more analytical in nature, whose goal is to read through the data to gain new insight and use it to drive decision making and planning. In this dissertation, we study the problem of poor performance of row-by-row data layout for these emerging applications, and evaluate the column-by-column data layout opportunity as a solution to this problem. There have been a variety of proposals in the literature for how to build a database system on top of column-by-column layout. These proposals have different levels of implementation effort, and have different performance characteristics. If one wanted to build a new database system that utilizes the column-by-column data layout, it is unclear which proposal to follow. This dissertation provides (to the best of our knowledge) the only detailed study of multiple implementation approaches of such systems, categorizing the different approaches into three broad categories, and evaluating the tradeoffs between approaches. We conclude that building a query executer specifically designed for the column-by-column data layout is essential to achieve good performance. Consequently, we describe the implementation of C-Store, a new database system with a storage layer and query executer built for column-by-column data layout. We introduce three new query execution techniques that significantly improve performance. First, we look at the problem of integrating compression and execution so that the query executer is capable of directly operating on compressed data. This improves performance by improving I/O (less data needs to be read off disk), and CPU (the data need not be decompressed). We describe our solution to the problem of executer extensibility - how can new compression techniques be added to the system without having to rewrite the operator code? Second, we analyze the problem of tuple construction (stitching together attributes from multiple columns into a row-oriented "tuple"). Tuple construction is required when operators need to access multiple attributes from the same tuple; however, if done at the wrong point in a query plan, a significant performance penalty is paid. We introduce an analytical model and some heuristics to use that help decide when in a query plan tuple construction should occur. Third, we introduce a new join technique, the "invisible join" that improves performance of a specific type of join that is common in the applications for which column-by-column data layout is a good idea. Finally, we benchmark performance of the complete C-Store database system against other column-oriented database system implementation approaches, and against row-oriented databases. We benchmark two applications. The first application is a typical analytical application for which column-by-column data layout is known to outperform row-by-row data layout. The second application is another emerging application, the Semantic Web, for which column-oriented database systems are not currently used. We find that on the first application, the complete C-Store system performed 10 to 18 times faster than alternative column-store implementation approaches, and 6 to 12 times faster than a commercial database system that uses a row-by-row data layout. On the Semantic Web application, we find that C-Store outperforms other state-of-the-art data management techniques by an order of magnitude, and outperforms other common data management techniques by almost two orders of magnitude. Benchmark queries, which used to take multiple minutes to execute, can now be answered in several seconds.
by Daniel J. Abadi.
Ph.D.
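A tiny illustration of the "operate directly on compressed data" idea from the abstract above, using run-length encoding (invented example, not C-Store code): a filter or aggregate can evaluate its predicate once per run instead of once per row, without decompressing the column.

```python
def rle_encode(column):
    """Compress a sorted/clustered column into (value, run_length) pairs."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def count_matching(runs, predicate):
    """Evaluate the predicate once per run instead of once per row."""
    return sum(length for value, length in runs if predicate(value))

col = ["US"] * 4 + ["DE"] * 3 + ["FR"] * 2
runs = rle_encode(col)                             # [['US', 4], ['DE', 3], ['FR', 2]]
print(count_matching(runs, lambda v: v != "US"))   # 5, using 3 predicate calls instead of 9
```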
5

Fomkin, Ruslan. "Optimization and Execution of Complex Scientific Queries". Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9514.

Full text of the source
6

Liu, Feilong. "Accelerating Analytical Query Processing with Data Placement Conscious Optimization and RDMA-aware Query Execution". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1543532295915722.

Full text of the source
7

Ferreira, Miguel C. (Miguel Cacela Rosa Lopes Ferreira). "Compression and query execution within column oriented databases". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33150.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 65-66).
Compression is a known technique used by many database management systems ("DBMS") to increase performance [4, 5, 14]. However, not much research has been done into how compression can be used within column-oriented architectures. Storing data in columns increases the similarity between adjacent records, thus increasing the compressibility of the data. In addition, compression schemes not traditionally used in row-oriented DBMSs can be applied to column-oriented systems. This thesis presents a column-oriented query executor designed to operate directly on compressed data. We show that operating directly on compressed data can improve query performance. Additionally, the choice of compression scheme depends on the expected query workload, suggesting that for ad-hoc queries we may wish to store a column redundantly under different coding schemes. Furthermore, the executor is designed to be extensible so that the addition of new compression schemes does not impact operator implementation. The executor is part of a larger database system, known as CStore [10].
by Miguel C. Ferreira.
M.Eng.
8

Gupta, Ankush M. "Cross-engine query execution in federated database systems". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106013.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 47-48).
Duggan et al. have created a reference implementation of the BigDAWG system: a new architecture for future Big Data applications, guided by the philosophy that "one size does not fit all." Such applications not only call for large-scale analytics, but also for real-time streaming support, smaller analytics at interactive speeds, data visualization, and cross-storage-system queries. The importance and effectiveness of such a system has been demonstrated in a hospital application using data from an intensive care unit (ICU). In this report, we implement and evaluate a concrete version of a cross-system Query Executor and its interface with a cross-system Query Planner. In particular, we focus on cross-engine shuffle joins within the BigDAWG system.
by Ankush M. Gupta.
M. Eng.
9

Neumann, Thomas. "Efficient generation and execution of DAG-structured query graphs". [S.l.] : [s.n.], 2005. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11947805.

Full text of the source
10

Narula, Neha. "Distributed query execution on a replicated and partitioned database". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62436.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 63-64).
Web application developers partition and replicate their data amongst a set of SQL databases to achieve higher throughput. Given multiple copies of tables partitioned in different ways, developers must manually select different replicas in their application code. This work presents Dixie, a query planner and executor which automatically executes queries over replicas of partitioned data stored in a set of relational databases, and optimizes for high throughput. The challenge in choosing a good query plan lies in predicting query cost, which Dixie does by balancing row retrieval costs with the overhead of contacting many servers to execute a query. For web workloads, per-query overhead in the servers is a large part of the overall cost of execution. Dixie's cost calculation tends to minimize the number of servers used to satisfy a query, which is essential for minimizing this query overhead and obtaining high throughput; this is in direct contrast to optimizers over large data sets that try to maximize parallelism by parallelizing the execution of a query over all the servers. Dixie automatically takes advantage of the addition or removal of replicas without requiring changes in the application code. We show that Dixie sometimes chooses plans that existing parallel database query optimizers might not consider. For certain queries, Dixie chooses a plan that gives a 2.3x improvement in overall system throughput over a plan which does not take into account per-server query overhead costs. Using table replicas, Dixie provides a throughput improvement of 35% over a naive execution without replicas on an artificial workload generated by Pinax, an open source social web site.
by Neha Narula.
S.M.
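The trade-off Dixie's cost calculation weighs can be sketched as follows (constants invented): every contacted server adds a fixed per-query overhead, so a plan that reads more rows from one replica can beat a plan that spreads the same rows across many partitions.

```python
def plan_cost(rows_per_server, per_row_cost=0.01, per_server_overhead=5.0):
    """Total cost of a plan = per-server query overhead + row retrieval cost,
    summed over every server the plan contacts."""
    return sum(per_server_overhead + rows * per_row_cost for rows in rows_per_server)

# Fetch 1,000 rows either from a single replica or spread over 10 partitions.
single_replica = plan_cost([1000])        # 5 + 10.0 = 15.0
ten_partitions = plan_cost([100] * 10)    # 10 * (5 + 1.0) = 60.0
print(min(("single_replica", single_replica), ("ten_partitions", ten_partitions),
          key=lambda p: p[1]))            # ('single_replica', 15.0)
```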

Books on the topic "Query Executor"

1

Ling, Daniel Hiak Ong. Query execution and temporal support in a distributed database system. [s.l: The Author], 1988.

Find the full text of the source
2

Polychroniou, Orestis. Analytical Query Execution Optimized for all Layers of Modern Hardware. [New York, N.Y.?]: [publisher not identified], 2018.

Find the full text of the source
3

Krogh, Jesper Wisborg. MySQL 8 Query Performance Tuning: A Systematic Method for Improving Execution Speeds. Apress L. P., 2020.

Find the full text of the source
4

MARROW, Jack. As Melhores Ideias de NegÓcios Da Classe: Ideias de Pequenos NegÓcios para Quem Quer Executar Seu PrÓprio NegÓcio. Independently Published, 2022.

Find the full text of the source

Book chapters on the topic "Query Executor"

1

Sharygin, Eugene, Ruben Buchatskiy, Roman Zhuykov, and Arseny Sher. "Runtime Specialization of PostgreSQL Query Executor". In Lecture Notes in Computer Science, 375–86. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-74313-4_27.

Full text of the source
2

Dombrovskaya, Henrietta, Boris Novikov, and Anna Bailliekova. "Understanding Execution Plans". In PostgreSQL Query Optimization, 43–55. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6885-8_4.

Full text of the source
3

Bell, Charles. "Query Execution". In Expert MySQL, 543–85. Berkeley, CA: Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4660-2_14.

Full text of the source
4

Krogh, Jesper Wisborg. "Basic Query Execution". In MySQL Connector/Python Revealed, 83–132. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3694-9_3.

Full text of the source
5

Krogh, Jesper Wisborg. "Advanced Query Execution". In MySQL Connector/Python Revealed, 133–221. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3694-9_4.

Full text of the source
6

L’Esteve, Ron. "Adaptive Query Execution". In The Azure Data Lakehouse Toolkit, 327–38. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8233-5_14.

Full text of the source
7

Fritchey, Grant. "Execution Plan Generation". In SQL Server Query Performance Tuning, 269–81. Berkeley, CA: Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-6742-3_14.

Full text of the source
8

Fritchey, Grant. "Execution Plan Cache Behavior". In SQL Server Query Performance Tuning, 283–309. Berkeley, CA: Apress, 2014. http://dx.doi.org/10.1007/978-1-4302-6742-3_15.

Full text of the source
9

Fritchey, Grant. "Execution Plan Generation". In SQL Server 2017 Query Performance Tuning, 451–70. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3888-2_15.

Full text of the source
10

Korotkevitch, Dmitri. "Query Optimization and Execution". In Pro SQL Server Internals, 463–89. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1964-5_25.

Full text of the source

Conference abstracts on the topic "Query Executor"

1

Gurumurthy, Bala, David Broneske, Gabriel Campero Durand, Thilo Pionteck, and Gunter Saake. "ADAMANT: A Query Executor with Plug-In Interfaces for Easy Co-processor Integration". In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023. http://dx.doi.org/10.1109/icde55515.2023.00093.

Full text of the source
2

Song, Chunyao, Zheng Li, Tingjian Ge, and Jie Wang. "Query execution timing". In the 22nd ACM international conference. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2505515.2505736.

Full text of the source
3

Pomares, Alexandra. "Distributed query execution adaptation". In 2011 6th Colombian Computing Congress (CCC). IEEE, 2011. http://dx.doi.org/10.1109/colomcc.2011.5936301.

Full text of the source
4

Kyu, Khin Myat, and Aung Nway Oo. "Enhancement of Query Execution Time in SPARQL Query Processing". In 2020 International Conference on Advanced Information Technologies (ICAIT). IEEE, 2020. http://dx.doi.org/10.1109/icait51105.2020.9261805.

Full text of the source
5

Allenstein, Brett, Andrew Yost, Paul Wagner, and Joline Morrison. "A query simulation system to illustrate database query execution". In the 39th SIGCSE technical symposium. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1352135.1352301.

Full text of the source
6

Mühlbauer, Tobias, Wolf Rödiger, Robert Seilbeck, Alfons Kemper, and Thomas Neumann. "Heterogeneity-conscious parallel query execution". In the Tenth International Workshop. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2619228.2619230.

Full text of the source
7

Tang, Dixin, Zechao Shang, Aaron J. Elmore, Sanjay Krishnan, and Michael J. Franklin. "Thrifty Query Execution via Incrementability". In SIGMOD/PODS '20: International Conference on Management of Data. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3318464.3389756.

Full text of the source
8

Kumar, Raju Ranjan, and Muzzammil Hussain. "Query Execution over Encrypted Database". In 2015 Second International Conference on Advances in Computing and Communication Engineering (ICACCE). IEEE, 2015. http://dx.doi.org/10.1109/icacce.2015.13.

Full text of the source
9

Ganguly, Sumit, Waqar Hasan, and Ravi Krishnamurthy. "Query optimization for parallel execution". In the 1992 ACM SIGMOD international conference. New York, New York, USA: ACM Press, 1992. http://dx.doi.org/10.1145/130283.130291.

Full text of the source
10

Verma, Pulkit. "Data Efficient Algorithms and Interpretability Requirements for Personalized Assessment of Taskable AI Systems". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/693.

Full text of the source
Abstract:
The vast diversity of internal designs of black-box AI systems and their nuanced zones of safe functionality make it difficult for a layperson to use them without unintended side effects. The focus of my dissertation is to develop algorithms and requirements of interpretability that would enable a user to assess and understand the limits of an AI system's safe operability. We develop an assessment module that lets an AI system execute high-level instruction sequences in simulators and answer the user queries about its execution of sequences of actions. Our results show that such a primitive query-response capability is sufficient to efficiently derive a user-interpretable model of the system in stationary, fully observable, and deterministic settings.

Organizational reports on the topic "Query Executor"

1

Koopmann, Patrick. Actions with Conjunctive Queries: Projection, Conflict Detection and Verification. Technische Universität Dresden, 2018. http://dx.doi.org/10.25368/2022.243.

Full text of the source
Abstract:
Description Logic actions specify adaptations of description logic interpretations based on some preconditions defined using a description logic. We consider DL actions in which preconditions can be specified using DL axioms as well as using conjunctive queries, and combinations thereof. We investigate complexity bounds for the executability and the projection problem for these actions, which respectively ask whether an action can be executed on models of an interpretation, and which entailments are satisfied after an action has been executed on this model. In addition, we consider a set of new reasoning tasks concerned with conflicts and interactions that may arise if two actions are executed at the same time. Since these problems have not been investigated before for Description Logic actions, we investigate the complexity of these tasks both for actions with conjunctive queries and without them. Finally, we consider the verification problem for Golog programs formulated over our family of actions. Our complexity analysis considers several expressive DLs, and we provide tight complexity bounds for those for which the exact complexity of conjunctive query entailment is known.
2

Harkema, Marcel, Dick Quartel, Rob van der Mei, and Bart Gijsen. JPMT: A Java Performance Monitoring Tool. Centre for Telematics and Information Technology (CTIT), 2003. http://dx.doi.org/10.3990/1.5152400.

Full text of the source
Abstract:
This paper describes our Java Performance Monitoring Toolkit (JPMT), which is developed for detailed analysis of the behavior and performance of Java applications. JPMT represents internal execution behavior of Java applications by event traces, where each event represents the occurrence of some activity, such as thread creation, method invocation, and locking contention. JPMT supports event filtering during and after application execution. Each event is annotated by high-resolution performance attributes, e.g., duration of locking contention and CPU time usage by method invocations. JPMT is an open toolkit, its event trace API can be used to develop custom performance analysis applications. JPMT comes with an event trace visualizer and a command-line event trace query tool for scripting purposes. The instrumentation required for monitoring the application is added transparently to the user during run-time. Overhead is minimized by only instrumenting for events the user is interested in and by careful implementation of the instrumentation itself.