A complete bibliography on the topic "OLAP Workload"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "OLAP Workload".

An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "OLAP Workload"

1. Camilleri, Carl, Joseph G. Vella, and Vitezslav Nezval. "HTAP With Reactive Streaming ETL". Journal of Cases on Information Technology 23, no. 4 (October 2021): 1–19. http://dx.doi.org/10.4018/jcit.20211001.oa10.

Abstract:
In database management systems (DBMSs), query workloads can be classified as online transactional processing (OLTP) or online analytical processing (OLAP). These often run within separate DBMSs. In hybrid transactional and analytical processing (HTAP), both workloads may execute within the same DBMS. This article shows that it is possible to run separate OLTP and OLAP DBMSs, and still support timely business decisions from analytical queries running off fresh transactional data. Several setups to manage OLTP and OLAP workloads are analysed. Then, benchmarks on two industry standard DBMSs empirically show that, under an OLTP workload, a row-store DBMS sustains a 1000 times higher throughput than a columnar DBMS, whilst OLAP queries are more than 4 times faster on a columnar DBMS. Finally, a reactive streaming ETL pipeline is implemented which connects these two DBMSs. Separate benchmarks show that OLTP events can be streamed to an OLAP database within a few seconds.
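
To make the pipeline concrete, here is a minimal sketch of the streaming-ETL idea the paper benchmarks: OLTP change events are drained from a queue in micro-batches and applied to an analytical store, so analytical queries see fresh transactional data within seconds. The queue, schema, and batch size are illustrative assumptions, not the authors' implementation, which connects two industrial DBMSs.

```python
import queue
import sqlite3
import threading
import time

# Hypothetical stand-in: the row-store side pushes change events into a
# queue; a worker drains them into the analytical store in micro-batches.
events: "queue.Queue[tuple]" = queue.Queue()

olap = sqlite3.connect(":memory:", check_same_thread=False)
olap.execute("CREATE TABLE sales_fact (order_id INTEGER, amount REAL)")

def etl_worker(stop: threading.Event, batch_size: int = 100) -> None:
    """Apply queued OLTP events to the OLAP store, batch by batch."""
    while not stop.is_set() or not events.empty():
        try:
            batch = [events.get(timeout=0.1)]
        except queue.Empty:
            continue
        while len(batch) < batch_size and not events.empty():
            batch.append(events.get_nowait())
        olap.executemany("INSERT INTO sales_fact VALUES (?, ?)", batch)
        olap.commit()  # events become visible to analytical queries here

stop = threading.Event()
worker = threading.Thread(target=etl_worker, args=(stop,))
worker.start()
for i in range(1000):  # simulated OLTP commits
    events.put((i, 9.99))
time.sleep(1)
stop.set()
worker.join()
print(olap.execute("SELECT COUNT(*), SUM(amount) FROM sales_fact").fetchone())
```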

2. Rehrmann, Robin, Carsten Binnig, Alexander Böhm, Kihong Kim, and Wolfgang Lehner. "Sharing opportunities for OLTP workloads in different isolation levels". Proceedings of the VLDB Endowment 13, no. 10 (June 2020): 1696–1708. http://dx.doi.org/10.14778/3401960.3401967.

Abstract:
OLTP applications are usually executed by a high number of clients in parallel and typically face high throughput demands as well as strict latency requirements for individual statements. Interestingly, OLTP workloads are often read-heavy and comprise similar query patterns, which provides a potential to share work among statements belonging to different transactions. Consequently, OLAP techniques for sharing work have recently begun to be applied to OLTP workloads as well. In this paper, we present an approach for merging read statements within interactively submitted multi-statement transactions consisting of reads and writes. We first define a formal framework for merging transactions running under a given isolation level and provide insights into a prototypical implementation of merging within a commercial database system. In our experimental evaluation, we show that, depending on the isolation level, the load in the system, and the read share of the workload, an improvement of transaction throughput by up to a factor of 2.5 is possible without compromising the transactional semantics.
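
The sharing idea can be illustrated with a toy sketch: identical point reads arriving from concurrent transactions are merged into a single IN-list scan, and the shared result is demultiplexed. Table and transaction names are invented; the paper's actual contribution is deciding when such merging is legal under a given isolation level, which this sketch simply assumes.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(i, 100.0 * i) for i in range(10)])

# Pending point reads from different transactions: txn_id -> key.
# Merging them is only legal if they may all read the same snapshot,
# which is what the paper's isolation-level analysis establishes.
pending = {"t1": 3, "t2": 7, "t3": 3}

# One merged scan instead of len(pending) separate statements.
keys = sorted(set(pending.values()))
placeholders = ",".join("?" * len(keys))
rows = con.execute(
    f"SELECT id, balance FROM accounts WHERE id IN ({placeholders})", keys)
by_key = dict(rows.fetchall())

# Demultiplex the shared result back to the individual transactions.
results = {txn: by_key[key] for txn, key in pending.items()}
print(results)  # t1 and t3 share the row fetched for key 3
```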

3. Gaffney, Kevin P., Martin Prammer, Larry Brasfield, D. Richard Hipp, Dan Kennedy, and Jignesh M. Patel. "SQLite". Proceedings of the VLDB Endowment 15, no. 12 (August 2022): 3535–3547. http://dx.doi.org/10.14778/3554821.3554842.

Abstract:
In the two decades following its initial release, SQLite has become the most widely deployed database engine in existence. Today, SQLite is found in nearly every smartphone, computer, web browser, television, and automobile. Several factors are likely responsible for its ubiquity, including its in-process design, standalone codebase, extensive test suite, and cross-platform file format. While it supports complex analytical queries, SQLite is primarily designed for fast online transaction processing (OLTP), employing row-oriented execution and a B-tree storage format. However, fueled by the rise of edge computing and data science, there is a growing need for efficient in-process online analytical processing (OLAP). DuckDB, a database engine nicknamed "the SQLite for analytics", has recently emerged to meet this demand. While DuckDB has shown strong performance on OLAP benchmarks, it is unclear how SQLite compares. Furthermore, we are aware of no work that attempts to identify root causes for SQLite's performance behavior on OLAP workloads. In this paper, we discuss SQLite in the context of this changing workload landscape. We describe how SQLite evolved from its humble beginnings to the full-featured database engine it is today. We evaluate the performance of modern SQLite on three benchmarks, each representing a different flavor of in-process data management, including transactional, analytical, and blob processing. We delve into analytical data processing on SQLite, identifying key bottlenecks and weighing potential solutions. As a result of our optimizations, SQLite is now up to 4.2X faster on SSB. Finally, we discuss the future of SQLite, envisioning how it will evolve to meet new demands and challenges.
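
A quick way to reproduce the flavor of this comparison is to run the same aggregation in both engines in-process. A hedged sketch follows (toy data rather than SSB, and timings will vary by machine and version):

```python
import sqlite3
import time

import duckdb  # pip install duckdb

rows = [(i % 100, float(i % 1000)) for i in range(200_000)]  # toy fact table
query = "SELECT key, SUM(val) FROM facts GROUP BY key ORDER BY key"

lite = sqlite3.connect(":memory:")
lite.execute("CREATE TABLE facts (key INTEGER, val REAL)")
lite.executemany("INSERT INTO facts VALUES (?, ?)", rows)

duck = duckdb.connect(":memory:")
duck.execute("CREATE TABLE facts (key INTEGER, val DOUBLE)")
duck.executemany("INSERT INTO facts VALUES (?, ?)", rows)

for name, con in (("sqlite", lite), ("duckdb", duck)):
    start = time.perf_counter()
    con.execute(query).fetchall()
    print(f"{name}: {time.perf_counter() - start:.3f}s")
```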

4. Rodríguez-Mazahua, Nidia, Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, Giner Alor-Hernández, and Isaac Machorro-Cano. "Decision-Tree-Based Horizontal Fragmentation Method for Data Warehouses". Applied Sciences 12, no. 21 (October 28, 2022): 10942. http://dx.doi.org/10.3390/app122110942.

Abstract:
Data warehousing gives frameworks and means for enterprise administrators to methodically prepare, comprehend, and utilize data to improve strategic decision-making skills. One of the principal challenges for data warehouse designers is fragmentation. Several fragmentation approaches for data warehouses have been developed, since this technique can decrease the response time of OLAP (online analytical processing) queries and provides considerable benefits in table loading and maintenance tasks. This paper presents FTree, a horizontal fragmentation method that uses decision trees to fragment data warehouses, exploiting the effectiveness of this technique in classification. FTree determines the OLAP queries with major relevance, evaluates the predicates found in the workload, and, accordingly, builds the decision tree from which the horizontal fragmentation scheme is selected. To verify that the design is correct, the SSB (star schema benchmark) was used in the first instance; later, a tourist data warehouse was built and the fragmentation method was tested on it. The results of the experiments proved the efficacy of the method.
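
A rough sketch of the decision-tree step, using scikit-learn as a stand-in for FTree's tree construction; the dimension attributes, workload labels, and tree depth are invented for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical dimension attributes of 5,000 fact rows: (year, region_id).
X = np.column_stack([rng.integers(2018, 2024, 5000),
                     rng.integers(0, 4, 5000)])

# Label each row with the dominant workload query that touches it,
# mimicking FTree's use of the most relevant OLAP predicates.
y = np.where(X[:, 0] >= 2022, 0,                # Q0: recent years
             np.where(X[:, 1] == 2, 1, 2))      # Q1: region 2; Q2: the rest

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Each root-to-leaf path is a horizontal fragmentation predicate;
# every leaf becomes one fragment of the fact table.
print(export_text(tree, feature_names=["year", "region_id"]))
```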

5. Szárnyas, Gábor, Jack Waudby, Benjamin A. Steer, Dávid Szakállas, Altan Birler, Mingxi Wu, Yuchen Zhang, and Peter Boncz. "The LDBC Social Network Benchmark". Proceedings of the VLDB Endowment 16, no. 4 (December 2022): 877–890. http://dx.doi.org/10.14778/3574245.3574270.

Abstract:
The Social Network Benchmark's Business Intelligence workload (SNB BI) is a comprehensive graph OLAP benchmark targeting analytical data systems capable of supporting graph workloads. This paper marks the finalization of almost a decade of research in academia and industry via the Linked Data Benchmark Council (LDBC). SNB BI advances the state of the art in synthetic and scalable analytical database benchmarks in many aspects. Its base is a sophisticated data generator, implemented on a scalable distributed infrastructure, that produces a social graph with small-world phenomena, whose value properties follow skewed and correlated distributions and where values correlate with structure. This is a temporal graph where all nodes and edges follow lifespan-based rules with temporal skew, enabling realistic and consistent temporal inserts and (recursive) deletes. The query workload exploiting this skew and correlation is based on LDBC's "choke point"-driven design methodology and is expected to drive technical and scientific improvements in future (graph) database systems. SNB BI includes the first adoption of "parameter curation" in an analytical benchmark, a technique that ensures stable runtimes of query variants across different parameter values. Two performance metrics characterize peak single-query performance (power) and sustained concurrent query throughput. To demonstrate the portability of the benchmark, we present experimental results on a relational and a graph DBMS. Note that these do not constitute an official LDBC Benchmark Result; only audited results can use this trademarked term.
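
As an aside on the power metric: benchmarks in this family often define power via a geometric mean of single-query runtimes. The sketch below assumes that TPC-style convention for illustration; it is not taken from the SNB BI specification, and the runtimes are invented.

```python
import math

# Hypothetical per-query runtimes (seconds) from one power run.
runtimes = {"bi1": 0.42, "bi2": 1.37, "bi3": 0.08, "bi4": 5.10}

# Geometric mean of runtimes; the score is its inverse, scaled so that
# bigger is better (a TPC-H-style convention, assumed for illustration).
geo_mean = math.exp(sum(math.log(t) for t in runtimes.values()) / len(runtimes))
power = 3600.0 / geo_mean
print(f"geometric mean {geo_mean:.3f} s, power score {power:.1f}")
```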

6. Pisano, Valentina Indelli, Michele Risi, and Genoveffa Tortora. "How reduce the View Selection Problem through the CoDe Modeling". Journal on Advances in Theoretical and Applied Informatics 2, no. 2 (December 21, 2016): 19. http://dx.doi.org/10.26729/jadi.v2i2.2090.

Abstract:
Big Data visualization is not an easy task due to the sheer amount of information contained in data warehouses. The accuracy of the data relationships in a representation therefore becomes one of the most crucial aspects of business knowledge discovery. CoDe is a tool for modeling and visualizing information relationships between data: by processing several queries on a data mart, it generates a visualization of such data. On a large data warehouse, however, the computation of these queries increases the response time in proportion to query complexity. A common approach to speeding up data warehousing is to precompute a set of materialized views, store them in the warehouse, and use them to answer the workload queries. The goal of this paper is to present a new process that exploits CoDe modeling by determining the minimal number of required OLAP queries, thereby mitigating the view selection problem, i.e., selecting the optimal set of materialized views. In particular, the proposed process determines the minimal number of required OLAP queries, creates an ad hoc lattice structure to represent them, and selects on this structure the views to be materialized, using a heuristic based on processing time cost and view storage space. The results of an experiment on a real data warehouse show an improvement in the range of 36-98% with respect to an approach that does not use materialized views, and of 7% with respect to one that does. Moreover, we show how the results are affected by the lattice structure.
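
The view-selection step can be sketched with the classic greedy benefit heuristic over a cube lattice. The cost model below counts only estimated view sizes, a simplification of the paper's heuristic, which also weighs processing time against storage space; all numbers are invented.

```python
dims = ("product", "store", "time")

# Estimated row counts per group-by view (the cube lattice); invented.
size = {("product", "store", "time"): 6_000_000,
        ("product", "store"): 800_000, ("product", "time"): 600_000,
        ("store", "time"): 500_000, ("product",): 10_000,
        ("store",): 300, ("time",): 1_000, (): 1}

def cheapest(view, materialized):
    """Cost of answering `view` from the cheapest materialized ancestor."""
    return min(size[m] for m in materialized if set(view) <= set(m))

def greedy_select(k):
    chosen = [dims]  # the base cuboid is always available
    for _ in range(k):
        def benefit(v):
            # Total saving over all views that v can answer (subsets of v).
            return sum(max(cheapest(w, chosen) - size[v], 0)
                       for w in size if set(w) <= set(v))
        chosen.append(max((v for v in size if v not in chosen), key=benefit))
    return chosen[1:]

print(greedy_select(2))  # the two views with the largest total benefit
```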

7. Nurieva, L. M., and S. G. Kiselev. "Workload and salary as determinants of pedagogical graduates' employment by occupation". Education and Science Journal 23, no. 10 (December 15, 2021): 100–128. http://dx.doi.org/10.17853/1994-5639-2021-10-100-128.

Abstract:
Introduction. The effectiveness of the pedagogical education system remains one of the most discussed topics in modern professional discourse. A large proportion of leaders of the educational industry and representatives of the expert community are convinced that graduates of pedagogical universities and colleges are not sufficiently prepared for independent professional activity and drift away from work in their field. The refusal of graduates to take up employment in their occupation is attributed mainly to the poor quality of students' practical training in universities. Criticising the low uptake of school positions by young professionals, analysts often ignore the conditions of employment in educational institutions. Meanwhile, the open-access publication of the new statistical observation forms OO-1 and OO-2 makes it possible not only to track the employment outcomes of graduates of the vocational education system in schools across the country, but also to compare them with the conditions of hiring: the current staff workload and the level of teachers' salaries. The aim of the present research was to determine how pedagogical graduates' employment by occupation depends on the conditions of employment in schools: the level of teachers' salaries and the workload in the regions of Russia. Methodology and research methods. The methodological framework is a structural approach based on applied research procedures (observation, description, comparison, counting, measurement, modelling, etc.), within which general scientific (comparative and retrospective analysis, systematisation, generalisation) and statistical research methods (statistical and correlation analysis, etc.) were employed. Official documents of educational authorities at different levels and of educational institutions, scientific publications, and the federal statistical observation forms OO-1 and OO-2 were analysed. Regional educational statistics were processed using Online Analytical Processing technology, which makes it possible to obtain OLAP cubes and form analytical slices in accordance with emerging research tasks. The slices were studied in detail using the Analysis ToolPak add-in procedures and the statistical functions of the Excel library. In the course of the analysis, the authors calculated measures of the statistical relationship between the studied variables and produced their graphic visualisation. Results. The study established a high level of segmentation of the pedagogical labour market by territorial and qualification-age criteria. The effects of influence on the part of state and public institutions and practices, leading to systematic discrimination against certain groups of educators, were revealed. Regional data provide examples of discrimination against young teachers in remuneration, both at hiring, due to the lack of qualification grades, and in the course of work, as a consequence of the inaccessibility of payments from the incentive part of schools' salary funds. It is shown that improving the employment outcomes of pedagogical graduates is tied to improving the wage system, raising base rates, and reducing the intra-industry differentiation of earnings among workers of different ages. Practical significance. The authors are convinced that this article will clarify the approaches for adjusting the mechanism for distributing schools' wage funds, help develop measures to attract young people to teaching, and contribute to eliminating the shortage of personnel in the education system.

8. Wang, Jianying, Tongliang Li, Haoze Song, Xinjun Yang, Wenchao Zhou, Feifei Li, Baoyue Yan, et al. "PolarDB-IMCI: A Cloud-Native HTAP Database System at Alibaba". Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–25. http://dx.doi.org/10.1145/3589785.

Abstract:
Cloud-native databases have become the de facto choice for mission-critical applications on the cloud due to the need for high availability, resource elasticity, and cost efficiency. Meanwhile, driven by the increasing connectivity between data generation and analysis, users prefer a single database that efficiently processes both OLTP and OLAP workloads, which enhances data freshness and reduces the complexity of data synchronization and the overall business cost. In this paper, we summarize five crucial design goals for a cloud-native HTAP database based on our experience and customers' feedback: transparency, competitive OLAP performance, minimal perturbation of OLTP workloads, high data freshness, and excellent resource elasticity. As our solution to realize these goals, we present PolarDB-IMCI, a cloud-native HTAP database system designed and deployed at Alibaba Cloud. Our evaluation shows that PolarDB-IMCI handles HTAP efficiently on both experimental and production workloads; notably, it speeds up analytical queries by up to 149× on TPC-H (100 GB). PolarDB-IMCI introduces low visibility delay and little performance perturbation on OLTP workloads (<5%), and achieves resource elasticity by scaling out in tens of seconds.

9. Lee, Juchang, SeungHyun Moon, Kyu Hwan Kim, Deok Hoe Kim, Sang Kyun Cha, and Wook-Shin Han. "Parallel replication across formats in SAP HANA for scaling out mixed OLTP/OLAP workloads". Proceedings of the VLDB Endowment 10, no. 12 (August 2017): 1598–1609. http://dx.doi.org/10.14778/3137765.3137767.

10. Lee, Juchang, Wook-Shin Han, Hyoung Jun Na, Chang Gyoo Park, Kyu Hwan Kim, Deok Hoe Kim, Joo Yeon Lee, Sang Kyun Cha, and SeungHyun Moon. "Parallel replication across formats for scaling out mixed OLTP/OLAP workloads in main-memory databases". The VLDB Journal 27, no. 3 (April 16, 2018): 421–444. http://dx.doi.org/10.1007/s00778-018-0503-z.


Doctoral dissertations on the topic "OLAP Workload"

1. Paulin, James R. "Performance evaluation of concurrent OLTP and DSS workloads in a single database system". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ27065.pdf.

2. Paulin, James R. "Performance evaluation of concurrent OLTP and DSS workloads in a single database system". Dissertation (Computer Science), Carleton University, Ottawa, 1997.

3. Nilsson, Victor. "Evaluating Mitigations For Meltdown and Spectre: Benchmarking performance of mitigations against database management systems with OLTP workload". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15254.

Abstract:
With Spectre and Meltdown out in the public, a rushed effort was made by operating system vendors to patch these vulnerabilities. However, the mitigations against said vulnerabilities come with some form of performance impact. This study aims to find out how much of an impact the software mitigations against Spectre and Meltdown have on database management systems under an online transaction processing workload. An experiment was carried out to evaluate two popular open-source database management systems and see how they were affected before and after the software mitigations against Spectre and Meltdown were applied. The study found that there is an average impact of 4-5% on performance when the software mitigations are applied. The study also compared the two database management systems with each other and found that PostgreSQL can suffer a performance reduction of about 27% when both the hypervisor and the operating system are patched against Spectre and Meltdown.
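
A measurement of this kind reduces to a small client loop that counts committed transactions per second, run once on a patched kernel and once with mitigations disabled (recent Linux kernels accept the mitigations=off boot parameter). A toy sketch, with sqlite3 standing in for the server DBMSs the thesis evaluated:

```python
import sqlite3
import time

def tps(duration_s: float = 5.0) -> float:
    """Committed single-row transactions per second (toy OLTP loop)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
    done, deadline = 0, time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        con.execute("INSERT INTO t (v) VALUES (?)", (done,))
        con.commit()
        done += 1
    return done / duration_s

# Run once on a kernel with mitigations enabled and once after booting
# with mitigations=off; the ratio of the two numbers approximates the
# overhead the thesis measured at roughly 4-5% for its workloads.
print(f"{tps():.0f} transactions/s")
```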

4. Rajkumar, S. "Enhancing Coverage and Robustness of Database Generators". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5528.

Abstract:
Generating synthetic databases that capture essential data characteristics of client databases is a common requirement for enterprise database vendors. This need stems from a variety of use-cases, such as application testing and assessing performance impacts of planned engine upgrades. A rich body of literature exists in this area, spanning from the early techniques that simply generated data ab-initio to the contemporary ones that use a predefined client query workload to guide the data generation. In the latter category, the aim specifically is to ensure volumetric similarity -- that is, assuming a common choice of query execution plans at the client and vendor sites, the output row cardinalities of individual operators in these plans are similar in the original and synthetic databases. Hydra is a recently proposed data regeneration framework that provides volumetric similarity. In addition, it also provides a mechanism to generate data dynamically during query execution, using a minuscule database summary. Notwithstanding its desirable characteristics, Hydra has the following critical limitations: (a) limited scope of SQL operators in the input query workload, (b) poor scalability with respect to the number of queries in the input workload, and (c) poor volumetric similarity on unseen queries. The data generation algorithm internally uses a linear programming (LP) solver that throttles the workload scalability. This not only puts a threshold on the training (seen) workload size but also reduces the accuracy for test (unseen) queries. Robustness towards test queries is further adversely affected by design choices such as a lack of preference among candidate synthetic databases, and artificial skew in the generated data. In this work, we present an enhanced version of Hydra, called High-Fidelity Hydra (HF-Hydra), which attempts to address the above limitations. To start with, we expand the SQL operator coverage to also include the LIKE operator, and, in certain restricted settings, projection-based operators such as GROUP BY and DISTINCT. To sidestep the challenge of workload scalability, HF-Hydra outputs not one, but a suite of database summaries such that they collectively cover the entire input workload. The division of the workload into the associated sub-workloads is governed by heuristics that aim to balance robustness with LP solvability. For generating richer database summaries, HF-Hydra additionally exploits metadata statistics maintained by the database engine. Further, the database query optimizer is leveraged to make the choice among the various candidate databases. The data generation is also augmented to provide greater diversity in the represented values. Finally, when a test query is fired, HF-Hydra directs it to the database summary that is expected to provide the highest volumetric similarity. We have experimentally evaluated HF-Hydra on a customized set of queries based on the TPC-DS decision-support benchmark framework. We first evaluated the specialized case where each training query has its own summary, and here HF-Hydra achieves perfect volumetric similarity. Further, each summary construction took just under a second and the summary sizes were just in the order of a few tens of kilobytes. Also, our dynamic generation technique produced gigabytes of data in just a few seconds. For the general setting of a limited set of summaries representing the training query workload, the data generated by HF-Hydra was compared with that from Hydra. We observed that HF-Hydra delivers more than forty percent better accuracy for outputs from filter nodes in the plans, while also achieving an improvement of about twenty percent with regard to join nodes. Further, the degradation in volumetric similarity is minor as compared to the one-summary scenario, while the summary production is significantly more efficient due to reduced overheads on the LP solver. In summary, HF-Hydra takes a substantive step forward with regard to creating expressive, robust, and scalable data regeneration frameworks with immediate relevance to testing deployments.
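
The volumetric-similarity core of this line of work can be illustrated as a small linear program: choose how many synthetic rows fall into each disjoint predicate region so that every observed operator cardinality is matched. The regions and cardinalities below are invented, and scipy is assumed available; Hydra's actual LP formulation is richer.

```python
import numpy as np
from scipy.optimize import linprog

# x[i] = number of synthetic rows generated in disjoint predicate region i.
# Each equality constraint pins one observed cardinality (invented numbers).
A_eq = np.array([
    [1, 1, 0, 0],   # filter Q1 selects regions 0 and 1 -> 700 rows
    [0, 1, 1, 0],   # filter Q2 selects regions 1 and 2 -> 450 rows
    [1, 1, 1, 1],   # total table cardinality           -> 1000 rows
])
b_eq = np.array([700, 450, 1000])

# A zero objective turns this into a pure feasibility problem: any
# nonnegative solution reproduces the workload's operator cardinalities.
res = linprog(c=np.zeros(4), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(res.x)  # rows per region; the summary stays tiny, data comes later
```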

5. Katsuno, Ian. "SD Storage Array: Development and Characterization of a Many-device Storage Architecture". Thesis, 2013. http://hdl.handle.net/1807/42978.

Abstract:
Transactional workloads have storage request streams consisting of many small, independent, random requests. Flash memory is well suited to these types of access patterns, but is not always cost-effective. This thesis presents a novel storage architecture called the SD Storage Array (SDSA), which adopts a many-device approach. It utilizes many flash storage devices in the form of an array of Secure Digital (SD) cards. This approach leverages the commodity status of SD cards to pursue a cost-effective means of providing the high throughput that transactional workloads require. Characterization of a prototype revealed that when the request stream was 512B randomly addressed reads, the SDSA provided 1.5 times the I/O operations per second (IOPS) of a top-of-the-line solid state drive, provided there were at least eight requests in-flight. A scale-out simulation showed the IOPS should scale with the size of the array, provided there are no upstream bottlenecks.

Books on the topic "OLAP Workload"

1. Pollack, Edward. Performance Optimization for OLAP Workloads in SQL Server. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7000-4.

2. Pollack, Edward. Analytics Optimization with Columnstore Indexes in Microsoft SQL Server: Optimizing OLAP Workloads. Apress L. P., 2022.


Book chapters on the topic "OLAP Workload"

1. Bog, Anja, Kai Sachs, Alexander Zeier, and Hasso Plattner. "Normalization in a Mixed OLTP and OLAP Workload Scenario". In Topics in Performance Evaluation, Measurement and Characterization, 67–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32627-1_5.

2. Ailamaki, Anastasia, Erietta Liarou, Pinar Tozun, Danica Porobic, and Iraklis Psaroudakis. "Scaling-up OLAP Workloads". In Databases on Modern Hardware, 56–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-01858-9_5.

3. Hasse, Christof, and Gerhard Weikum. "Inter- and Intra-Transaction Parallelism for Combined OLTP/OLAP Workloads". In Advanced Transaction Models and Architectures, 279–299. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6217-7_11.

4. Park, Kyounghyun, Hai Thanh Mai, Miyoung Lee, and Hee Sun Won. "A Flexible Page Storage Model for Mixed OLAP and OLTP Workloads". In Computer Science and its Applications, 765–770. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-45402-2_108.

5. Rizzi, Stefano, and Enrico Gallinucci. "CubeLoad: A Parametric Generator of Realistic OLAP Workloads". In Advanced Information Systems Engineering, 610–624. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07881-6_41.

6. Chang, Xu, Yongbin Liu, Zhanpeng Mo, and Li Zha. "Performance Analysis and Optimization of Alluxio with OLAP Workloads over Virtual Infrastructure". In Big Scientific Data Management, 319–330. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28061-1_31.

7. Sahasranamam, Srinivasan Varadarajan, Paul Cao, Rajesh Tadakamadla, and Scott Norton. "Lessons from OLTP Workload on Multi-socket HPE Integrity Superdome X System". In Performance Evaluation and Benchmarking. Traditional - Big Data - Internet of Things, 78–89. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54334-5_6.

8. Boukhobza, Jalil, Ilyes Khetib, and Pierre Olivier. "Characterization of OLTP I/O Workloads for Dimensioning Embedded Write Cache for Flash Memories: A Case Study". In Model and Data Engineering, 97–109. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24443-8_12.

9. Bentayeb, Fadila, Cécile Favre, and Omar Boussaid. "Dynamic Workload for Schema Evolution in Data Warehouses". In Complex Data Warehousing and Knowledge Discovery for Advanced Retrieval Development, 28–46. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-748-5.ch002.

Abstract:
A data warehouse allows the integration of heterogeneous data sources for identified analysis purposes. The data warehouse schema is designed according to the available data sources and the users' analysis requirements. In order to answer new individual analysis needs, the authors previously proposed, in recent work, a solution for on-line analysis personalization. They based their solution on a user-driven approach to data warehouse schema evolution, which consists in creating new hierarchy levels in OLAP (on-line analytical processing) dimensions. One of the main objectives of OLAP, as the acronym itself indicates, is performance during the analysis process. Since data warehouses contain a large volume of data, answering decision queries efficiently requires particular access methods. The main approach is to use redundant optimization structures such as views and indices. This entails selecting an appropriate set of materialized views and indices which minimizes total query response time, given a limited storage space. A judicious selection must be cost-driven and based on a workload that represents a set of users' queries on the data warehouse. In this chapter, the authors address the issues related to the workload's evolution and maintenance in data warehouse systems in response to new requirements resulting from users' personalized analysis needs. The main issue is to avoid regenerating the workload from scratch. Hence, they propose a workload management system which helps the administrator maintain and dynamically adapt the workload according to changes arising in the data warehouse schema. To achieve this maintenance, the authors propose two types of workload updates: (1) maintaining existing queries consistent with the new data warehouse schema, and (2) creating new queries based on the new dimension hierarchy levels. Their system helps the administrator adopt a proactive approach to managing data warehouse performance. To validate their workload management system, the authors address the implementation issues of their proposed prototype, which has been developed within a client/server architecture, with a Web client interfaced with the Oracle 10g database management system.
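
The two proposed update types can be sketched as plain query-set maintenance. Table, column, and level names below are invented, and a real system would rewrite parsed query trees rather than strings:

```python
workload = [
    "SELECT city, SUM(sales) FROM fact JOIN dim_geo USING (geo_id) GROUP BY city",
]

def maintain(queries, old, new):
    """Type 1: rewrite queries so they stay valid after a schema change."""
    return [q.replace(old, new) for q in queries]

def extend(queries, old_level, new_level):
    """Type 2: clone aggregation queries onto a new hierarchy level."""
    return [q.replace(old_level, new_level) for q in queries if "GROUP BY" in q]

workload = maintain(workload, "dim_geo", "dim_geography")  # evolved schema
workload += extend(workload, "city", "sales_zone")         # new level added
for q in workload:
    print(q)
```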

10. Pollack, Edward. "OLAP vs. OLTP - How Do They Differ". In Performance Optimization for OLAP Workloads in SQL Server. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7000-4_2.


Conference abstracts on the topic "OLAP Workload"

1. Bog, Anja, Kai Sachs, and Hasso Plattner. "Interactive performance monitoring of a composite OLTP and OLAP workload". In Proceedings of the 2012 International Conference on Management of Data (SIGMOD '12). New York, NY, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2213836.2213921.

2. Cuzzocrea, Alfredo, Rim Moussa, and Enzo Mumolo. "Yet Another Automated OLAP Workload Analyzer: Principles, and Experiences". In Proceedings of the 20th International Conference on Enterprise Information Systems. SCITEPRESS - Science and Technology Publications, 2018. http://dx.doi.org/10.5220/0006812202930298.

3. Yoshimi, Masato, Ryu Kudo, Yasin Oge, Yuta Terada, Hidetsugu Irie, and Tsutomu Yoshinaga. "Accelerating OLAP Workload on Interconnected FPGAs with Flash Storage". In 2014 Second International Symposium on Computing and Networking (CANDAR). IEEE, 2014. http://dx.doi.org/10.1109/candar.2014.87.

4. Haque, Waqar. "An Interactive Framework to Allocate and Manage Teaching Workload using Hybrid OLAP Cubes". In 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). IEEE, 2018. http://dx.doi.org/10.1109/iemcon.2018.8614818.

5. Bog, Anja, Kai Sachs, and Alexander Zeier. "Benchmarking database design for mixed OLTP and OLAP workloads". In Proceedings of the Second Joint WOSP/SIPEW International Conference on Performance Engineering. New York, NY, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1958746.1958806.

6. Bog, Anja, Mathias Domschke, Juergen Mueller, and Alexander Zeier. "A framework for simulating combined OLTP and OLAP workloads". In 2009 16th International Conference on Industrial Engineering and Engineering Management (IE&EM). IEEE, 2009. http://dx.doi.org/10.1109/icieem.2009.5344329.

7. Sreenivasan, Krishnamachar. "Cooling of a Many-Core Multiprocessor: Experimental Results for OLTP Workloads". In ASME 2013 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/ipack2013-73265.

Abstract:
The stochastic heat conduction differential equation, in spite of its complexity, allows stationary solutions valid over a certain range of variables characterizing heat flow in multiprocessor cores. The heat conduction equation is recast to account for the anisotropy of a many-core multiprocessor, in which the heat generated at a given location depends on whether it is a cache, processor, bus controller, or memory controller; within the core, the generated heat depends on the hit rate, processor utilization, cache organization, and the technology used. Thermal conductivity of and heat generation in the core are treated as stochastic variables, and the influence of workloads, hitherto unrecognized, is explicitly accounted for in determining the temperature distribution and its variation with processor clock frequency. Relationships derived from first principles indicate that the rise in temperature with processor frequency for an OLTP workload is not as catastrophic as predicted by some industry brochures. A general framework for heat conduction in an orthotropic rectangular slab (representing a many-core processor) with stochastic values of thermal conductivity and heat generation is developed; the theoretical trend is validated using published data for OLTP workloads to obtain the temperature at the core surface as a function of clock frequency for the deterministic case. Openly available data from the Transaction Processing Performance Council's (TPC) controlled, closely audited experiments for TPC-C workloads during the period 2000-2011 were analyzed to determine the relation between throughput, clock frequency, main memory size, number of cores, and power consumed. Operating systems, compilers, linkers, processor architecture, cache, main memory, and storage sizes changed drastically during this ten-year period, not to mention that hyper-threading was unknown in 2000. This analysis yields equations for throughput and power consumed, which, for the specific case of 64 processors with a main memory of 32 GB and a million users, become W = 1075·f^0.22. For the isotropic case, the temperature difference at the surface may be expressed, for the case under study, as ΔT = 71.1·f^0.22. This demonstrates that chip temperature for OLTP workloads does not increase to catastrophic values with increasing frequency; this behavior varies for other types of workloads.
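
The two fitted power laws quoted above can be evaluated directly; a small sketch (frequency units as in the paper's fit):

```python
# Power-law fits quoted above for the 64-processor, 32 GB, million-user
# TPC-C case: power drawn W and surface temperature rise vs. clock f.
def power_w(f: float) -> float:
    return 1075 * f ** 0.22

def delta_t(f: float) -> float:
    return 71.1 * f ** 0.22

# The weak 0.22 exponent is the abstract's point: doubling f raises
# W and dT by only about 17%, far from a catastrophic increase.
for f in (1.0, 2.0, 3.0, 4.0):
    print(f"f = {f:.1f}: W = {power_w(f):6.1f}, dT = {delta_t(f):5.1f}")
```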

8. Daase, Björn, Lars Jonas Bollmeier, Lawrence Benson, and Tilmann Rabl. "Maximizing Persistent Memory Bandwidth Utilization for OLAP Workloads". In SIGMOD/PODS '21: International Conference on Management of Data. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3448016.3457292.

9. Rupley, Annavaram, De Vale, Diep, and Black. "Comparing and contrasting a commercial OLTP workload with CPU2000 on IPF". In 2002 IEEE International Workshop on Workload Characterization. IEEE, 2002. http://dx.doi.org/10.1109/wwc.2002.1226493.

10. Liu, Bin, Junichi Tatemura, Oliver Po, Wang-Pin Hsiung, and Hakan Hacigumus. "Automatic entity-grouping for OLTP workloads". In 2014 IEEE 30th International Conference on Data Engineering (ICDE). IEEE, 2014. http://dx.doi.org/10.1109/icde.2014.6816694.
