
Journal articles on the topic 'OLAP Workload'


Consult the top 50 journal articles for your research on the topic 'OLAP Workload.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Camilleri, Carl, Joseph G. Vella, and Vitezslav Nezval. "HTAP With Reactive Streaming ETL." Journal of Cases on Information Technology 23, no. 4 (October 2021): 1–19. http://dx.doi.org/10.4018/jcit.20211001.oa10.

Abstract:
In database management systems (DBMSs), query workloads can be classified as online transactional processing (OLTP) or online analytical processing (OLAP). These often run within separate DBMSs. In hybrid transactional and analytical processing (HTAP), both workloads may execute within the same DBMS. This article shows that it is possible to run separate OLTP and OLAP DBMSs, and still support timely business decisions from analytical queries running off fresh transactional data. Several setups to manage OLTP and OLAP workloads are analysed. Then, benchmarks on two industry standard DBMSs empirically show that, under an OLTP workload, a row-store DBMS sustains a 1000 times higher throughput than a columnar DBMS, whilst OLAP queries are more than 4 times faster on a columnar DBMS. Finally, a reactive streaming ETL pipeline is implemented which connects these two DBMSs. Separate benchmarks show that OLTP events can be streamed to an OLAP database within a few seconds.
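The pipeline described above can be illustrated with a minimal sketch of the pattern only: OLTP commits are pushed onto a stream and applied to the analytical store by a consumer. This is not the authors' implementation; real systems would use change-data-capture between two DBMSs, not an in-process queue, and all names here are invented.

```python
import queue
import threading

events = queue.Queue()
olap_store = []          # stand-in for the columnar OLAP database

def consumer():
    # "Transform/load" side of the pipeline: drain the stream into OLAP.
    while True:
        row = events.get()
        if row is None:          # shutdown marker
            break
        olap_store.append(row)

t = threading.Thread(target=consumer)
t.start()
for committed_row in [("order", 1), ("order", 2)]:
    events.put(committed_row)    # emitted on each OLTP commit
events.put(None)
t.join()
print(olap_store)  # [('order', 1), ('order', 2)]
```

The queue decouples the transactional writer from the analytical loader, which is the essential property the paper's reactive streaming ETL relies on.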
2

Rehrmann, Robin, Carsten Binnig, Alexander Böhm, Kihong Kim, and Wolfgang Lehner. "Sharing opportunities for OLTP workloads in different isolation levels." Proceedings of the VLDB Endowment 13, no. 10 (June 2020): 1696–708. http://dx.doi.org/10.14778/3401960.3401967.

Abstract:
OLTP applications are usually executed by a high number of clients in parallel and are typically faced with high throughput demands as well as constrained latency requirements for individual statements. Interestingly, OLTP workloads are often read-heavy and comprise similar query patterns, which provides a potential to share work between statements belonging to different transactions. Consequently, OLAP techniques for sharing work have lately begun to be applied to OLTP workloads as well. In this paper, we present an approach for merging read statements within interactively submitted multi-statement transactions consisting of reads and writes. We first define a formal framework for merging transactions running under a given isolation level and provide insights into a prototypical implementation of merging within a commercial database system. In our experimental evaluation, we show that, depending on the isolation level, the load in the system, and the read share of the workload, an improvement of the transaction throughput by up to a factor of 2.5X is possible without compromising the transactional semantics.
3

Gaffney, Kevin P., Martin Prammer, Larry Brasfield, D. Richard Hipp, Dan Kennedy, and Jignesh M. Patel. "SQLite." Proceedings of the VLDB Endowment 15, no. 12 (August 2022): 3535–47. http://dx.doi.org/10.14778/3554821.3554842.

Abstract:
In the two decades following its initial release, SQLite has become the most widely deployed database engine in existence. Today, SQLite is found in nearly every smartphone, computer, web browser, television, and automobile. Several factors are likely responsible for its ubiquity, including its in-process design, standalone codebase, extensive test suite, and cross-platform file format. While it supports complex analytical queries, SQLite is primarily designed for fast online transaction processing (OLTP), employing row-oriented execution and a B-tree storage format. However, fueled by the rise of edge computing and data science, there is a growing need for efficient in-process online analytical processing (OLAP). DuckDB, a database engine nicknamed "the SQLite for analytics", has recently emerged to meet this demand. While DuckDB has shown strong performance on OLAP benchmarks, it is unclear how SQLite compares. Furthermore, we are aware of no work that attempts to identify root causes for SQLite's performance behavior on OLAP workloads. In this paper, we discuss SQLite in the context of this changing workload landscape. We describe how SQLite evolved from its humble beginnings to the full-featured database engine it is today. We evaluate the performance of modern SQLite on three benchmarks, each representing a different flavor of in-process data management, including transactional, analytical, and blob processing. We delve into analytical data processing on SQLite, identifying key bottlenecks and weighing potential solutions. As a result of our optimizations, SQLite is now up to 4.2X faster on SSB. Finally, we discuss the future of SQLite, envisioning how it will evolve to meet new demands and challenges.
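The in-process analytical querying that the abstract discusses can be demonstrated with Python's standard-library `sqlite3` module: a tiny illustrative star-schema-style aggregation (a fact table joined to a dimension table), not one of the paper's benchmark queries. The schema and data are made up.

```python
import sqlite3

# In-memory SQLite database with a tiny illustrative star schema:
# one fact table (sales) and one dimension table (dates).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dates(date_id INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE sales(date_id INTEGER, amount REAL);
""")
con.executemany("INSERT INTO dates VALUES (?, ?)",
                [(1, 2021), (2, 2021), (3, 2022)])
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.5), (3, 20.0)])

# An OLAP-style aggregation: total revenue per year via a
# foreign-key join from the fact table to the dimension table.
rows = con.execute("""
    SELECT d.year, SUM(s.amount)
    FROM sales s JOIN dates d ON s.date_id = d.date_id
    GROUP BY d.year ORDER BY d.year
""").fetchall()
print(rows)  # [(2021, 22.5), (2022, 20.0)]
```

Queries of this shape (join plus group-by over a fact table) are exactly the kind the paper's SSB workload stresses.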
4

Rodríguez-Mazahua, Nidia, Lisbeth Rodríguez-Mazahua, Asdrúbal López-Chau, Giner Alor-Hernández, and Isaac Machorro-Cano. "Decision-Tree-Based Horizontal Fragmentation Method for Data Warehouses." Applied Sciences 12, no. 21 (October 28, 2022): 10942. http://dx.doi.org/10.3390/app122110942.

Abstract:
Data warehousing gives frameworks and means for enterprise administrators to methodically prepare, comprehend, and utilize the data to improve strategic decision-making skills. One of the principal challenges to data warehouse designers is fragmentation. Currently, several fragmentation approaches for data warehouses have been developed since this technique can decrease the OLAP (online analytical processing) query response time and it provides considerable benefits in table loading and maintenance tasks. In this paper, a horizontal fragmentation method, called FTree, that uses decision trees to fragment data warehouses is presented to take advantage of the effectiveness that this technique provides in classification. FTree determines the OLAP queries with major relevance, evaluates the predicates found in the workload, and according to this, builds the decision tree to select the horizontal fragmentation scheme. To verify that the design is correct, the SSB (star schema benchmark) was used in the first instance; later, a tourist data warehouse was built, and the fragmentation method was tested on it. The results of the experiments proved the efficacy of the method.
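The core idea above, deriving a horizontal fragmentation scheme from the predicates found in the OLAP workload, can be illustrated with a much simpler stand-in for FTree's decision tree: split an attribute's domain at the constants used in workload predicates. This is not the authors' algorithm, and all names and values are hypothetical.

```python
def fragment_bounds(predicate_constants):
    """Split points taken from workload predicates on one attribute."""
    return sorted(set(predicate_constants))

def assign_fragment(value, bounds):
    """Index of the half-open interval the value falls into."""
    for i, b in enumerate(bounds):
        if value < b:
            return i
    return len(bounds)

# Workload predicates such as year < 2000 and year < 2010 yield splits,
# so each row lands in the fragment matching the predicates that select it.
bounds = fragment_bounds([2000, 2010, 2000])
rows = [1995, 2003, 2015]
print([assign_fragment(r, bounds) for r in rows])  # [0, 1, 2]
```

FTree's contribution is choosing *which* predicates matter via a decision tree trained on query relevance; the sketch only shows how a chosen predicate set induces fragments.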
5

Szárnyas, Gábor, Jack Waudby, Benjamin A. Steer, Dávid Szakállas, Altan Birler, Mingxi Wu, Yuchen Zhang, and Peter Boncz. "The LDBC Social Network Benchmark." Proceedings of the VLDB Endowment 16, no. 4 (December 2022): 877–90. http://dx.doi.org/10.14778/3574245.3574270.

Abstract:
The Social Network Benchmark's Business Intelligence workload (SNB BI) is a comprehensive graph OLAP benchmark targeting analytical data systems capable of supporting graph workloads. This paper marks the finalization of almost a decade of research in academia and industry via the Linked Data Benchmark Council (LDBC). SNB BI advances the state of the art in synthetic and scalable analytical database benchmarks in many aspects. Its base is a sophisticated data generator, implemented on a scalable distributed infrastructure, that produces a social graph with small-world phenomena, whose value properties follow skewed and correlated distributions and where values correlate with structure. This is a temporal graph where all nodes and edges follow lifespan-based rules with temporal skew enabling realistic and consistent temporal inserts and (recursive) deletes. The query workload exploiting this skew and correlation is based on LDBC's "choke point"-driven design methodology and will entice technical and scientific improvements in future (graph) database systems. SNB BI includes the first adoption of "parameter curation" in an analytical benchmark, a technique that ensures stable runtimes of query variants across different parameter values. Two performance metrics characterize peak single-query performance (power) and sustained concurrent query throughput. To demonstrate the portability of the benchmark, we present experimental results on a relational and a graph DBMS. Note that these do not constitute an official LDBC Benchmark Result - only audited results can use this trademarked term.
6

Pisano, Valentina Indelli, Michele Risi, and Genoveffa Tortora. "How reduce the View Selection Problem through the CoDe Modeling." Journal on Advances in Theoretical and Applied Informatics 2, no. 2 (December 21, 2016): 19. http://dx.doi.org/10.26729/jadi.v2i2.2090.

Abstract:
Big Data visualization is not an easy task due to the sheer amount of information contained in data warehouses, so the accuracy of data relationships in a representation becomes one of the most crucial aspects of business knowledge discovery. CoDe is a tool that allows users to model and visualize information relationships between data: by processing several queries on a data mart, it generates a visualization of such data. However, on a large data warehouse, the computation of these queries increases the response time with the query complexity. A common approach to speed up data warehousing is to precompute a set of materialized views, store them in the warehouse, and use them to compute the workload queries. The goal of this paper is to present a new process that exploits CoDe modeling to determine the minimal number of required OLAP queries and to mitigate the view selection problem, i.e., selecting the optimal set of materialized views. In particular, the proposed process determines the minimal number of required OLAP queries, creates an ad hoc lattice structure to represent them, and selects on this structure the views to be materialized, taking into account a heuristic based on the processing time cost and the view storage space. The results of an experiment on a real data warehouse show an improvement in the range of 36-98% with respect to an approach that does not consider materialized views, and 7% with respect to an approach that exploits them. Moreover, we show how the results are affected by the lattice structure.
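Selecting views to materialize over a lattice is a classic problem; a minimal sketch of the standard greedy benefit heuristic (in the style of Harinarayan et al., not this paper's heuristic, which also weighs storage space) over a toy lattice. The view names, sizes, and derivability relation are all made up.

```python
# Each view maps to (its row count, the set of views answerable from it).
size = {"ABC": 100, "AB": 50, "A": 20, "B": 30, "none": 1}
answers = {"ABC": {"ABC", "AB", "A", "B", "none"},
           "AB": {"AB", "A", "B", "none"},
           "A": {"A", "none"}, "B": {"B", "none"}, "none": {"none"}}

def cheapest(view, materialized):
    """Cost of answering `view` = size of its smallest materialized ancestor."""
    return min(size[m] for m in materialized if view in answers[m])

def greedy_select(k):
    chosen = {"ABC"}                       # the root is always available
    for _ in range(k):
        def benefit(v):
            # Total query-cost reduction if v were materialized now.
            return sum(max(0, cheapest(w, chosen) - size[v])
                       for w in answers[v])
        best = max((v for v in size if v not in chosen), key=benefit)
        chosen.add(best)
    return chosen

print(sorted(greedy_select(1)))  # ['AB', 'ABC']
```

With one extra view allowed, "AB" wins because it cheapens four descendant views at once, which mirrors the trade-off the paper's lattice-based selection navigates.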
7

Nurieva, L. M., and S. G. Kiselev. "Workload and salary as determinants of pedagogical graduates’ employment by occupation." Education and science journal 23, no. 10 (December 15, 2021): 100–128. http://dx.doi.org/10.17853/1994-5639-2021-10-100-128.

Abstract:
Introduction. The problem of the effectiveness of the pedagogical education system remains one of the most discussed topics in modern professional discourse. A large proportion of leaders of the educational industry and representatives of the expert community are convinced that graduates of pedagogical universities and colleges are not sufficiently prepared for independent professional activity and deflect from work in their field. The refusal of graduates to be employed by occupation is associated mainly with the poor quality of students’ practical training in universities. Criticising the low attendance rates of young professionals in schools, analysts often ignore the conditions of employment in educational institutions. Meanwhile, the appearance in open access of the new statistical observation forms OO-1 and OO-2 makes it possible not only to track the results of employment of graduates of the vocational education system in schools across the country, but also to compare them with the conditions for hiring: the current workload on staff and the level of teachers’ salaries.

The aim of the present research was to find the dependence of the results of pedagogical graduates’ employment by occupation on the conditions of employment in schools: the level of teachers’ salaries and workload in the regions of Russia.

Methodology and research methods. The research methodological framework is a structural approach based on applied research procedures (observation, description, comparison, counting, measurement, modelling, etc.), according to which general scientific (comparative, retrospective analysis, systematisation, generalisation) and statistical research methods (statistical and correlation analysis, etc.) were employed. The analysis of official documents of educational authorities of different levels and educational institutions, scientific publications, and the federal statistical observation forms OO-1 and OO-2 was conducted. The processing of regional educational statistics was carried out using Online Analytical Processing technology, which makes it possible to obtain OLAP cubes and form analytical slices in accordance with emerging research tasks. The slices were studied in detail using the Analysis ToolPak add-in procedures and the statistical functions of the Excel library. In the course of the analysis, the authors calculated measures of the statistical relationship between the studied variables and visualised them graphically.

Results. This study established a high level of pedagogical labour market segmentation by territorial and qualification-age criteria of employees. The effects of influence on the part of state and public institutions and practices, leading to systematic discrimination of certain groups of educators, were revealed. Regional data provide examples of discrimination against young teachers in remuneration, both in the process of employment, due to the lack of qualification grades, and in the process of work, as a consequence of the inaccessibility of payments from the incentive part of the salary fund of schools. It is shown that improving the results of pedagogical graduates’ employment by occupation is related to improving the wage system, increasing the base rates, and reducing the intra-industry differentiation of earnings among workers of different ages.

Practical significance. The authors are convinced that this article will clarify the approaches for adjusting the mechanism for distributing the wage fund of schools, develop measures to attract young people to teaching, and ensure the elimination of the shortage of personnel in the education system.
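The OLAP-cube-and-slice workflow the abstract describes can be sketched in a few lines of pure Python: aggregate a fact list over chosen dimensions, then slice on one dimension value. The records and field names here are invented for illustration, not the study's data.

```python
from collections import defaultdict

facts = [
    {"region": "A", "year": 2020, "salary": 30},
    {"region": "A", "year": 2021, "salary": 34},
    {"region": "B", "year": 2020, "salary": 28},
]

def cube(facts, dims, measure):
    """Group-by aggregation: mean of `measure` per combination of `dims`."""
    groups = defaultdict(list)
    for f in facts:
        groups[tuple(f[d] for d in dims)].append(f[measure])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Build the (region, year) cube, then take the year-2020 analytical slice.
by_region_year = cube(facts, ("region", "year"), "salary")
slice_2020 = {k: v for k, v in by_region_year.items() if k[1] == 2020}
print(slice_2020)  # {('A', 2020): 30.0, ('B', 2020): 28.0}
```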
8

Wang, Jianying, Tongliang Li, Haoze Song, Xinjun Yang, Wenchao Zhou, Feifei Li, Baoyue Yan, et al. "PolarDB-IMCI: A Cloud-Native HTAP Database System at Alibaba." Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–25. http://dx.doi.org/10.1145/3589785.

Abstract:
Cloud-native databases have become the de facto choice for mission-critical applications on the cloud due to the need for high availability, resource elasticity, and cost efficiency. Meanwhile, driven by the increasing connectivity between data generation and analysis, users prefer a single database to efficiently process both OLTP and OLAP workloads, which enhances data freshness and reduces the complexity of data synchronization and the overall business cost. In this paper, we summarize five crucial design goals for a cloud-native HTAP database based on our experience and customers' feedback, i.e., transparency, competitive OLAP performance, minimal perturbation on OLTP workloads, high data freshness, and excellent resource elasticity. As our solution to realize these goals, we present PolarDB-IMCI, a cloud-native HTAP database system designed and deployed at Alibaba Cloud. Our evaluation results show that PolarDB-IMCI is able to handle HTAP efficiently on both experimental and production workloads; notably, it speeds up analytical queries by up to 149× on TPC-H (100GB). PolarDB-IMCI introduces low visibility delay and little performance perturbation on OLTP workloads (<5%), and resource elasticity can be achieved by scaling out in tens of seconds.
9

Lee, Juchang, SeungHyun Moon, Kyu Hwan Kim, Deok Hoe Kim, Sang Kyun Cha, and Wook-Shin Han. "Parallel replication across formats in SAP HANA for scaling out mixed OLTP/OLAP workloads." Proceedings of the VLDB Endowment 10, no. 12 (August 2017): 1598–609. http://dx.doi.org/10.14778/3137765.3137767.

10

Lee, Juchang, Wook-Shin Han, Hyoung Jun Na, Chang Gyoo Park, Kyu Hwan Kim, Deok Hoe Kim, Joo Yeon Lee, Sang Kyun Cha, and SeungHyun Moon. "Parallel replication across formats for scaling out mixed OLTP/OLAP workloads in main-memory databases." VLDB Journal 27, no. 3 (April 16, 2018): 421–44. http://dx.doi.org/10.1007/s00778-018-0503-z.

11

Kunkel, S., B. Armstrong, and P. Vitale. "System optimization for OLTP workloads." IEEE Micro 19, no. 3 (1999): 56–64. http://dx.doi.org/10.1109/40.768504.

12

Meena, Rajesh. "A Performance Evaluation of OLTP Workloads on Flash-Based SSDs." International Journal of Computer and Communication Engineering 3, no. 2 (2014): 116–19. http://dx.doi.org/10.7763/ijcce.2014.v3.303.

13

Tsuei, Thin-Fong, Allan N. Packer, and Keng-Tai Ko. "Database buffer size investigation for OLTP workloads." ACM SIGMOD Record 26, no. 2 (June 1997): 112–22. http://dx.doi.org/10.1145/253262.253279.

14

Zhang, Yansong, Yu Zhang, Jiaheng Lu, Shan Wang, Zhuan Liu, and Ruichen Han. "One size does not fit all: accelerating OLAP workloads with GPUs." Distributed and Parallel Databases 38, no. 4 (July 31, 2020): 995–1037. http://dx.doi.org/10.1007/s10619-020-07304-z.

15

Bhunje, Anagha, and Swati Ahirrao. "Workload aware incremental repartitioning of NoSQL for OLTP applications." International Journal of Internet of Things and Cyber-Assurance 1, no. 3/4 (2020): 214. http://dx.doi.org/10.1504/ijitca.2020.112511.

16

Bhunje, Anagha, and Swati Ahirrao. "Workload aware incremental repartitioning of NoSQL for OLTP applications." International Journal of Internet of Things and Cyber-Assurance 1, no. 3/4 (2020): 214. http://dx.doi.org/10.1504/ijitca.2020.10034661.

17

LEE, Ki-Hoon. "Performance Improvement of Database Compression for OLTP Workloads." IEICE Transactions on Information and Systems E97.D, no. 4 (2014): 976–80. http://dx.doi.org/10.1587/transinf.e97.d.976.

18

Chung, Yongwha, Hiecheol Kim, Jin-Won Park, and Kangwoo Lee. "Performance evaluation for CC-NUMA multiprocessors using an OLTP workload." Microprocessors and Microsystems 25, no. 4 (June 2001): 221–29. http://dx.doi.org/10.1016/s0141-9331(01)00115-6.

19

Elnaffar, Said, Pat Martin, Berni Schiefer, and Sam Lightstone. "Is it DSS or OLTP: automatically identifying DBMS workloads." Journal of Intelligent Information Systems 30, no. 3 (February 14, 2007): 249–71. http://dx.doi.org/10.1007/s10844-006-0036-6.

20

Shaikhha, Amir, Maximilian Schleich, and Dan Olteanu. "An intermediate representation for hybrid database and machine learning workloads." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2831–34. http://dx.doi.org/10.14778/3476311.3476356.

Abstract:
IFAQ is an intermediate representation and compilation framework for hybrid database and machine learning workloads expressible using iterative programs with functional aggregate queries. We demonstrate IFAQ for several OLAP queries, linear algebra expressions, and learning factorization machines over training datasets defined by feature extraction queries over relational databases.
21

Zhang, Yansong, Yu Zhang, Xuan Zhou, and Jiaheng Lu. "Main-memory foreign key joins on advanced processors: design and re-evaluations for OLAP workloads." Distributed and Parallel Databases 37, no. 4 (May 23, 2018): 469–506. http://dx.doi.org/10.1007/s10619-018-7226-4.

22

Kamal, Joarder, Manzur Murshed, and Rajkumar Buyya. "Workload-aware incremental repartitioning of shared-nothing distributed databases for scalable OLTP applications." Future Generation Computer Systems 56 (March 2016): 421–35. http://dx.doi.org/10.1016/j.future.2015.09.024.

23

Keeton, Kimberly, David A. Patterson, Yong Qiang He, Roger C. Raphael, and Walter E. Baker. "Performance characterization of a Quad Pentium Pro SMP using OLTP workloads." ACM SIGARCH Computer Architecture News 26, no. 3 (June 1998): 15–26. http://dx.doi.org/10.1145/279361.279364.

24

Bhunje, Anagha, and Swati Ahirrao. "Workload Aware Incremental Repartitioning of NoSQL for Online Transactional Processing Applications." International Journal of Advances in Applied Sciences 7, no. 1 (March 1, 2018): 54. http://dx.doi.org/10.11591/ijaas.v7.i1.pp54-65.

Abstract:
Numerous applications are deployed on the web with the increasing popularity of the internet, including banking, gaming, and e-commerce web applications. These applications rely on OLTP (online transaction processing) systems, which need to be scalable and require fast responses. Modern web applications generate huge amounts of data that a single machine and relational databases cannot handle, and e-commerce applications in particular face the challenge of improving system scalability. Data partitioning is used to improve scalability: the data is distributed among different machines, which results in an increasing number of distributed transactions. A workload-aware incremental repartitioning approach is used to balance the load among the partitions and to reduce the number of transactions that are distributed in nature. A hypergraph representation technique is used to represent the entire transactional workload in graph form. In this technique, frequently used items are collected and grouped using the fuzzy c-means clustering algorithm, and a tuple classification and migration algorithm is used to map clusters to partitions and then migrate tuples efficiently.
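The objective that workload-aware repartitioning minimises can be made concrete with a small sketch: given a tuple-to-partition placement, count the transactions whose tuples span more than one partition. The workload and placement below are invented for illustration.

```python
# Each transaction is (id, set of tuples it touches); placement maps
# each tuple to its partition.
workload = [("t1", {"a", "b"}), ("t2", {"c"}), ("t3", {"a", "c"})]
placement = {"a": 0, "b": 0, "c": 1}

def distributed(workload, placement):
    """Transactions whose tuples span more than one partition."""
    return [tid for tid, tuples in workload
            if len({placement[t] for t in tuples}) > 1]

print(distributed(workload, placement))  # ['t3']
```

Repartitioning schemes such as the one in this paper migrate tuples (here, moving "c" to partition 0 would empty this list) while keeping the per-partition load balanced.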
25

Suryana, N., M. S. Rohman, and F. S. Utomo. "PREDICTION BASED WORKLOAD PERFORMANCE EVALUATION FOR DISASTER MANAGEMENT SPATIAL DATABASE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W10 (September 12, 2018): 187–92. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w10-187-2018.

Abstract:
This paper discusses the implementation of a prediction-based workload performance evaluation during disaster management, especially in the response phase, to handle large spatial data in the event of an eruption of the Merapi volcano in Indonesia. The complexity associated with a large spatial database is not the same as in a conventional database: incoming complex workloads are difficult for humans to handle, require longer processing time, and may lead to failure. Based on the incoming workload, this study predicts whether it belongs to the OLTP or DSS workload performance type. From the SQL statements, the DBMS can obtain and record the process, measure the analysed performance, and capture the workload classifier in the form of DBMS snapshots. Case-Based Reasoning (CBR) optimised with a hash search technique is adopted in this study to evaluate and predict the workload performance of PostgreSQL. The proposed CBR with hash search yields better prediction accuracy than other machine learning algorithms such as neural networks and support vector machines, and evaluation using a confusion matrix shows very good accuracy as well as an improvement in execution time. Additionally, the study applies the prediction model to workload data for shortest-path analysis using Dijkstra's algorithm. The model can predict the incoming workload based on the status of predetermined DBMS parameters, delivering to the DBMS the incoming-workload information that is crucial for the smooth operation of PostgreSQL.
26

Xudong, Zhu, Yin Yang, Liu Zhenjun, and Shao Fang. "C-Aware: A Cache Management Algorithm Considering Cache Media Access Characteristic in Cloud Computing." Mathematical Problems in Engineering 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/867167.

Abstract:
Data congestion and network delay are important factors that affect the performance of cloud computing systems. Using the local disk of computing nodes as a cache can sometimes give better performance than accessing data through the network. This paper presents a storage cache placement algorithm, C-Aware, which traces the access history of the cache and the data source, adaptively decides whether to cache data according to the cache media characteristics and the current access environment, and achieves good performance under different workloads on the storage server. We implement this algorithm in both simulated and real environments. Our simulation results using OLTP and WebSearch traces demonstrate that C-Aware adapts better to changes in server workload. Our benchmark results in a real system show that, in a scenario where the size of the local cache is half the data set, C-Aware achieves nearly 80% improvement over traditional methods when the server is not busy, and still presents comparable performance when there is a high workload on the server side.
27

FAYGANOĞLU, Pınar, Rukiye CAN YALÇIN, and Memduh BEĞENİRBAŞ. "THE EFFECT OF ROLE AMBIGUITY ON ALIENATION FROM WORK: MEDIATOR ROLE OF WORKLOAD." Gaziantep University Journal of Social Sciences 21, no. 4 (October 19, 2022): 2239–57. http://dx.doi.org/10.21547/jss.1110883.

Abstract:
The aim of this study is to determine whether excessive workload plays a moderating role in the relationship between employees' perceptions of role ambiguity and work alienation. To this end, data obtained through a questionnaire from 312 people working at universities, banks, health institutions, and various security-service organisations operating in Ankara were analysed. In the analyses, confirmatory factor analysis of the scales used in the study was first performed with AMOS; then, correlation analysis between the variables and hierarchical regression analysis were carried out to test the hypotheses put forward in the study, and a regression analysis based on the Bootstrap method was performed with the SPSS Process v3.5 macro to determine whether excessive workload plays a moderating role. The findings reveal that employees' perceptions of role ambiguity significantly and positively affect their level of work alienation, and that perceived excessive workload has a moderating effect on the relationship between role ambiguity and work alienation.
28

Zheng, Nan, and Zachary G. Ives. "Compact, tamper-resistant archival of fine-grained provenance." Proceedings of the VLDB Endowment 14, no. 4 (December 2020): 485–97. http://dx.doi.org/10.14778/3436905.3436909.

Abstract:
Data provenance tools aim to facilitate reproducible data science and auditable data analyses, by tracking the processes and inputs responsible for each result of an analysis. Fine-grained provenance further enables sophisticated reasoning about why individual output results appear or fail to appear. However, for reproducibility and auditing, we need a provenance archival system that is tamper-resistant, and efficiently stores provenance for computations computed over time (i.e., it compresses repeated results). We study this problem, developing solutions for storing fine-grained provenance in relational storage systems while both compressing and protecting it via cryptographic hashes. We experimentally validate our proposed solutions using both scientific and OLAP workloads.
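The tamper-resistance the abstract mentions rests on cryptographic hashing; a minimal sketch of one standard construction, a hash chain in which each archived record commits to its predecessor. This illustrates the general idea only, not the paper's actual scheme, and the record contents are invented.

```python
import hashlib
import json

def chain(records):
    """Pair each record with a digest over (previous digest, record)."""
    prev, out = "0" * 64, []
    for rec in records:
        digest = hashlib.sha256(
            (prev + json.dumps(rec, sort_keys=True)).encode()).hexdigest()
        out.append((rec, digest))
        prev = digest
    return out

def verify(chained):
    """Recompute the chain; any edited record breaks every later link."""
    prev = "0" * 64
    for rec, digest in chained:
        expect = hashlib.sha256(
            (prev + json.dumps(rec, sort_keys=True)).encode()).hexdigest()
        if digest != expect:
            return False
        prev = digest
    return True

log = chain([{"op": "load", "rows": 10}, {"op": "join", "rows": 7}])
print(verify(log))          # True
log[0][0]["rows"] = 99      # tamper with an archived record
print(verify(log))          # False
```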
29

Appuswamy, Raja, Angelos C. Anadiotis, Danica Porobic, Mustafa K. Iman, and Anastasia Ailamaki. "Analyzing the impact of system architecture on the scalability of OLTP engines for high-contention workloads." Proceedings of the VLDB Endowment 11, no. 2 (October 2017): 121–34. http://dx.doi.org/10.14778/3149193.3149194.

30

Appuswamy, Raja, Angelos C. Anadiotis, Danica Porobic, Mustafa K. Iman, and Anastasia Ailamaki. "Analyzing the impact of system architecture on the scalability of OLTP engines for high-contention workloads." Proceedings of the VLDB Endowment 11, no. 2 (October 2017): 121–34. http://dx.doi.org/10.14778/3167892.3167893.

31

Anjum, Asma, and Asma Parveen. "Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 2 (July 1, 2023): 276. http://dx.doi.org/10.11591/ijres.v12.i2.pp276-286.

Abstract:
Cloud computing gives on-demand access to computing resources in a metered and dynamically adapted way; it empowers clients to access fast and flexible resources through virtualization and is widely adaptable for various applications. Further, to assure productive computation, task scheduling is very important in a cloud infrastructure environment, and the main aim of task execution is to reduce execution time and conserve infrastructure. For large applications, workflow scheduling has drawn considerable attention in business as well as scientific areas. Hence, in this research work, we design and develop an optimized load balancing in parallel computing (OLBP) mechanism to distribute the load: first, different parameters of the workload are computed, and then loads are distributed. The OLBP mechanism treats makespan time and energy as constraints and offloads tasks considering server speed. This balances the workflow; the OLBP mechanism is evaluated using the CyberShake workflow dataset and outperforms existing workflow mechanisms.
32

Sirin, Utku, Pınar Tözün, Danica Porobic, Ahmad Yasin, and Anastasia Ailamaki. "Micro-architectural analysis of in-memory OLTP: Revisited." VLDB Journal 30, no. 4 (March 31, 2021): 641–65. http://dx.doi.org/10.1007/s00778-021-00663-8.

Full text
Abstract:
Micro-architectural behavior of traditional disk-based online transaction processing (OLTP) systems has been investigated extensively over the past couple of decades. Results show that traditional OLTP systems mostly under-utilize the available micro-architectural resources. In-memory OLTP systems, on the other hand, process all the data in main-memory and, therefore, can omit the buffer pool. Furthermore, they usually adopt more lightweight concurrency control mechanisms, cache-conscious data structures, and cleaner codebases since they are usually designed from scratch. Hence, we expect significant differences in micro-architectural behavior when running OLTP on platforms optimized for in-memory processing as opposed to disk-based database systems. In particular, we expect that in-memory systems exploit micro-architectural features such as instruction and data caches significantly better than disk-based systems. This paper sheds light on the micro-architectural behavior of in-memory database systems by analyzing and contrasting it to the behavior of disk-based systems when running OLTP workloads. The results show that, despite all the design changes, in-memory OLTP exhibits very similar micro-architectural behavior to disk-based OLTP: more than half of the execution time goes to memory stalls where instruction cache misses or the long-latency data misses from the last-level cache (LLC) are the dominant factors in the overall execution time. Even though ground-up designed in-memory systems can eliminate the instruction cache misses, the reduction in instruction stalls amplifies the impact of LLC data misses. As a result, only 30% of the CPU cycles are used to retire instructions, and 70% of the CPU cycles are wasted to stalls for both traditional disk-based and new generation in-memory OLTP.
APA, Harvard, Vancouver, ISO, and other styles
33

An, Mijin, Jonghyeok Park, Tianzheng Wang, Beomseok Nam, and Sang-Won Lee. "NV-SQL: Boosting OLTP Performance with Non-Volatile DIMMs." Proceedings of the VLDB Endowment 16, no. 6 (February 2023): 1453–65. http://dx.doi.org/10.14778/3583140.3583159.

Full text
Abstract:
When running OLTP workloads, relational DBMSs with flash SSDs still suffer from the durability overhead. Heavy writes to SSD not only limit the performance but also shorten the storage lifespan. To mitigate the durability overhead, this paper proposes a new database architecture, NV-SQL. NV-SQL aims at absorbing a large fraction of writes written from DRAM to SSD by introducing NVDIMM into the memory hierarchy as a durable write cache. On the new architecture, NV-SQL makes two technical contributions. First, it proposes the re-update interval-based admission policy that determines which write-hot pages qualify for being cached in NVDIMM. It is novel in that the page hotness is based solely on pages' LSN. Second, this study finds that NVDIMM-resident pages can violate the page action consistency upon crash and proposes how to detect inconsistent pages using per-page in-update flag and how to rectify them using the redo log. NV-SQL demonstrates how the ARIES-like logging and recovery techniques can be elegantly extended to support the caching and recovery for NVDIMM data. Additionally, by placing write-intensive redo buffer and DWB in NVDIMM, NV-SQL eliminates the log-force-at-commit and WAL protocols and further halves the writes to the storage. Our NV-SQL prototype running with a real NVDIMM device outperforms the same-priced vanilla MySQL with larger DRAM by several folds in terms of transaction throughput for write-intensive OLTP benchmarks. This confirms that NV-SQL is a cost-performance efficient solution to the durability problem.
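The LSN-based admission idea can be sketched as follows. The class name, API, and threshold handling here are hypothetical simplifications, not NV-SQL's actual code: a page is admitted to the NVDIMM cache when its two most recent update LSNs are close together, i.e., it is re-dirtied frequently.

```python
# Hypothetical sketch of re-update interval-based admission: a page whose
# latest update follows its previous one within `threshold` LSNs counts as
# write-hot and qualifies for the NVDIMM write cache.

class AdmissionPolicy:
    def __init__(self, threshold):
        self.threshold = threshold   # maximum LSN gap for a write-hot page
        self.last_lsn = {}           # page id -> LSN of the previous update

    def on_update(self, page_id, lsn):
        """Return True if the page qualifies for NVDIMM caching."""
        prev = self.last_lsn.get(page_id)
        self.last_lsn[page_id] = lsn
        if prev is None:
            return False             # hotness unknown yet: do not admit
        return (lsn - prev) <= self.threshold
```

The appeal of the approach, as the abstract notes, is that hotness is derived solely from LSNs the engine already maintains, so no extra access statistics are needed.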
APA, Harvard, Vancouver, ISO, and other styles
34

Yan, Baoyue, Xuntao Cheng, Bo Jiang, Shibin Chen, Canfang Shang, Jianying Wang, Gui Huang, Xinjun Yang, Wei Cao, and Feifei Li. "Revisiting the design of LSM-tree Based OLTP storage engine with persistent memory." Proceedings of the VLDB Endowment 14, no. 10 (June 2021): 1872–85. http://dx.doi.org/10.14778/3467861.3467875.

Full text
Abstract:
The recent byte-addressable and large-capacity commercialized persistent memory (PM) is promising to drive database as a service (DBaaS) into uncharted territory. This paper investigates how to leverage PMs to revisit the conventional LSM-tree based OLTP storage engines designed for the DRAM-SSD hierarchy for DBaaS instances. Specifically, we (1) propose a lightweight PM allocator named Halloc customized for LSM-tree, (2) build a high-performance Semi-persistent Memtable utilizing the persistent in-memory writes of PM, (3) design a concurrent commit algorithm named Reorder Ring to achieve log-free transaction processing for OLTP workloads and (4) present a Global Index as the new globally sorted persistent level with non-blocking in-memory compaction. The design of Reorder Ring and Semi-persistent Memtable achieves fast writes without synchronized logging overheads and achieves near-instant recovery time. Moreover, the design of Semi-persistent Memtable and Global Index with in-memory compaction enables the byte-addressable persistent levels in PM, which significantly reduces the read and write amplification as well as the background compaction overheads. The overall evaluation shows that the performance of our proposal over the PM-SSD hierarchy outperforms the baseline by up to 3.8x in the YCSB benchmark and by 2x in the TPC-C benchmark.
APA, Harvard, Vancouver, ISO, and other styles
35

Bağcı, Halit. "Eğitim Kurumlarında Görevli Öğretmenlerin Stres Yaşamalarına Neden Olan Faktörlerin Veri Analizinin İncelenmesi." Journal of Social Research and Behavioral Sciences 9, no. 19 (June 25, 2023): 164–74. http://dx.doi.org/10.52096/jsrbs.9.19.12.

Full text
Abstract:
The aim of this research is to investigate, through data analysis, the factors that cause teachers working in educational institutions to experience stress. Teachers may experience professional burnout as understandings of education and training change in parallel with various social, economic and technological developments. When the literature is examined, the most important basic factors affecting teacher stress appear to be personal, environmental and professional. Teachers may experience occupational stress due to intense problems such as student-teacher and school-family conflicts in education and training services, negativity in relationships with colleagues, disciplinary problems, inadequate physical conditions, excessive bureaucratic work, criticism from society, crowded classrooms, developmental delays in students, social and political pressures on educational institutions, rapid changes in the curriculum, workload overload, insufficient salary, lack of administrative support, lack of appreciation from administrators and inspectors, the belief that the teacher has lost control of the classroom, inability to participate in management-related decisions, doubts about professional competence, inability to perceive success, low levels of colleague and management support, role conflict and role ambiguity, and student insensitivity, apathy, and disruptive or damaging behaviour. Within this framework, the study aims to identify the sources of institutional stress of a total of 54 teachers working in educational institutions affiliated with the Ministry of National Education in Istanbul and to determine whether this situation differs according to various variables. The universe of the research consists of teachers working in educational institutions affiliated with the Ministry of National Education in Istanbul.
In the research, the data were collected by means of scale forms. The first part of the scale form covers individual information, and the second part contains a rubric on the stress-coping scale. In the evaluation of the data, descriptive statistical methods were used, along with the SPSS 22.0 statistical analysis program for the scale data: frequency, percentage (%), t-test and one-way ANOVA. As a result of the analysis, it was determined that the institutional stress levels of the participating teachers were slightly higher than the general average, unlike the results obtained from studies in the literature. Key Words: Educational Institutions, Teachers, Stress, Data Analysis
APA, Harvard, Vancouver, ISO, and other styles
36

Baruah, Nirvik, Peter Kraft, Fiodar Kazhamiaka, Peter Bailis, and Matei Zaharia. "Parallelism-Optimizing Data Placement for Faster Data-Parallel Computations." Proceedings of the VLDB Endowment 16, no. 4 (December 2022): 760–71. http://dx.doi.org/10.14778/3574245.3574260.

Full text
Abstract:
Systems performing large data-parallel computations, including online analytical processing (OLAP) systems like Druid and search engines like Elasticsearch, are increasingly being used for business-critical real-time applications where providing low query latency is paramount. In this paper, we investigate an underexplored factor in the performance of data-parallel queries: their parallelism. We find that to minimize the tail latency of data-parallel queries, it is critical to place data such that the data items accessed by each individual query are spread across as many machines as possible so that each query can leverage the computational resources of as many machines as possible. To optimize parallelism and minimize tail latency in real systems, we develop a novel parallelism-optimizing data placement algorithm that defines a linearly-computable measure of query parallelism, uses it to frame data placement as an optimization problem, and leverages a new optimization problem partitioning technique to scale to large cluster sizes. We apply this algorithm to popular systems such as Solr and MongoDB and show that it reduces p99 latency by 7-64% on data-parallel workloads.
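The placement objective can be illustrated with a toy greedy heuristic. This is a deliberate simplification: the paper frames placement as an optimization problem with a linearly computable parallelism measure, which this sketch does not reproduce. Here, each item simply goes to the machine currently holding the fewest items that any query co-accesses with it, so a query's items spread across machines.

```python
# Toy greedy spread-maximizing placement (illustrative only).

def place(items, queries, n_machines):
    placement = {}                   # item -> machine index
    for item in items:
        # items that appear together with `item` in at least one query
        partners = {x for q in queries if item in q for x in q if x != item}
        def conflict(m):
            return sum(1 for x, mx in placement.items()
                       if mx == m and x in partners)
        # fewest co-accessed items wins; ties break toward lower machine ids
        placement[item] = min(range(n_machines), key=lambda m: (conflict(m), m))
    return placement
```

With two queries {a, b} and {c, d} on two machines, each query's two items end up on different machines, so both machines' resources can serve either query in parallel.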
APA, Harvard, Vancouver, ISO, and other styles
37

Ahmadi, Mohammad Reza. "Performance Evaluation of Virtualization Techniques for Control and Access of Storage Systems in Data Center Applications." Journal of Electrical Engineering 64, no. 5 (September 1, 2013): 272–82. http://dx.doi.org/10.2478/jee-2013-0040.

Full text
Abstract:
Virtualization is a new technology that creates virtual environments based on existing physical resources. This article evaluates the effect of virtualization techniques on control servers and access methods in storage systems [1, 2]. For control server virtualization, we present a tile-based evaluation on heterogeneous workloads to compare several key parameters and demonstrate the effectiveness of virtualization techniques. Moreover, we evaluate the virtualized model using VMotion techniques and maximum consolidation. For access methods, we prepare three different scenarios using direct, semi-virtual, and virtual attachment models. We evaluate the proposed models with several workloads including OLTP database, data streaming, file server, web server, etc. Evaluation results for different criteria confirm that the server virtualization technique delivers high throughput and CPU usage as well as good performance with noticeable agility. The virtual technique is also a successful alternative for accessing storage systems, especially in large-capacity systems, and can therefore be an effective solution for expanding storage area and reducing access time. Results of different evaluations and measurements demonstrate that virtualization in the control server and fully virtual access provide better performance, more agility, and higher utilization, and improve business continuity.
APA, Harvard, Vancouver, ISO, and other styles
38

Mehta, Mayuri A., and Devesh C. Jinwala. "A Hybrid Dynamic Load Balancing Algorithm for Distributed Systems Using Genetic Algorithms." International Journal of Distributed Systems and Technologies 5, no. 3 (July 2014): 1–23. http://dx.doi.org/10.4018/ijdst.2014070101.

Full text
Abstract:
Dynamic Load Balancing (DLB) is sine qua non in modern distributed systems to ensure the efficient utilization of computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing. The framework uses a Genetic Algorithm (GA) based supernode selection approach. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from the data set, thereby essentially improving the speed of the load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of the hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: average response time, average round trip time, and average completion time of the users. Further, it also evaluates the performance of the hybrid algorithm utilizing OnLine Transaction Processing (OLTP) benchmark and Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, and/or CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves the performance under different system states and under a wide range of workloads compared to a traditional decentralized algorithm.
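A minimal GA for supernode selection might look like the sketch below. Everything here is an assumption made for illustration: the chromosome encoding (a set of k node ids), elitist survival, and single-gene swap mutation are this sketch's choices, not the paper's actual operators or fitness function.

```python
import random

# Toy GA for choosing k lightly loaded supernodes (illustrative only).

def select_supernodes(loads, k, generations=50, pop_size=20, seed=42):
    rng = random.Random(seed)                 # seeded for reproducibility
    nodes = list(range(len(loads)))

    def fitness(ch):
        return -sum(loads[n] for n in ch)     # lower total load is better

    pop = [tuple(rng.sample(nodes, k)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # elitism: keep the best half
        children = []
        for parent in survivors:
            child = list(parent)
            slot = rng.randrange(k)           # gene to replace
            child[slot] = rng.choice([n for n in nodes if n not in child])
            children.append(tuple(child))
        pop = survivors + children
    return set(max(pop, key=fitness))
```

Because the best chromosomes always survive, the selected set's total load can only improve across generations, which mirrors the paper's point that GA-based selection steers quickly toward optimally loaded nodes.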
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Junru, Youyou Lu, Yiming Zhang, Qing Wang, Zhuo Cheng, Keji Huang, and Jiwu Shu. "SwitchTx." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2881–94. http://dx.doi.org/10.14778/3551793.3551838.

Full text
Abstract:
Online-transaction-processing (OLTP) applications require the underlying storage system to guarantee consistency and serializability for distributed transactions involving large numbers of servers, which tends to introduce high coordination cost and cause low system performance. In-network coordination is a promising approach to alleviate this problem, which leverages programmable switches to move a piece of coordination functionality into the network. This paper presents a fast and scalable transaction processing system called SwitchTx. At the core of SwitchTx is a decentralized multi-switch in-network coordination mechanism, which leverages modern switches' programmability to reduce coordination cost while avoiding the central-switch-caused problems in the state-of-the-art Eris transaction processing system. SwitchTx abstracts various coordination tasks (e.g., locking, validating, and replicating) as in-switch gather-and-scatter (GaS) operations, and offloads coordination to a tree of switches for each transaction (instead of to a central switch for all transactions) where the client and the participants connect to the leaves. Moreover, to control the transaction traffic intelligently, SwitchTx reorders the coordination messages according to their semantics and redesigns the congestion control combined with admission control. Evaluation shows that SwitchTx outperforms current transaction processing systems in various workloads by up to 2.16X in throughput, 40.4% in latency, and 41.5% in lock time.
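The gather-and-scatter (GaS) abstraction can be modeled in a few lines. This is a logical model only, not a P4 switch program: participant votes are AND-combined up a tree of switches, and the single commit/abort decision is then scattered back down instead of every participant exchanging messages with a central coordinator.

```python
# Logical model of one GaS round over a switch tree (illustrative).
# A node is either ('leaf', vote) or ('switch', [children]).

def gather(node):
    kind, payload = node
    if kind == 'leaf':
        return payload                       # a participant's lock/validate vote
    return all(gather(child) for child in payload)

def gather_and_scatter(root):
    """Gather votes bottom-up; the scattered decision is commit iff all voted yes."""
    return 'commit' if gather(root) else 'abort'
```

A per-transaction tree like this is what lets SwitchTx avoid funneling all transactions through one central switch.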
APA, Harvard, Vancouver, ISO, and other styles
40

Gönenç, İlknur Münevver, and Nazan Çakırer Çalbayram. "Contributions of pregnancy school program, opinions of women on the education and their post-education experiencesGebelerin, gebe okulu programı hakkındaki görüşleri ve eğitim sonrası deneyimleri." Journal of Human Sciences 14, no. 2 (May 9, 2017): 1609. http://dx.doi.org/10.14687/jhs.v14i2.4424.

Full text
Abstract:
The purpose was to find out pregnant women's opinions on the education given in the antenatal period, their post-education experiences, and the program's contributions. This research was conducted retrospectively. The study was completed with 40 pregnant women who took part in the pregnancy school program and met the study criteria. The research data were collected using a data collection form developed by the investigators. 95% of the women stated that the education they received was helpful during pregnancy, 72.5% during delivery, and all of them during the postpartum period. 52.5% of the women stated that they experienced fear of labour, and nearly all of those explained that they overcame this fear using the knowledge gained from the pregnancy school program. 70% of the women found the education satisfactory; the others proposed that some topics, especially coping with accidents during pregnancy, should be covered in more detail, and wished that husbands would also participate in the education. The research found that the pregnancy school program made contributions during pregnancy, delivery, and the postpartum period, and we think it has an important role in solving obstetric problems. The pregnancy school program will increase women's adaptation to pregnancy and decrease the workload of healthcare professionals.
APA, Harvard, Vancouver, ISO, and other styles
41

Pandis, Ippokratis. "The evolution of Amazon redshift." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3162–74. http://dx.doi.org/10.14778/3476311.3476391.

Full text
Abstract:
In 2013, Amazon Web Services revolutionized the data warehousing industry by launching Amazon Redshift [7], the first fully managed, petabyte-scale enterprise-grade cloud data warehouse. Amazon Redshift made it simple and cost-effective to efficiently analyze large volumes of data using existing business intelligence tools. This launch was a significant leap from the traditional on-premise data warehousing solutions, which were expensive, not elastic, and required significant expertise to tune and operate. Customers embraced Amazon Redshift and it became the fastest growing service in AWS. Today, tens of thousands of customers use Amazon Redshift in AWS's global infrastructure of 25 launched Regions and 81 Availability Zones (AZs), to process exabytes of data daily. The success of Amazon Redshift inspired a lot of innovation in the analytics segment, e.g. [1, 2, 4, 10], which in turn has benefited customers. In the last few years, the use cases for Amazon Redshift have evolved and in response, Amazon Redshift continues to deliver a series of innovations that delight customers. In this paper, we give an overview of Amazon Redshift's system architecture. Amazon Redshift is a columnar MPP data warehouse [7]. As shown in Figure 1, an Amazon Redshift compute cluster consists of a coordinator node, called the leader node , and multiple compute nodes . Data is stored on Redshift Managed Storage , backed by Amazon S3, and cached in compute nodes on locally-attached SSDs in compressed columnar fashion. Tables are either replicated on every compute node or partitioned into multiple buckets that are distributed among all compute nodes. AQUA is a query acceleration layer that leverages FPGAs to improve performance. CaaS is a caching microservice of optimized generated code for the various query fragments executed in the Amazon Redshift fleet. The innovation at Amazon Redshift continues at accelerated pace. Its development is centered around four streams. 
First, Amazon Redshift strives to provide industry-leading data warehousing performance. Amazon Redshift's query execution blends database operators in each query fragment via code generation. It combines prefetching and vectorized execution with code generation to achieve maximum efficiency. This allows Amazon Redshift to scale linearly when processing from a few terabytes to petabytes of data. Figure 2 depicts the total execution time of the Cloud Data Warehouse Benchmark Derived from TPC-DS 2.13 [6] while scaling dataset size and hardware simultaneously. Amazon Redshift's performance remains nearly flat for a given ratio of data to hardware, as data volume increases from 30TB to 1PB. This linear scaling to the petabyte scale makes it easy, predictable and cost-efficient for customers to on-board new datasets and workloads. Second, customers needed to process more data and wanted to support an increasing number of concurrent users or independent compute clusters that are operating over the Redshift-managed data and the data in Amazon S3. We present Redshift Managed Storage, Redshift's high-performance transactional storage layer, which is disaggregated from the Redshift compute layer and allows a single database to grow to tens of petabytes. We also describe Redshift's compute scaling capabilities. In particular, we present how Redshift can scale up by elastically resizing the size of each cluster, and how Redshift can scale out and increase its throughput via multi-cluster autoscaling, called Concurrency Scaling. With Concurrency Scaling, customers can have thousands of concurrent users executing queries on the same Amazon Redshift endpoint. We also talk about data sharing, which allows users to have multiple isolated compute clusters consume the same datasets in Redshift Managed Storage. Elastic resizing, concurrency scaling and data sharing can be combined giving multiple compute scaling options to the Amazon Redshift customers. 
Third, as Amazon Redshift became the most widely used cloud data warehouse, its users wanted it to be even easier to use. For that, Redshift introduced ML-based autonomics. We present how Redshift automated, among others, workload management, physical tuning, the refresh of materialized views (MVs), along with automated MVs-based optimization that rewrites queries to use MVs. We also present how we leverage ML to improve the operational health of the service and deal with gray failures [8]. Finally, as AWS offers a wide range of purpose-built services, Amazon Redshift provides seamless integration with the AWS ecosystem and novel abilities in ingesting and ELTing semistructured data (e.g., JSON) using the PartiQL extension of SQL [9]. AWS purpose-built services include the Amazon S3 object storage, transactional databases (e.g., DynamoDB [5] and Aurora [11]) and the ML services of Amazon Sagemaker. We present how AWS and Redshift make it easy for their customers to use the best service for each job and seamlessly take advantage of Redshift's best of class analytics capabilities. For example, we talk about Redshift Spectrum [3] that allows Redshift to query data in open-file formats in Amazon S3. We present how Redshift facilitates both the in-place querying of data in OLTP services, using Redshift's Federated Querying, as well as the copy of data to Redshift, using Glue Elastic Views. We also present how Redshift can leverage the capabilities of Amazon Sagemaker through SQL and without data movement.
APA, Harvard, Vancouver, ISO, and other styles
42

"HTAP With Reactive Streaming ETL." Journal of Cases on Information Technology 23, no. 4 (October 2021): 0. http://dx.doi.org/10.4018/jcit.20211001oa05.

Full text
Abstract:
In database management systems (DBMSs), query workloads can be classified as online transactional processing (OLTP) or online analytical processing (OLAP). These often run within separate DBMSs. In hybrid transactional and analytical processing (HTAP), both workloads may execute within the same DBMS. This article shows that it is possible to run separate OLTP and OLAP DBMSs, and still support timely business decisions from analytical queries running off fresh transactional data. Several setups to manage OLTP and OLAP workloads are analysed. Then, benchmarks on two industry standard DBMSs empirically show that, under an OLTP workload, a row-store DBMS sustains a 1000 times higher throughput than a columnar DBMS, whilst OLAP queries are more than 4 times faster on a columnar DBMS. Finally, a reactive streaming ETL pipeline is implemented which connects these two DBMSs. Separate benchmarks show that OLTP events can be streamed to an OLAP database within a few seconds.
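A toy model of such a pipeline, with committed OLTP row events streamed through a queue into a column-oriented OLAP representation, is sketched below. This illustrates only the hand-off pattern; the article's pipeline connects two real DBMSs with reactive streams, which this sketch does not attempt.

```python
import queue
import threading

# Toy streaming ETL hand-off (illustrative): committed row events are pushed
# onto a queue by the "OLTP side" and a consumer thread folds them into a
# column-oriented store on the "OLAP side".

def run_pipeline(rows):
    q = queue.Queue()
    columns = {}                              # column name -> list of values

    def consume():
        while True:
            row = q.get()
            if row is None:                   # sentinel: stream finished
                return
            for col, val in row.items():
                columns.setdefault(col, []).append(val)

    t = threading.Thread(target=consume)
    t.start()
    for row in rows:
        q.put(row)                            # "commit" streams the row out
    q.put(None)
    t.join()
    return columns
```

The FIFO queue decouples the two sides while preserving commit order, which is the property that lets analytical queries see fresh transactional data within seconds rather than after a batch ETL window.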
APA, Harvard, Vancouver, ISO, and other styles
43

"Initial Optimization Techniques for the Cube Algebra Query Language." International Journal of Data Warehousing and Mining 18, no. 1 (January 2022): 0. http://dx.doi.org/10.4018/ijdwm.299016.

Full text
Abstract:
A common model used in addressing today's overwhelming amounts of data is the OLAP Cube. The OLAP community has proposed several cube algebras, although a standard has still not been nominated. This study focuses on a recent addition to the cube algebras: the user-centric Cube Algebra Query Language (CAQL). The study aims to explore the optimization potential of this algebra by applying logical rewriting inspired by classic relational algebra and parallelism. The lack of standard algebra is often cited as a problem in such discussions. Thus, the significance of this work is that of strengthening the position of this algebra within the OLAP algebras by addressing implementation details. The modern open-source PostgreSQL relational engine is used to encode the CAQL abstraction. A query workload based on a well-known dataset is adopted, and CAQL and SQL implementations are compared. Finally, the quality of the query created is evaluated through the observed performance characteristics of the query. Results show strong improvements over the baseline case of the unoptimized query.
APA, Harvard, Vancouver, ISO, and other styles
44

Beier, Felix, and Knut Stolze. "Architecture of a data analytics service in hybrid cloud environments." it - Information Technology 59, no. 3 (January 1, 2017). http://dx.doi.org/10.1515/itit-2016-0050.

Full text
Abstract:
DB2 for z/OS is the backbone of many transactional systems in the world. IBM DB2 Analytics Accelerator (IDAA) is IBM's approach to enhance DB2 for z/OS with very fast processing of OLAP and analytical SQL workloads. While IDAA was originally designed as an appliance to be connected directly to System z, the trend in the IT industry is towards cloud environments, which offer a broad range of tools for analytical data processing tasks. This article presents the architecture for offering a hybrid IDAA, which continues the seamless integration with DB2 for z/OS and now also runs as a specialty engine in cloud environments. Both approaches have their merit and will remain important for customers in the next years. The specific challenges of accelerating query processing for relational data in the cloud are highlighted. Specialized hardware options are not readily available, and that has a direct impact on the system architecture, the offered functionality and its implementation.
APA, Harvard, Vancouver, ISO, and other styles
45

Suharjito, Suharjito, and Adrianus B. Kurnadi. "OLTP PERFORMANCE IMPROVEMENT USING FILE-SYSTEMS LAYER COMPRESSION." Jurnal Teknologi 79, no. 4 (April 27, 2017). http://dx.doi.org/10.11113/jt.v79.8883.

Full text
Abstract:
Databases for Online Transaction Processing (OLTP) applications are used by almost every corporation that has adopted computerisation to support its day-to-day business operations. Compression in the storage or file-system layer has not been widely adopted for OLTP databases because of the concern that it might decrease database performance. OLTP compression in the database layer is available commercially, but it has a significant licence cost that reduces the cost saving of compression. In this research, transparent file-system compression with the LZ4, LZJB and ZLE algorithms was tested to improve the performance of OLTP applications. Using Swingbench as the benchmark tool and Oracle Database 12c, the results indicated that on an OLTP workload, LZJB was the most optimal compression algorithm, with a performance improvement of up to 49% and a consistent reduction of maximum response time and CPU utilisation overhead, while LZ4 was the compression with the highest compression ratio and ZLE was the compression with the lowest CPU utilisation overhead. In terms of compression ratio, LZ4 delivered the highest compression ratio, 5.32; followed by LZJB, 4.92; and ZLE, 1.76. Furthermore, it was found that there is indeed a risk of reduced performance and/or an increase in maximum response time.
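The reported compression ratios translate directly into space savings; this is simple arithmetic added here for convenience, not a figure from the paper: saving = 1 - 1/ratio.

```python
# Space saved as a fraction of the original size, given a compression ratio
# (ratio = uncompressed size / compressed size).

def space_saving(ratio):
    return 1.0 - 1.0 / ratio
```

So LZ4's ratio of 5.32 saves about 81% of the space, LZJB's 4.92 about 80%, and ZLE's 1.76 about 43%.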
APA, Harvard, Vancouver, ISO, and other styles
46

TOSUN, Samet, and İbrahim YILMAZ. "MEASURING THE MENTAL WORKLOAD OF SPECIALISTS AND PRACTITIONERS WITH THE CARMEN-Q METHOD AND EVALUATING THE DIFFERENCES." Ergonomi, July 13, 2023. http://dx.doi.org/10.33439/ergonomi.1272038.

Full text
Abstract:
Mental workload is defined as the difference between the information-processing capacity required to meet performance expectations when carrying out a task and the capacity that can actually be delivered within a given time interval. This study aimed to assess the mental workload of 68 specialist and practitioner physicians working in the provinces of Tokat and Sivas, to compare the workloads of specialists and practitioners, and to determine whether specialists' workloads differ significantly across surgical, internal medicine and basic medical science departments. An online questionnaire was administered to the participating physicians, and content analysis was applied when evaluating the data obtained from the questionnaires. The questionnaire items were prepared using the CarMen-Q Mental Workload Scale. The measurement method consists of 29 items in 4 sub-dimensions: cognitive workload, temporal workload, performance-related workload and emotional workload. The internal consistency of the study was examined with Cronbach's alpha, calculated as a = 0.96. The physicians' highest mental workload sub-dimension was found to be performance-related workload, and the lowest was temporal workload. This is the first study in the literature to apply the CarMen-Q mental workload assessment method to physicians.
APA, Harvard, Vancouver, ISO, and other styles
47

Bang, Tiemo, Norman May, Ilia Petrov, and Carsten Binnig. "The full story of 1000 cores." VLDB Journal, April 29, 2022. http://dx.doi.org/10.1007/s00778-022-00742-4.

Full text
Abstract:
In our initial DaMoN paper, we set out the goal to revisit the results of “Staring into the Abyss [...] of Concurrency Control with [1000] Cores” (Yu in Proc. VLDB Endow 8: 209-220, 2014). Against their assumption, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware is prevalent today and in fact offers over 1000 cores. Hence, we evaluated concurrency control (CC) schemes on a real (Intel-based) multi-socket platform. To our surprise, we made interesting findings opposing the results of the original analysis, which we discussed in our initial DaMoN paper. In this paper, we further broaden our analysis, detailing the effect of hardware and workload characteristics via additional real hardware platforms (IBM Power8 and 9) and the full TPC-C transaction mix. Among others, we identified clear connections between the performance of the CC schemes and hardware characteristics, especially concerning NUMA and CPU cache. Overall, we conclude that no CC scheme can efficiently make use of large multi-socket hardware in a robust manner and suggest several directions on how CC schemes and OLTP DBMSs overall should evolve in the future.
APA, Harvard, Vancouver, ISO, and other styles
48

Jibril, Muhammad Attahir, Philipp Götze, David Broneske, and Kai-Uwe Sattler. "Selective caching: a persistent memory approach for multi-dimensional index structures." Distributed and Parallel Databases, March 14, 2021. http://dx.doi.org/10.1007/s10619-021-07327-0.

Full text
Abstract:
After the introduction of Persistent Memory in the form of Intel's Optane DC Persistent Memory on the market in 2019, it has found its way into manifold applications and systems. As Google and other cloud infrastructure providers are starting to incorporate Persistent Memory into their portfolio, it is only logical that cloud applications have to exploit its inherent properties. Persistent Memory can serve as a DRAM substitute, but guarantees persistence at the cost of compromised read/write performance compared to standard DRAM. These properties particularly affect the performance of index structures, since they are subject to frequent updates and queries. However, adapting each and every index structure to exploit the properties of Persistent Memory is tedious. Hence, we require a general technique that hides this access gap, e.g., by using DRAM caching strategies. To exploit Persistent Memory properties for analytical index structures, we propose selective caching. It is based on a mixture of dynamic and static caching of tree nodes in DRAM to reach near-DRAM access speeds for index structures. In this paper, we evaluate selective caching on the OLAP-optimized main-memory index structure Elf, because its memory layout allows for easy caching. Our experiments show that, if configured well, selective caching with a suitable replacement strategy can keep pace with pure DRAM storage of Elf while guaranteeing persistence. These results are also reflected when selective caching is used for parallel workloads.
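The mixture of static and dynamic caching the abstract describes can be illustrated with a minimal sketch: upper tree levels are pinned in DRAM, while deeper nodes pass through a small LRU-managed dynamic cache and fall back to persistent memory on a miss. The class and parameter names (`SelectiveNodeCache`, `static_levels`, `dynamic_capacity`) are hypothetical illustrations, not the paper's actual interface.

```python
from collections import OrderedDict

class SelectiveNodeCache:
    """Sketch of selective caching (hypothetical names): the top
    `static_levels` tree levels are pinned ("static") in DRAM, lower-level
    nodes go through a small dynamic LRU cache, and misses fall back to
    the slower persistent-memory copy."""

    def __init__(self, pmem_nodes, static_levels=1, dynamic_capacity=4):
        self.pmem = pmem_nodes            # node_id -> (level, payload), on PMem
        self.dynamic = OrderedDict()      # LRU cache for deeper nodes
        self.dynamic_capacity = dynamic_capacity
        # Statically cache every node in the top `static_levels` levels.
        self.static = {nid: data for nid, data in pmem_nodes.items()
                       if data[0] < static_levels}

    def read(self, node_id):
        if node_id in self.static:        # pinned: always a DRAM hit
            return self.static[node_id][1], "static-hit"
        if node_id in self.dynamic:       # dynamic DRAM hit
            self.dynamic.move_to_end(node_id)
            return self.dynamic[node_id][1], "dynamic-hit"
        data = self.pmem[node_id]         # miss: read from persistent memory
        self.dynamic[node_id] = data
        if len(self.dynamic) > self.dynamic_capacity:
            self.dynamic.popitem(last=False)   # evict LRU (replacement strategy)
        return data[1], "pmem-miss"
```

In this sketch the replacement strategy for the dynamic part is plain LRU; the paper's point is that the choice of this strategy, together with how many levels are pinned, determines whether the index keeps pace with pure DRAM storage.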
APA, Harvard, Vancouver, ISO, and other styles
49

"LRU-C: Parallelizing Database I/Os for Flash SSDs." Proceedings of the VLDB Endowment 16, no. 9 (May 2023): 2364–76. http://dx.doi.org/10.14778/3598581.3598605.

Full text
Abstract:
The conventional database buffer managers have two inherent sources of I/O serialization: read stall and mutex conflict. The serialized I/O leaves storage and CPU under-utilized, limiting transaction throughput and latency. Such harm stands out on flash SSDs with asymmetric read-write speed and abundant I/O parallelism. To make database I/Os parallel and thus leverage the parallelism in flash SSDs, we propose a novel approach to database buffering, the LRU-C method. It introduces the LRU-C pointer, which points to the least-recently-used clean page in the LRU list. Upon a page miss, LRU-C selects the current LRU-clean page as a victim and adjusts the pointer to the next LRU-clean one in the LRU list. This way, LRU-C can avoid the I/O serialization of read stalls. The LRU-C pointer enables two further optimizations for higher I/O throughput: dynamic-batch-write and parallel LRU-list manipulation. The former allows the background flusher to write more dirty pages at a time, while the latter mitigates mutex-induced I/O serializations. Experimental results from running OLTP workloads using a MySQL-based LRU-C prototype on flash SSDs show that it improves transaction throughput over Vanilla MySQL and the state-of-the-art WAR solution by 3x and 1.52x, respectively, and also cuts the tail latency drastically. Though LRU-C might slightly compromise the hit ratio, its increased I/O throughput far offsets the reduced hit ratio.
APA, Harvard, Vancouver, ISO, and other styles
50

CİHAN, Murat, and Muhammed Fevzi KILINÇKAYA. "EFFECT OF THE PANDEMIC ON THE TURNAROUND TIME INTERVALS IN THE PUBLIC HEALTH LABORATORY." Acibadem Universitesi Saglik Bilimleri Dergisi 13, no. 4 (October 1, 2022). http://dx.doi.org/10.31067/acusaglik.1161337.

Full text
Abstract:
Introduction: Turnaround time is one of the most important indicators of a laboratory service, which many clinicians use to evaluate the quality of the laboratory. The pandemic has highlighted the importance of laboratory medicine in healthcare organizations. Every step of the total testing process can be affected by errors, which are critical in laboratory medicine. Our study aims to evaluate the impact of the COVID-19 pandemic on turnaround time. Material and Methods: We evaluated the turnaround times of routine biochemistry, immunoassay, hematology, hemoglobinopathy, HbA1c, and blood-typing tests. In our study, intra-laboratory turnaround time, measured from sample acceptance time to result verification time, was determined. The defined turnaround time for all types of analytes is 1440 min. The time intervals in the study are: Group 1 (pre-pandemic stage), Group 2 (pandemic stage), and Group 3 (post-pandemic stage). The frequency of samples whose TAT exceeded the laboratory's cutoff time interval was determined and compared across groups. Results: The percentages of exceeded turnaround time for all analytes, except blood typing, hematology, and HbA1c, are significantly lower in Group 1 than in the other groups. Comparing Group 2 and Group 3, the percentages of exceeded turnaround times for HbA1c and hematology samples are significantly lower in Group 3 than in Group 2. Discussion: Turnaround time can be evaluated as a benchmark of laboratory performance. The workload of laboratories should be taken into consideration in specific situations, such as a pandemic.
APA, Harvard, Vancouver, ISO, and other styles
