Journal articles on the topic "Cardinality Estimation in Database Systems"

Below are the top 50 journal articles on the topic "Cardinality Estimation in Database Systems".


1

Kwon, Suyong, Woohwan Jung, and Kyuseok Shim. "Cardinality estimation of approximate substring queries using deep learning". Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 3145–57. http://dx.doi.org/10.14778/3551793.3551859.

Abstract:
Cardinality estimation of an approximate substring query is an important problem in database systems. Traditional approaches build a summary from the text data and estimate the cardinality using the summary with some statistical assumptions. Since deep learning models can learn underlying complex data patterns effectively, they have been successfully applied and shown to outperform traditional methods for cardinality estimations of queries in database systems. However, since they are not yet applied to approximate substring queries, we investigate a deep learning approach for cardinality estimation of such queries. Although the accuracy of deep learning models tends to improve as the train data size increases, producing a large train data is computationally expensive for cardinality estimation of approximate substring queries. Thus, we develop efficient train data generation algorithms by avoiding unnecessary computations and sharing common computations. We also propose a deep learning model as well as a novel learning method to quickly obtain an accurate deep learning-based estimator. Extensive experiments confirm the superiority of our data generation algorithms and deep learning model with the novel learning method.
2

Qi, Kaiyang, Jiong Yu, and Zhenzhen He. "A Cardinality Estimator in Complex Database Systems Based on TreeLSTM". Sensors 23, no. 17 (August 23, 2023): 7364. http://dx.doi.org/10.3390/s23177364.

Abstract:
Cardinality estimation is critical for database management systems (DBMSs) to execute query optimization tasks, which can guide the query optimizer in choosing the best execution plan. However, traditional cardinality estimation methods cannot provide accurate estimates because they cannot accurately capture the correlation between multiple tables. Several recent studies have revealed that learning-based cardinality estimation methods can address the shortcomings of traditional methods and provide more accurate estimates. However, the learning-based cardinality estimation methods still have large errors when an SQL query involves multiple tables or is very complex. To address this problem, we propose a sampling-based tree long short-term memory (TreeLSTM) neural network to model queries. The proposed model addresses the weakness of traditional methods when no sampled tuples match the predicates and considers the join relationship between multiple tables and the conjunction and disjunction operations between predicates. We construct subexpressions as trees using operator types between predicates and improve the performance and accuracy of cardinality estimation by capturing the join-crossing correlations between tables and the order dependencies between predicates. In addition, we construct a new loss function to overcome the drawback that Q-error cannot distinguish between large and small cardinalities. Extensive experimental results from real-world datasets show that our proposed model improves the estimation quality and outperforms traditional cardinality estimation methods and the other compared deep learning methods in three evaluation metrics: Q-error, MAE, and SMAPE.
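The three evaluation metrics named in this abstract are standard in the learned-cardinality-estimation literature. As a quick illustration (my sketch, not code from the paper), with hypothetical lists of estimated and true cardinalities they can be computed as:

```python
def q_error(est, true):
    # Q-error: multiplicative ratio between estimate and truth (always >= 1).
    # It treats over- and underestimation symmetrically, which is exactly the
    # limitation the paper's new loss function is designed to address.
    return max(est / true, true / est)

def mae(ests, trues):
    # Mean absolute error over a workload of queries.
    return sum(abs(e - t) for e, t in zip(ests, trues)) / len(ests)

def smape(ests, trues):
    # Symmetric mean absolute percentage error.
    return sum(abs(e - t) / ((abs(e) + abs(t)) / 2)
               for e, t in zip(ests, trues)) / len(ests)
```

For example, `q_error(10, 100)` and `q_error(100, 10)` are both `10.0`, even though a 10x underestimate and a 10x overestimate can have very different consequences for plan choice.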
3

Chen, Jeremy, Yuqing Huang, Mushi Wang, Semih Salihoglu, and Kenneth Salem. "Accurate Summary-based Cardinality Estimation Through the Lens of Cardinality Estimation Graphs". ACM SIGMOD Record 52, no. 1 (June 7, 2023): 94–102. http://dx.doi.org/10.1145/3604437.3604458.

Abstract:
We study two classes of summary-based cardinality estimators that use statistics about input relations and small-size joins: (i) optimistic estimators, which were defined in the context of graph database management systems, that make uniformity and conditional independence assumptions; and (ii) the recent pessimistic estimators that use information theoretic linear programs (LPs). We show that optimistic estimators can be modeled as picking bottom-to-top paths in a cardinality estimation graph (CEG), which contains subqueries as nodes and edges whose weights are average degree statistics. We show that existing optimistic estimators have either undefined or fixed choices for picking CEG paths as their estimates and ignore alternative choices. Instead, we outline a space of optimistic estimators to make an estimate on CEGs, which subsumes existing estimators. We show, using an extensive empirical analysis, that effective paths depend on the structure of the queries. We next show that optimistic estimators and seemingly disparate LP-based pessimistic estimators are in fact connected. Specifically, we show that CEGs can also model some recent pessimistic estimators. This connection allows us to provide insights into the pessimistic estimators, such as showing that they have combinatorial solutions.
4

Chen, Jeremy, Yuqing Huang, Mushi Wang, Semih Salihoglu, and Ken Salem. "Accurate summary-based cardinality estimation through the lens of cardinality estimation graphs". Proceedings of the VLDB Endowment 15, no. 8 (April 2022): 1533–45. http://dx.doi.org/10.14778/3529337.3529339.

Abstract:
This paper is an experimental and analytical study of two classes of summary-based cardinality estimators that use statistics about input relations and small-size joins in the context of graph database management systems: (i) optimistic estimators that make uniformity and conditional independence assumptions; and (ii) the recent pessimistic estimators that use information theoretic linear programs (LPs). We begin by analyzing how optimistic estimators use pre-computed statistics to generate cardinality estimates. We show these estimators can be modeled as picking bottom-to-top paths in a cardinality estimation graph (CEG), which contains sub-queries as nodes and edges whose weights are average degree statistics. We show that existing optimistic estimators have either undefined or fixed choices for picking CEG paths as their estimates and ignore alternative choices. Instead, we outline a space of optimistic estimators to make an estimate on CEGs, which subsumes existing estimators. We show, using an extensive empirical analysis, that effective paths depend on the structure of the queries. While on acyclic queries and queries with small-size cycles, using the maximum-weight path is effective to address the well known underestimation problem, on queries with larger cycles these estimates tend to overestimate, which can be addressed by using minimum weight paths. We next show that optimistic estimators and seemingly disparate LP-based pessimistic estimators are in fact connected. Specifically, we show that CEGs can also model some recent pessimistic estimators. This connection allows us to adopt an optimization from pessimistic estimators to optimistic ones, and provide insights into the pessimistic estimators, such as showing that they have combinatorial solutions.
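To make the CEG idea in this abstract concrete, here is a toy sketch (hypothetical relations and statistics, not the authors' implementation): nodes are subqueries, edge weights are average-degree statistics, each bottom-to-top path yields an estimate, and which path is picked (e.g., minimum- vs. maximum-weight) changes the result:

```python
def path_estimates(ceg, base_cards, target):
    """All bottom-to-top path estimates in a toy CEG: start from any single
    relation and multiply its cardinality by the edge weights up to `target`."""
    out = []
    def walk(node, est):
        if node == target:
            out.append(est)
        for nxt, weight in ceg.get(node, []):
            walk(nxt, est * weight)
    for rel, card in base_cards.items():
        walk(rel, card)
    return out

# Hypothetical statistics for R(a,b) JOIN S(b,c):
base_cards = {"R": 100, "S": 50}
ceg = {"R": [("RS", 3.0)],   # avg. S-tuples matching each R-tuple on b
       "S": [("RS", 4.0)]}   # avg. R-tuples matching each S-tuple on b

ests = path_estimates(ceg, base_cards, "RS")
print(sorted(ests))  # [200.0, 300.0]
```

The two paths disagree (200 vs. 300), which is the paper's point: existing optimistic estimators fix one such choice, while a maximum-weight path helps against underestimation on acyclic queries and a minimum-weight path against overestimation on queries with larger cycles.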
5

Lan, Hai, Zhifeng Bao, and Yuwei Peng. "A Survey on Advancing the DBMS Query Optimizer: Cardinality Estimation, Cost Model, and Plan Enumeration". Data Science and Engineering 6, no. 1 (January 15, 2021): 86–101. http://dx.doi.org/10.1007/s41019-020-00149-7.

Abstract:
Query optimizer is at the heart of the database systems. Cost-based optimizer studied in this paper is adopted in almost all current database systems. A cost-based optimizer introduces a plan enumeration algorithm to find a (sub)plan, and then uses a cost model to obtain the cost of that plan, and selects the plan with the lowest cost. In the cost model, cardinality, the number of tuples through an operator, plays a crucial role. Due to the inaccuracy in cardinality estimation, errors in cost model, and the huge plan space, the optimizer cannot find the optimal execution plan for a complex query in a reasonable time. In this paper, we first deeply study the causes behind the limitations above. Next, we review the techniques used to improve the quality of the three key components in the cost-based optimizer, cardinality estimation, cost model, and plan enumeration. We also provide our insights on the future directions for each of the above aspects.
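The inaccuracy the survey highlights stems largely from the classic uniformity and independence assumptions behind textbook cardinality estimates. A minimal sketch of that estimate (hypothetical table size and selectivities, not from the survey):

```python
def estimate_scan(card, selectivities):
    """Textbook cardinality estimate for a filtered scan: table cardinality
    times the product of per-predicate selectivities. The multiplication is
    only valid if the predicates are independent -- the assumption whose
    violation causes the large errors surveyed in the paper."""
    est = card
    for s in selectivities:
        est *= s
    return est

# 1M-row table, two predicates with hypothetical selectivities 0.5 and 0.25:
# the optimizer costs downstream operators as if 125,000 rows flow out.
print(estimate_scan(1_000_000, [0.5, 0.25]))  # 125000.0
```

If the two predicates are actually correlated (e.g., both implied by the same column value), the true output can be far larger, and the error compounds multiplicatively up the plan tree.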
6

Wang, Xiaoying, Changbo Qu, Weiyuan Wu, Jiannan Wang, and Qingqing Zhou. "Are we ready for learned cardinality estimation?" Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1640–54. http://dx.doi.org/10.14778/3461535.3461552.

Abstract:
Cardinality estimation is a fundamental but long unresolved problem in query optimization. Recently, multiple papers from different research groups consistently report that learned models have the potential to replace existing cardinality estimators. In this paper, we ask a forward-thinking question: Are we ready to deploy these learned cardinality models in production? Our study consists of three main parts. Firstly, we focus on the static environment (i.e., no data updates) and compare five new learned methods with nine traditional methods on four real-world datasets under a unified workload setting. The results show that learned models are indeed more accurate than traditional methods, but they often suffer from high training and inference costs. Secondly, we explore whether these learned models are ready for dynamic environments (i.e., frequent data updates). We find that they cannot catch up with fast data updates and return large errors for different reasons. For less frequent updates, they can perform better but there is no clear winner among themselves. Thirdly, we take a deeper look into learned models and explore when they may go wrong. Our results show that the performance of learned methods can be greatly affected by the changes in correlation, skewness, or domain size. More importantly, their behaviors are much harder to interpret and often unpredictable. Based on these findings, we identify two promising research directions (control the cost of learned models and make learned models trustworthy) and suggest a number of research opportunities. We hope that our study can guide researchers and practitioners to work together to eventually push learned cardinality estimators into real database systems.
7

Ahad, Rafiul, K. V. Bapa, and Dennis McLeod. "On estimating the cardinality of the projection of a database relation". ACM Transactions on Database Systems 14, no. 1 (March 1989): 28–40. http://dx.doi.org/10.1145/62032.62034.

8

Mukkamala, Ravi, and Sushil Jajodia. "A note on estimating the cardinality of the projection of a database relation". ACM Transactions on Database Systems 16, no. 3 (September 1991): 564–66. http://dx.doi.org/10.1145/111197.111218.

9

Li, Guoliang, Xuanhe Zhou, Ji Sun, Xiang Yu, Yue Han, Lianyuan Jin, Wenbo Li, Tianqing Wang, and Shifu Li. "openGauss". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3028–42. http://dx.doi.org/10.14778/3476311.3476380.

Abstract:
Although learning-based database optimization techniques have been studied from academia in recent years, they have not been widely deployed in commercial database systems. In this work, we build an autonomous database framework and integrate our proposed learning-based database techniques into an open-source database system openGauss. We propose effective learning-based models to build learned optimizers (including learned query rewrite, learned cost/cardinality estimation, learned join order selection and physical operator selection) and learned database advisors (including self-monitoring, self-diagnosis, self-configuration, and self-optimization). We devise an effective validation model to validate the effectiveness of learned models. We build effective training data management and model management platforms to easily deploy learned models. We have evaluated our techniques on real-world datasets and the experimental results validated the effectiveness of our techniques. We also provide our learnings of deploying learning-based techniques.
10

Woltmann, Lucas, Dominik Olwig, Claudio Hartmann, Dirk Habich, and Wolfgang Lehner. "PostCENN". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2715–18. http://dx.doi.org/10.14778/3476311.3476327.

Abstract:
In this demo, we present PostCENN, an enhanced PostgreSQL database system with an end-to-end integration of machine learning (ML) models for cardinality estimation. In general, cardinality estimation is a topic with a long history in the database community. While traditional models like histograms are extensively used, recent works mainly focus on developing new approaches using ML models. However, traditional as well as ML models have their own advantages and disadvantages. With PostCENN, we aim to combine both to maximize their potentials for cardinality estimation by introducing ML models as a novel means to increase the accuracy of the cardinality estimation for certain parts of the database schema. To achieve this, we integrate ML models as first class citizen in PostgreSQL with a well-defined end-to-end life cycle. This life cycle consists of creating ML models for different sub-parts of the database schema, triggering the training, using ML models within the query optimizer in a transparent way, and deleting ML models.
11

Rusu, Florin, Zixuan Zhuang, Mingxi Wu, and Chris Jermaine. "Workload-Driven Antijoin Cardinality Estimation". ACM Transactions on Database Systems 40, no. 3 (October 23, 2015): 1–41. http://dx.doi.org/10.1145/2818178.

12

Suciu, Dan. "Technical Perspective: Accurate Summary-based Cardinality Estimation Through the Lens of Cardinality Estimation Graphs". ACM SIGMOD Record 52, no. 1 (June 7, 2023): 93. http://dx.doi.org/10.1145/3604437.3604457.

Abstract:
Query engines are really good at choosing an efficient query plan. Users don't need to worry about how they write their query, since the optimizer makes all the right choices for executing the query, while taking into account all aspects of data, such as its size, the characteristics of the storage device, the distribution pattern, the availability of indexes, and so on. The query optimizer always makes the best choice, no matter how complex the query is, or how contrived it was written. Or, this is what we expect today from a modern query optimizer. Unfortunately, reality is not as nice.
13

Grigorev, Y. A., and O. Yu Pluzhnikova. "ESTIMATION OF ATTRIBUTE VALUES IN JOIN TABLES WHILE OPTIMIZING RELATIONAL DATABASE QUERY". Informatika i sistemy upravleniya, no. 1 (2021): 3–18. http://dx.doi.org/10.22250/isu.2021.67.3-18.

Abstract:
The article analyzes the problem of estimating join tables cardinality in the process of calculating the cost of relational database query plan. A new algorithm for estimating the distinct values of attributes is proposed. The algorithm allows reducing inaccuracy in cardinality estimation. The consistency of proposed algorithm is proved.
14

Jie, Xu, Lan Haoliang, Ding Wei, and Ju Ao. "Network Host Cardinality Estimation Based on Artificial Neural Network". Security and Communication Networks 2022 (March 24, 2022): 1–14. http://dx.doi.org/10.1155/2022/1258482.

Abstract:
Cardinality estimation plays an important role in network security. It is widely used in host cardinality calculation of high-speed network. However, the cardinality estimation algorithm itself is easy to be disturbed by random factors and produces estimation errors. How to eliminate the influence of these random factors is the key to further improving the accuracy of estimation. To solve the above problems, this paper proposes an algorithm that uses artificial neural network to predict the estimation bias and adjust the cardinality estimation value according to the prediction results. Based on the existing algorithms, the novel algorithm reduces the interference of random factors on the estimation results and improves the accuracy by adding the steps of cardinality estimation sampling, artificial neural network training, and error prediction. The experimental results show that, using the cardinality estimation algorithm proposed in this paper, the average absolute deviation of cardinality estimation can be reduced by more than 20%.
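For context on the kind of sketch-based estimator whose random error such a correction targets, here is a minimal linear-counting sketch, a classic cardinality estimator used in network measurement (illustrative only; the paper's own estimation pipeline and neural correction differ):

```python
import hashlib
import math

def linear_count(items, m=1024):
    """Linear counting: hash each item into an m-bit bitmap and estimate the
    number of distinct items as n_hat = m * ln(m / V), where V is the number
    of buckets left empty. Hash collisions are the random factor that
    perturbs the estimate."""
    bitmap = [0] * m
    for x in items:
        h = int(hashlib.blake2b(x.encode(), digest_size=8).hexdigest(), 16)
        bitmap[h % m] = 1
    zeros = m - sum(bitmap)
    return m * math.log(m / zeros)

# 500 distinct (hypothetical) host addresses:
hosts = [f"10.0.{i % 250}.{i % 199}" for i in range(500)]
est = linear_count(hosts)  # close to, but not exactly, 500
```

The gap between `est` and the true 500 is exactly the kind of estimation bias that the paper proposes to predict and subtract with an artificial neural network.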
15

Qian, Chen, Hoilun Ngan, Yunhao Liu, and Lionel M. Ni. "Cardinality Estimation for Large-Scale RFID Systems". IEEE Transactions on Parallel and Distributed Systems 22, no. 9 (September 2011): 1441–54. http://dx.doi.org/10.1109/tpds.2011.36.

16

Ré, Christopher, and D. Suciu. "Understanding cardinality estimation using entropy maximization". ACM Transactions on Database Systems 37, no. 1 (February 2012): 1–31. http://dx.doi.org/10.1145/2109196.2109202.

17

Sakr, Sherif. "Algebra-based XQuery cardinality estimation". International Journal of Web Information Systems 4, no. 1 (April 4, 2008): 6–47. http://dx.doi.org/10.1108/17440080810865611.

18

Shah-Mansouri, Vahid, and Vincent W. S. Wong. "Cardinality Estimation in RFID Systems with Multiple Readers". IEEE Transactions on Wireless Communications 10, no. 5 (May 2011): 1458–69. http://dx.doi.org/10.1109/twc.2011.030411.100390.

19

Varagnolo, Damiano, Gianluigi Pillonetto, and Luca Schenato. "Distributed Cardinality Estimation in Anonymous Networks". IEEE Transactions on Automatic Control 59, no. 3 (March 2014): 645–59. http://dx.doi.org/10.1109/tac.2013.2287113.

20

Liu, Jie, Wenqian Dong, Qingqing Zhou, and Dong Li. "Fauce". Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 1950–63. http://dx.doi.org/10.14778/3476249.3476254.

Abstract:
Cardinality estimation is a fundamental and critical problem in databases. Recently, many estimators based on deep learning have been proposed to solve this problem and they have achieved promising results. However, these estimators struggle to provide accurate results for complex queries, due to not capturing real inter-column and inter-table correlations. Furthermore, none of these estimators contain the uncertainty information about their estimations. In this paper, we present a join cardinality estimator called Fauce. Fauce learns the correlations across all columns and all tables in the database. It also contains the uncertainty information of each estimation. Among all studied learned estimators, our results are promising: (1) Fauce is a light-weight estimator, it has 10× faster inference speed than the state of the art estimator; (2) Fauce is robust to the complex queries, it provides 1.3×--6.7× smaller estimation errors for complex queries compared with the state of the art estimator; (3) To the best of our knowledge, Fauce is the first estimator that incorporates uncertainty information for cardinality estimation into a deep learning model.
21

Potoniec, Jedrzej. "Mining Cardinality Restrictions in OWL". Foundations of Computing and Decision Sciences 45, no. 3 (September 1, 2020): 195–216. http://dx.doi.org/10.2478/fcds-2020-0011.

Abstract:
We present an approach to mine cardinality restriction axioms from an existing knowledge graph, in order to extend an ontology describing the graph. We compare frequency estimation with kernel density estimation as approaches to obtain the cardinalities in restrictions. We also propose numerous strategies for filtering obtained axioms in order to make them more available for the ontology engineer. We report the results of experimental evaluation on DBpedia 2016-10 and show that using kernel density estimation to compute the cardinalities in cardinality restrictions yields more robust results than using frequency estimation. We also show that while filtering is of limited usability for minimum cardinality restrictions, it is much more important for maximum cardinality restrictions. The presented findings can be used to extend existing ontology engineering tools in order to support ontology construction and enable more efficient creation of knowledge-intensive artificial intelligence systems.
22

Ngo, Hung Q. "RECENT RESULTS ON CARDINALITY ESTIMATION AND INFORMATION THEORETIC INEQUALITIES". Journal of Computer Science and Cybernetics 37, no. 3 (October 7, 2021): 223–38. http://dx.doi.org/10.15625/1813-9663/37/3/16129.

Abstract:
I would like to dedicate this little exposition to Prof. Phan Dinh Dieu, one of the giants and pioneers of Mathematics in Computer Science in Vietnam. In the past 15 years or so, new and exciting connections between fundamental problems in database theory and information theory have emerged. There are several angles one can take to describe this connection. This paper takes one such angle, influenced by the author's own bias and research results. In particular, we will describe how the cardinality estimation problem -- a corner-stone problem for query optimizers -- is deeply connected to information theoretic inequalities. Furthermore, we explain how inequalities can also be used to derive a couple of classic geometric inequalities such as the Loomis-Whitney inequality. A purpose of the article is to introduce the reader to these new connections, where theory and practice meet in a wonderful way. Another objective is to point the reader to a research area with many new open questions.
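One concrete instance of the connection this abstract describes: for the triangle join query, the AGM bound (a worst-case cardinality bound derivable from entropy inequalities, and closely related to the Loomis-Whitney inequality) caps the output size by the square root of the product of the input sizes. A minimal sketch of this standard result (not specific to the paper's proofs):

```python
import math

def triangle_agm_bound(r, s, t):
    """AGM bound for the triangle query
    Q(a,b,c) = R(a,b) JOIN S(b,c) JOIN T(a,c):
    |Q| <= sqrt(|R| * |S| * |T|)."""
    return math.sqrt(r * s * t)

# With 100 tuples per relation, at most 1000 output tuples are possible,
# far below the 10^4 implied by bounding only one pairwise join at a time.
print(triangle_agm_bound(100, 100, 100))  # 1000.0
```

A cardinality estimator that respects such bounds can never overestimate a worst-case output, which is the starting point of the pessimistic, LP-based estimators the article connects to.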
23

Zheng, Yuanqing, and Mo Li. "Towards More Efficient Cardinality Estimation for Large-Scale RFID Systems". IEEE/ACM Transactions on Networking 22, no. 6 (December 2014): 1886–96. http://dx.doi.org/10.1109/tnet.2013.2288352.

24

Hou, Yuxiao, Jiajue Ou, Yuanqing Zheng, and Mo Li. "PLACE: Physical Layer Cardinality Estimation for Large-Scale RFID Systems". IEEE/ACM Transactions on Networking 24, no. 5 (October 2016): 2702–14. http://dx.doi.org/10.1109/tnet.2015.2481999.

25

Park, Jonghoon, Cheoleun Moon, Ikjun Yeom, and Yusung Kim. "Cardinality estimation using collective interference for large-scale RFID systems". Journal of Network and Computer Applications 83 (April 2017): 101–10. http://dx.doi.org/10.1016/j.jnca.2017.01.037.

26

Woltmann, Lucas, Claudio Hartmann, Dirk Habich, and Wolfgang Lehner. "Aggregate-based Training Phase for ML-based Cardinality Estimation". Datenbank-Spektrum 22, no. 1 (January 10, 2022): 45–57. http://dx.doi.org/10.1007/s13222-021-00400-z.

Abstract:
Cardinality estimation is a fundamental task in database query processing and optimization. As shown in recent papers, machine learning (ML)-based approaches may deliver more accurate cardinality estimations than traditional approaches. However, a lot of training queries have to be executed during the model training phase to learn a data-dependent ML model making it very time-consuming. Many of those training or example queries use the same base data, have the same query structure, and only differ in their selective predicates. To speed up the model training phase, our core idea is to determine a predicate-independent pre-aggregation of the base data and to execute the example queries over this pre-aggregated data. Based on this idea, we present a specific aggregate-based training phase for ML-based cardinality estimation approaches in this paper. As we are going to show with different workloads in our evaluation, we are able to achieve an average speedup of 90 with our aggregate-based training phase and thus outperform indexes.
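The pre-aggregation idea in this abstract can be sketched as follows (hypothetical schema and data, not the authors' code): aggregate the predicate columns once, then compute the true-cardinality labels for the many training queries from the compact aggregate instead of rescanning the base table:

```python
from collections import Counter

# Hypothetical base table with two predicate columns a and b.
table = [{"a": i % 10, "b": i % 7} for i in range(10_000)]

# One predicate-independent pre-aggregation: counts per (a, b) combination.
# 10,000 rows collapse into at most 10 * 7 = 70 aggregate entries.
agg = Counter((row["a"], row["b"]) for row in table)

def training_cardinality(pred):
    """True cardinality of a training query that only varies in its
    predicate, answered from the aggregate instead of the base table."""
    return sum(cnt for (a, b), cnt in agg.items() if pred(a, b))

# e.g. the label for the training query "a < 3 AND b = 2":
label = training_cardinality(lambda a, b: a < 3 and b == 2)
```

Every further training query over the same columns now touches 70 aggregate entries rather than 10,000 rows, which is the source of the training-phase speedup the paper reports.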
27

Shironoshita, E. Patrick, Michael T. Ryan, and Mansur R. Kabuka. "Cardinality estimation for the optimization of queries on ontologies". ACM SIGMOD Record 36, no. 2 (June 2007): 13–18. http://dx.doi.org/10.1145/1328854.1328856.

28

Papapetrou, Odysseas, Wolf Siberski, and Wolfgang Nejdl. "Cardinality estimation and dynamic length adaptation for Bloom filters". Distributed and Parallel Databases 28, no. 2-3 (September 2, 2010): 119–56. http://dx.doi.org/10.1007/s10619-010-7067-2.

29

Borovica-Gajic, Renata, Stratos Idreos, Anastasia Ailamaki, Marcin Zukowski, and Campbell Fraser. "Smooth Scan: robust access path selection without cardinality estimation". VLDB Journal 27, no. 4 (May 29, 2018): 521–45. http://dx.doi.org/10.1007/s00778-018-0507-8.

30

Oommen, B. J., and M. Thiyagarajah. "Benchmarking attribute cardinality maps for database systems using the TPC-D specifications". IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 33, no. 6 (December 2003): 913–24. http://dx.doi.org/10.1109/tsmcb.2003.810909.

31

Li, Guoliang, Xuanhe Zhou, and Lei Cao. "Machine learning for databases". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3190–93. http://dx.doi.org/10.14778/3476311.3476405.

Abstract:
Machine learning techniques have been proposed to optimize the databases. For example, traditional empirical database optimization techniques (e.g., cost estimation, join order selection, knob tuning, index and view advisor) cannot meet the high-performance requirement for large-scale database instances, various applications and diversified users, especially on the cloud. Fortunately, machine learning based techniques can alleviate this problem by judiciously selecting optimization strategy. In this tutorial, we categorize database tasks into three typical problems that can be optimized by different machine learning models, including NP-hard problems (e.g., knob space exploration, index/view selection, partition-key recommendation for offline optimization; query rewrite, join order selection for online optimization), regression problems (e.g., cost/cardinality estimation, index/view benefit estimation, query latency prediction), and prediction problems (e.g., query workload prediction). We review existing machine learning based techniques to address these problems and provide research challenges.
32

Gong, Wei, Jiangchuan Liu, Kebin Liu, and Yunhao Liu. "Toward More Rigorous and Practical Cardinality Estimation for Large-Scale RFID Systems". IEEE/ACM Transactions on Networking 25, no. 3 (June 2017): 1347–58. http://dx.doi.org/10.1109/tnet.2016.2634551.

33

Mannino, Michael V., Paicheng Chu, and Thomas Sager. "Statistical profile estimation in database systems". ACM Computing Surveys 20, no. 3 (September 1988): 191–221. http://dx.doi.org/10.1145/62061.62063.

34

Lefons, Ezio, Antonio Merico, and Filippo Tangorra. "Analytical profile estimation in database systems". Information Systems 20, no. 1 (March 1995): 1–20. http://dx.doi.org/10.1016/0306-4379(95)00001-k.

35

Chen, Ling, Heqing Huang, and Donghui Chen. "Join cardinality estimation by combining operator-level deep neural networks". Information Sciences 546 (February 2021): 1047–62. http://dx.doi.org/10.1016/j.ins.2020.09.065.

36

Tian, Ruijie, Weishi Zhang, Fei Wang, Jingchun Zhou, Adi Alhudhaif, and Fayadh Alenezi. "Cardinality estimation of activity trajectory similarity queries using deep learning". Information Sciences 646 (October 2023): 119398. http://dx.doi.org/10.1016/j.ins.2023.119398.

37

PILARSKI, DANIEL. "LINGUISTIC SUMMARIZATION OF DATABASES WITH QUANTIRIUS: A REDUCTION ALGORITHM FOR GENERATED SUMMARIES". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 18, no. 03 (June 2010): 305–31. http://dx.doi.org/10.1142/s0218488510006556.

Abstract:
This paper deals with the issue of linguistic database summaries understood as linguistically quantified propositions. We show basic capabilities of Quantirius, our interactive system supporting the mining and the assessment of linguistic summaries in a database. The mining of linguistic summaries is realized by means of the concept of a protoform introduced by Zadeh. The assessment of validity degrees of summaries is done via Zadeh's fuzzy logic based calculus, extended by the use of cardinality patterns and triangular norms. The main part of the paper presents an idea of a further processing of the set of generated summaries. The proposed algorithm is composed of a reduction mechanism of summaries based on linguistic terms inclusion and a reduction of summaries by means of the overlapping unimodal linguistic terms. We then present an idea of the approval threshold computing for the truth degrees of summaries. Finally, we show that summaries generated by different protoforms can be joined in a master-detail-like relation, making an interesting structure of information contained in the database.
38

Xu, Zichen, Yi-Cheng Tu, and Xiaorui Wang. "Online Energy Estimation of Relational Operations in Database Systems". IEEE Transactions on Computers 64, no. 11 (November 1, 2015): 3223–36. http://dx.doi.org/10.1109/tc.2015.2394309.

39

Tomov, N., E. Dempster, M. H. Williams, A. Burger, H. Taylor, P. J. B. King, and P. Broughton. "Analytical response time estimation in parallel relational database systems". Parallel Computing 30, no. 2 (February 2004): 249–83. http://dx.doi.org/10.1016/j.parco.2003.11.003.

40

Ciaccia, Paolo, and Dario Maio. "Access cost estimation for physical database design". Data & Knowledge Engineering 11, no. 2 (October 1993): 125–50. http://dx.doi.org/10.1016/0169-023x(93)90002-7.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Slavov, Vasil, and Praveen Rao. "A gossip-based approach for Internet-scale cardinality estimation of XPath queries over distributed semistructured data". VLDB Journal 23, no. 1 (17.05.2013): 51–76. http://dx.doi.org/10.1007/s00778-013-0314-1.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Nguyen, Chuyen T., Van-Dinh Nguyen and Anh T. Pham. "Tag Cardinality Estimation Using Expectation-Maximization in ALOHA-Based RFID Systems With Capture Effect and Detection Error". IEEE Wireless Communications Letters 8, no. 2 (April 2019): 636–39. http://dx.doi.org/10.1109/lwc.2018.2890650.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
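As a rough illustration of slot-outcome-based tag cardinality estimation in framed-slotted ALOHA, the sketch below fits a tag count to observed empty/singleton/collision slot counts by maximum likelihood. It deliberately ignores the capture effect and detection errors that the paper's EM formulation handles, and the observed counts are made-up numbers:

```python
import math

def slot_probs(n, L):
    """P(empty), P(singleton), P(collision) for one slot when n tags each
    pick one of L slots uniformly (no capture effect, no detection error)."""
    p_empty = (1 - 1 / L) ** n
    p_single = n / L * (1 - 1 / L) ** (n - 1)
    return p_empty, p_single, 1 - p_empty - p_single

def ml_estimate(empty, single, collision, L, n_max=10000):
    """Return the n maximizing the multinomial log-likelihood of the
    observed (empty, singleton, collision) slot counts."""
    best_n, best_ll = 0, -math.inf
    for n in range(single, n_max + 1):  # at least `single` tags must exist
        pe, ps, pc = slot_probs(n, L)
        if min(pe, ps, pc) <= 0:
            continue  # outcome with zero probability but nonzero count
        ll = empty * math.log(pe) + single * math.log(ps) + collision * math.log(pc)
        if ll > best_ll:
            best_n, best_ll = n, ll
    return best_n

# Example: a frame of 100 slots, observed 36 empty / 37 singleton / 27 collision.
print(ml_estimate(36, 37, 27, L=100))
```

With a fraction of roughly e^(-n/L) empty slots, 36% empties over 100 slots points at a population of about 100 tags; EM generalizes this by also modelling slots that look like singletons due to capture or look empty due to detection error.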
43

Kadam, Sachin, Chaitanya S. Raut, Aman Deep Meena and Gaurav S. Kasbekar. "Fast node cardinality estimation and cognitive MAC protocol design for heterogeneous machine-to-machine networks". Wireless Networks 26, no. 6 (18.03.2020): 3929–52. http://dx.doi.org/10.1007/s11276-020-02291-6.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Boulos, Jihad, and Kinji Ono. "Cost estimation of user-defined methods in object-relational database systems". ACM SIGMOD Record 28, no. 3 (September 1999): 22–28. http://dx.doi.org/10.1145/333607.333610.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Carmeli, Nofar, and Markus Kröll. "On the Enumeration Complexity of Unions of Conjunctive Queries". ACM Transactions on Database Systems 46, no. 2 (June 2021): 1–41. http://dx.doi.org/10.1145/3450263.

Full text source
Abstract:
We study the enumeration complexity of Unions of Conjunctive Queries (UCQs). We aim to identify the UCQs that are tractable in the sense that the answer tuples can be enumerated with a linear preprocessing phase and a constant delay between successive tuples. It has been established that, in the absence of self-joins and under conventional complexity assumptions, the CQs that admit such an evaluation are precisely the free-connex ones. A union of tractable CQs is always tractable. We generalize the notion of free-connexity from CQs to UCQs, thus showing that some unions containing intractable CQs are, in fact, tractable. Interestingly, some unions consisting of only intractable CQs are tractable too. We show how to use the techniques presented in this article also in settings where the database contains cardinality dependencies (including functional dependencies and key constraints) or when the UCQs contain disequalities. The question of finding a full characterization of the tractability of UCQs remains open. Nevertheless, we prove that, for several classes of queries, free-connexity fully captures the tractable UCQs.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Ahluwalia, Rashpal. "Instructional Software for Reliability Estimation and Fault Tree Analysis". Industrial and Systems Engineering Review 1, no. 2 (1.11.2013): 83–109. http://dx.doi.org/10.37266/iser.2013v1i2.pp83-109.

Full text source
Abstract:
This paper describes a software tool to introduce fundamental concepts of reliability and fault tree analysis to engineering students. Students can fit common failure distributions to failure data. The data can be complete, singly censored, or multiply censored. The software computes distribution and goodness-of-fit parameters. The students can use the tool to validate hand calculations. Failure distributions and reliability values for various components can be identified and stored in a database. Various components and sub-systems can be used to build series-parallel or complex systems. The component data can also be used to build fault trees. The software tool can compute the reliability of complex state-independent and state-dependent systems. The tool can also be used to compute the failure probability of the top node of a fault tree. The software was implemented in Visual Basic with SQL as the database. It operates on the Windows 7 platform.
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Beber, Moritz E., Mattia G. Gollub, Dana Mozaffari, Kevin M. Shebek, Avi I. Flamholz, Ron Milo and Elad Noor. "eQuilibrator 3.0: a database solution for thermodynamic constant estimation". Nucleic Acids Research 50, no. D1 (29.11.2021): D603–D609. http://dx.doi.org/10.1093/nar/gkab1106.

Full text source
Abstract:
eQuilibrator (equilibrator.weizmann.ac.il) is a database of biochemical equilibrium constants and Gibbs free energies, originally designed as a web-based interface. While the website now counts around 1,000 distinct monthly users, its design could not accommodate larger compound databases and it lacked a scalable Application Programming Interface (API) for integration into other tools developed by the systems biology community. Here, we report on the recent updates to the database as well as the addition of a new Python-based interface to eQuilibrator that adds many new features such as a 100-fold larger compound database, the ability to add novel compounds, improvements in speed and memory use, and correction for Mg2+ ion concentrations. Moreover, the new interface can compute the covariance matrix of the uncertainty between estimates, for which we show the advantages and describe the application in metabolic modelling. We foresee that these improvements will make thermodynamic modelling more accessible and facilitate the integration of eQuilibrator into other software platforms.
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Ergaliev, E. K., M. N. Madiyarov, N. M. Oskorbin and L. L. Smolyakova. "Database Reconciliation in Applied Interval Analysis". Izvestiya of Altai State University, no. 1(123) (18.03.2022): 89–94. http://dx.doi.org/10.14258/izvasu(2022)1-14.

Full text source
Abstract:
The article deals with the problem of reconciling observation results, which arises when solving problems of interval analysis of a database. It is found that the values of the set of input variables and the output variable are consistent if, in each observation, the graph of the desired dependence lies at the inner points of the interval hyper-rectangle. In this case, it is proposed to use special solutions of interval systems of linear algebraic equations (ISLAE) to analyze the data of linear processes. However, in real and model conditions, this property of the database is not always fulfilled a priori. In these cases, it is proposed to apply the principle of robust estimation: inconsistent observations should either be excluded from the sample or adjusted. This paper presents the results of studying these methods of reconciling the experimental database on model linear processes under conditions where the basic assumptions of interval estimation of dependencies hold. In addition, computational experiments were carried out to explore the possibility of increasing the accuracy of interval analysis through preliminary correction of observations, including the possibility of guaranteed estimation of the sought dependences.
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Rapczyński, Michał, Philipp Werner, Sebastian Handrich and Ayoub Al-Hamadi. "A Baseline for Cross-Database 3D Human Pose Estimation". Sensors 21, no. 11 (28.05.2021): 3769. http://dx.doi.org/10.3390/s21113769.

Full text source
Abstract:
Vision-based 3D human pose estimation approaches are typically evaluated on datasets that are limited in diversity regarding many factors, e.g., subjects, poses, cameras, and lighting. However, for real-life applications, it would be desirable to create systems that work under arbitrary conditions ("in-the-wild"). To advance towards this goal, we investigated the commonly used datasets HumanEva-I, Human3.6M, and Panoptic Studio, discussed their biases (that is, their limitations in diversity), and illustrated them in cross-database experiments (which we used as a surrogate for roughly estimating in-the-wild performance). For this purpose, we first harmonized the differing skeleton joint definitions of the datasets, reducing the biases and systematic test errors in cross-database experiments. We further proposed a scale normalization method that significantly improved generalization across camera viewpoints, subjects, and datasets. In additional experiments, we investigated the effect of using more or fewer cameras, training with multiple datasets, applying a proposed anatomy-based pose validation step, and using OpenPose as the basis for the 3D pose estimation. The experimental results showed the usefulness of the joint harmonization, of the scale normalization, and of augmenting virtual cameras to significantly improve cross-database and in-database generalization. At the same time, the experiments showed that there were dataset biases that could not be compensated and call for new datasets covering more diversity. We discussed our results and promising directions for future work.
Styles: APA, Harvard, Vancouver, ISO, etc.
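A scale normalization of the kind described in the abstract above can be sketched as rescaling each skeleton to unit mean bone length, so subject- and dataset-specific body size no longer dominates cross-dataset comparisons. The toy 5-joint skeleton and bone list below are illustrative assumptions, not the paper's joint definitions:

```python
import math

# Illustrative parent->child bone pairs for a toy 5-joint skeleton;
# real datasets (Human3.6M, HumanEva-I, Panoptic Studio) use their own joint sets.
BONES = [(0, 1), (1, 2), (2, 3), (2, 4)]

def mean_bone_length(pose, bones=BONES):
    """Average Euclidean length of the listed bones in a pose."""
    return sum(math.dist(pose[a], pose[b]) for a, b in bones) / len(bones)

def scale_normalize(pose, bones=BONES):
    """Rescale a pose (list of (x, y, z) joints) so its mean bone length is 1,
    removing subject- and dataset-specific body scale before cross-dataset use."""
    s = mean_bone_length(pose, bones)
    return [tuple(c / s for c in joint) for joint in pose]

# Millimetre-scale toy pose: pelvis, spine, neck, and two shoulder joints.
pose = [(0, 0, 0), (0, 0, 450), (0, 0, 900), (200, 0, 900), (-200, 0, 900)]
norm = scale_normalize(pose)
print(round(mean_bone_length(norm), 6))  # -> 1.0
```

After normalization, poses from datasets recorded in different units or with differently sized subjects live on a common scale, which is the property the paper exploits for cross-database evaluation.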
50

Sainati, Tristano, Fiona Zakaria, Giorgio Locatelli, P. Andrew Sleigh and Barbara Evans. "Understanding the costs of urban sanitation: towards a standard costing model". Journal of Water, Sanitation and Hygiene for Development 10, no. 4 (27.10.2020): 642–58. http://dx.doi.org/10.2166/washdev.2020.093.

Full text source
Abstract:
There is a dearth of reliable cost data for urban sanitation. In the absence of high-quality global data, the full cost of sustainable implementation of urban sanitation remains uncertain. This paper proposes an approach for developing bespoke parametric cost estimation models for easy and reliable estimation of the costs of alternative sanitation technologies in a range of geographical contexts. A key requirement for the development of these models is the establishment of a large database of empirical information on the current costs of sanitation systems. Such a database does not currently exist. Two foundational tools are proposed. Firstly, a standard metric for reporting the costs of urban sanitation systems, total annualised cost per household. Secondly, a standardised approach to the collection of empirical cost data, the Novel Ball-Park Reporting Approach (NBPRA). Data from the NBPRA are presented for 87 individual sanitation components from 25 cities in 10 countries. Broad cost ranges for different archetypal systems have been estimated; these currently have high levels of uncertainty. Further work is proposed to collect additional data, build up the global database, and develop parametric cost estimation models with higher reliability.
Styles: APA, Harvard, Vancouver, ISO, etc.