Journal articles on the topic 'Cardinality Estimation in Database Systems'

Consult the top 50 journal articles for your research on the topic 'Cardinality Estimation in Database Systems.'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kwon, Suyong, Woohwan Jung, and Kyuseok Shim. "Cardinality estimation of approximate substring queries using deep learning." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 3145–57. http://dx.doi.org/10.14778/3551793.3551859.

Abstract:
Cardinality estimation of an approximate substring query is an important problem in database systems. Traditional approaches build a summary from the text data and estimate the cardinality using the summary with some statistical assumptions. Since deep learning models can learn underlying complex data patterns effectively, they have been successfully applied and shown to outperform traditional methods for cardinality estimation of queries in database systems. However, since they have not yet been applied to approximate substring queries, we investigate a deep learning approach for cardinality estimation of such queries. Although the accuracy of deep learning models tends to improve as the training data size increases, producing large training data is computationally expensive for cardinality estimation of approximate substring queries. Thus, we develop efficient training data generation algorithms by avoiding unnecessary computations and sharing common computations. We also propose a deep learning model as well as a novel learning method to quickly obtain an accurate deep learning-based estimator. Extensive experiments confirm the superiority of our data generation algorithms and deep learning model with the novel learning method.
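
As background for this entry: the quantity being estimated is the number of records containing at least one substring within a given edit distance of the query string. The brute-force sketch below only makes that definition concrete; it is the ground-truth computation a learned estimator would approximate, not the paper's method, and all names are illustrative.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic one-row dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def approx_substring_cardinality(records, query, tau):
    """Count records having a substring within edit distance tau of query."""
    count = 0
    for rec in records:
        # Only substrings whose length is within tau of len(query) can match.
        lengths = range(max(1, len(query) - tau), len(query) + tau + 1)
        if any(edit_distance(rec[i:i + n], query) <= tau
               for n in lengths for i in range(len(rec) - n + 1)):
            count += 1
    return count

print(approx_substring_cardinality(["banana", "bandana", "apple"], "banan", 1))  # 2
```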
2

Qi, Kaiyang, Jiong Yu, and Zhenzhen He. "A Cardinality Estimator in Complex Database Systems Based on TreeLSTM." Sensors 23, no. 17 (August 23, 2023): 7364. http://dx.doi.org/10.3390/s23177364.

Abstract:
Cardinality estimation is critical for database management systems (DBMSs) to execute query optimization tasks, which can guide the query optimizer in choosing the best execution plan. However, traditional cardinality estimation methods cannot provide accurate estimates because they cannot accurately capture the correlation between multiple tables. Several recent studies have revealed that learning-based cardinality estimation methods can address the shortcomings of traditional methods and provide more accurate estimates. However, the learning-based cardinality estimation methods still have large errors when an SQL query involves multiple tables or is very complex. To address this problem, we propose a sampling-based tree long short-term memory (TreeLSTM) neural network to model queries. The proposed model addresses the weakness of traditional methods when no sampled tuples match the predicates and considers the join relationship between multiple tables and the conjunction and disjunction operations between predicates. We construct subexpressions as trees using operator types between predicates and improve the performance and accuracy of cardinality estimation by capturing the join-crossing correlations between tables and the order dependencies between predicates. In addition, we construct a new loss function to overcome the drawback that Q-error cannot distinguish between large and small cardinalities. Extensive experimental results from real-world datasets show that our proposed model improves the estimation quality and outperforms traditional cardinality estimation methods and the other compared deep learning methods in three evaluation metrics: Q-error, MAE, and SMAPE.
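
The Q-error used in the evaluation above is the standard multiplicative error metric for cardinality estimators; a short helper makes both its definition and the drawback the authors address explicit (the paper's replacement loss is not reproduced here):

```python
def q_error(estimate: float, truth: float) -> float:
    """max(est/true, true/est): always >= 1, and 1 means an exact estimate."""
    estimate, truth = max(estimate, 1.0), max(truth, 1.0)  # guard zeros
    return max(estimate / truth, truth / estimate)

# Q-error is scale-free: a 2x miss on a small result and on a huge result
# score identically, which is why it cannot distinguish large from small
# cardinalities and why the paper designs a new loss function.
assert q_error(50, 100) == q_error(2_000_000, 1_000_000) == 2.0
```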
3

Chen, Jeremy, Yuqing Huang, Mushi Wang, Semih Salihoglu, and Kenneth Salem. "Accurate Summary-based Cardinality Estimation Through the Lens of Cardinality Estimation Graphs." ACM SIGMOD Record 52, no. 1 (June 7, 2023): 94–102. http://dx.doi.org/10.1145/3604437.3604458.

Abstract:
We study two classes of summary-based cardinality estimators that use statistics about input relations and small-size joins: (i) optimistic estimators, defined in the context of graph database management systems, which make uniformity and conditional independence assumptions; and (ii) the recent pessimistic estimators that use information theoretic linear programs (LPs). We show that optimistic estimators can be modeled as picking bottom-to-top paths in a cardinality estimation graph (CEG), which contains subqueries as nodes and edges whose weights are average degree statistics. We show that existing optimistic estimators have either undefined or fixed choices for picking CEG paths as their estimates and ignore alternative choices. Instead, we outline a space of optimistic estimators to make an estimate on CEGs, which subsumes existing estimators. We show, using an extensive empirical analysis, that effective paths depend on the structure of the queries. We next show that optimistic estimators and seemingly disparate LP-based pessimistic estimators are in fact connected. Specifically, we show that CEGs can also model some recent pessimistic estimators. This connection allows us to provide insights into the pessimistic estimators, such as showing that they have combinatorial solutions.
4

Chen, Jeremy, Yuqing Huang, Mushi Wang, Semih Salihoglu, and Ken Salem. "Accurate summary-based cardinality estimation through the lens of cardinality estimation graphs." Proceedings of the VLDB Endowment 15, no. 8 (April 2022): 1533–45. http://dx.doi.org/10.14778/3529337.3529339.

Abstract:
This paper is an experimental and analytical study of two classes of summary-based cardinality estimators that use statistics about input relations and small-size joins in the context of graph database management systems: (i) optimistic estimators that make uniformity and conditional independence assumptions; and (ii) the recent pessimistic estimators that use information theoretic linear programs (LPs). We begin by analyzing how optimistic estimators use pre-computed statistics to generate cardinality estimates. We show these estimators can be modeled as picking bottom-to-top paths in a cardinality estimation graph (CEG), which contains subqueries as nodes and edges whose weights are average degree statistics. We show that existing optimistic estimators have either undefined or fixed choices for picking CEG paths as their estimates and ignore alternative choices. Instead, we outline a space of optimistic estimators to make an estimate on CEGs, which subsumes existing estimators. We show, using an extensive empirical analysis, that effective paths depend on the structure of the queries. While on acyclic queries and queries with small-size cycles, using the maximum-weight path is effective to address the well-known underestimation problem, on queries with larger cycles these estimates tend to overestimate, which can be addressed by using minimum weight paths. We next show that optimistic estimators and seemingly disparate LP-based pessimistic estimators are in fact connected. Specifically, we show that CEGs can also model some recent pessimistic estimators. This connection allows us to adopt an optimization from pessimistic estimators to optimistic ones, and provide insights into the pessimistic estimators, such as showing that they have combinatorial solutions.
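
A worked micro-example of the CEG idea, with hypothetical statistics (this follows the abstract's definitions, not any specific estimator from the paper): for the two-hop query Q(a,b,c) = E(a,b) JOIN E(b,c) over an edge table E, an optimistic estimate starts from a subquery of known cardinality and multiplies average-degree edge weights along a bottom-to-top path.

```python
# Hypothetical statistics for an edge relation E(src, dst).
num_edges = 1_000_000      # |E|, cardinality of the bottom subquery E(a, b)
avg_out_degree = 4.0       # average number of E(b, c) tuples per distinct b
avg_in_degree = 2.5        # average number of E(a, b) tuples per distinct b

# Two bottom-to-top paths in the CEG reach the same top node Q(a, b, c):
est_via_out = num_edges * avg_out_degree  # extend E(a, b) by out-degree
est_via_in = num_edges * avg_in_degree    # extend E(b, c) by in-degree

# Existing estimators hard-code one such choice; the paper instead treats
# path selection (e.g., max-weight vs. min-weight) as a design space.
print(est_via_out, est_via_in)  # 4000000.0 2500000.0
```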
5

Lan, Hai, Zhifeng Bao, and Yuwei Peng. "A Survey on Advancing the DBMS Query Optimizer: Cardinality Estimation, Cost Model, and Plan Enumeration." Data Science and Engineering 6, no. 1 (January 15, 2021): 86–101. http://dx.doi.org/10.1007/s41019-020-00149-7.

Abstract:
The query optimizer is at the heart of database systems. The cost-based optimizer studied in this paper is adopted in almost all current database systems. A cost-based optimizer uses a plan enumeration algorithm to find (sub)plans, uses a cost model to obtain the cost of each plan, and selects the plan with the lowest cost. In the cost model, cardinality, the number of tuples flowing through an operator, plays a crucial role. Due to inaccuracy in cardinality estimation, errors in the cost model, and the huge plan space, the optimizer cannot find the optimal execution plan for a complex query in reasonable time. In this paper, we first study in depth the causes behind these limitations. Next, we review the techniques used to improve the quality of the three key components of the cost-based optimizer: cardinality estimation, the cost model, and plan enumeration. We also provide our insights on future directions for each of these aspects.
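
To make the survey's central quantity concrete, the textbook estimator propagates cardinalities under uniformity and independence assumptions, for example:

```latex
% Classic selectivity-based estimates (System R style assumptions):
|\sigma_{p_1 \wedge p_2}(R)| \approx |R| \cdot s_{p_1} \cdot s_{p_2},
\qquad
|R \bowtie_{R.k = S.k} S| \approx \frac{|R| \cdot |S|}{\max\!\bigl(d_R(k),\, d_S(k)\bigr)}
```

where s_p is the selectivity of predicate p and d_R(k) is the number of distinct k-values in R. Correlated predicates and skew violate exactly these assumptions, which is the root cause of the estimation errors the survey analyzes.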
6

Wang, Xiaoying, Changbo Qu, Weiyuan Wu, Jiannan Wang, and Qingqing Zhou. "Are we ready for learned cardinality estimation?" Proceedings of the VLDB Endowment 14, no. 9 (May 2021): 1640–54. http://dx.doi.org/10.14778/3461535.3461552.

Abstract:
Cardinality estimation is a fundamental but long unresolved problem in query optimization. Recently, multiple papers from different research groups consistently report that learned models have the potential to replace existing cardinality estimators. In this paper, we ask a forward-thinking question: Are we ready to deploy these learned cardinality models in production? Our study consists of three main parts. Firstly, we focus on the static environment (i.e., no data updates) and compare five new learned methods with nine traditional methods on four real-world datasets under a unified workload setting. The results show that learned models are indeed more accurate than traditional methods, but they often suffer from high training and inference costs. Secondly, we explore whether these learned models are ready for dynamic environments (i.e., frequent data updates). We find that they cannot catch up with fast data updates and return large errors for different reasons. For less frequent updates, they can perform better, but there is no clear winner among them. Thirdly, we take a deeper look into learned models and explore when they may go wrong. Our results show that the performance of learned methods can be greatly affected by changes in correlation, skewness, or domain size. More importantly, their behaviors are much harder to interpret and often unpredictable. Based on these findings, we identify two promising research directions (control the cost of learned models and make learned models trustworthy) and suggest a number of research opportunities. We hope that our study can guide researchers and practitioners to work together to eventually push learned cardinality estimators into real database systems.
7

Ahad, Rafiul, K. V. Bapa, and Dennis McLeod. "On estimating the cardinality of the projection of a database relation." ACM Transactions on Database Systems 14, no. 1 (March 1989): 28–40. http://dx.doi.org/10.1145/62032.62034.

8

Mukkamala, Ravi, and Sushil Jajodia. "A note on estimating the cardinality of the projection of a database relation." ACM Transactions on Database Systems 16, no. 3 (September 1991): 564–66. http://dx.doi.org/10.1145/111197.111218.

9

Li, Guoliang, Xuanhe Zhou, Ji Sun, Xiang Yu, Yue Han, Lianyuan Jin, Wenbo Li, Tianqing Wang, and Shifu Li. "openGauss." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3028–42. http://dx.doi.org/10.14778/3476311.3476380.

Abstract:
Although learning-based database optimization techniques have been studied in academia in recent years, they have not been widely deployed in commercial database systems. In this work, we build an autonomous database framework and integrate our proposed learning-based database techniques into openGauss, an open-source database system. We propose effective learning-based models to build learned optimizers (including learned query rewrite, learned cost/cardinality estimation, learned join order selection, and physical operator selection) and learned database advisors (including self-monitoring, self-diagnosis, self-configuration, and self-optimization). We devise an effective validation model to validate the effectiveness of learned models. We build effective training data management and model management platforms to easily deploy learned models. We have evaluated our techniques on real-world datasets, and the experimental results validated their effectiveness. We also share the lessons learned from deploying learning-based techniques.
10

Woltmann, Lucas, Dominik Olwig, Claudio Hartmann, Dirk Habich, and Wolfgang Lehner. "PostCENN." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2715–18. http://dx.doi.org/10.14778/3476311.3476327.

Abstract:
In this demo, we present PostCENN, an enhanced PostgreSQL database system with an end-to-end integration of machine learning (ML) models for cardinality estimation. In general, cardinality estimation is a topic with a long history in the database community. While traditional models like histograms are extensively used, recent works mainly focus on developing new approaches using ML models. However, traditional as well as ML models have their own advantages and disadvantages. With PostCENN, we aim to combine both to maximize their potential for cardinality estimation by introducing ML models as a novel means to increase the accuracy of cardinality estimation for certain parts of the database schema. To achieve this, we integrate ML models as first-class citizens in PostgreSQL with a well-defined end-to-end life cycle. This life cycle consists of creating ML models for different sub-parts of the database schema, triggering the training, using ML models within the query optimizer in a transparent way, and deleting ML models.
11

Rusu, Florin, Zixuan Zhuang, Mingxi Wu, and Chris Jermaine. "Workload-Driven Antijoin Cardinality Estimation." ACM Transactions on Database Systems 40, no. 3 (October 23, 2015): 1–41. http://dx.doi.org/10.1145/2818178.

12

Suciu, Dan. "Technical Perspective: Accurate Summary-based Cardinality Estimation Through the Lens of Cardinality Estimation Graphs." ACM SIGMOD Record 52, no. 1 (June 7, 2023): 93. http://dx.doi.org/10.1145/3604437.3604457.

Abstract:
Query engines are really good at choosing an efficient query plan. Users don't need to worry about how they write their query, since the optimizer makes all the right choices for executing the query, while taking into account all aspects of data, such as its size, the characteristics of the storage device, the distribution pattern, the availability of indexes, and so on. The query optimizer always makes the best choice, no matter how complex the query is, or how contrived it was written. Or, this is what we expect today from a modern query optimizer. Unfortunately, reality is not as nice.
13

Grigorev, Y. A., and O. Yu Pluzhnikova. "ESTIMATION OF ATTRIBUTE VALUES IN JOIN TABLES WHILE OPTIMIZING RELATIONAL DATABASE QUERY." Informatika i sistemy upravleniya, no. 1 (2021): 3–18. http://dx.doi.org/10.22250/isu.2021.67.3-18.

Abstract:
The article analyzes the problem of estimating the cardinality of join tables when calculating the cost of a relational database query plan. A new algorithm for estimating the number of distinct attribute values is proposed. The algorithm reduces inaccuracy in cardinality estimation, and its consistency is proved.
14

Jie, Xu, Lan Haoliang, Ding Wei, and Ju Ao. "Network Host Cardinality Estimation Based on Artificial Neural Network." Security and Communication Networks 2022 (March 24, 2022): 1–14. http://dx.doi.org/10.1155/2022/1258482.

Abstract:
Cardinality estimation plays an important role in network security. It is widely used to compute host cardinalities in high-speed networks. However, the cardinality estimation algorithm itself is easily disturbed by random factors and produces estimation errors. Eliminating the influence of these random factors is the key to further improving estimation accuracy. To solve this problem, this paper proposes an algorithm that uses an artificial neural network to predict the estimation bias and adjusts the cardinality estimate according to the prediction results. Building on existing algorithms, the novel algorithm reduces the interference of random factors on the estimation results and improves accuracy by adding steps for cardinality estimation sampling, artificial neural network training, and error prediction. The experimental results show that, using the cardinality estimation algorithm proposed in this paper, the average absolute deviation of cardinality estimates can be reduced by more than 20%.
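
A minimal sketch of the scheme the abstract describes: learn a regressor that predicts the bias of a raw cardinality estimate, then subtract the predicted bias. The tiny network, the synthetic data, and all names are illustrative assumptions, not the paper's architecture or features.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # small ANN stand-in

rng = np.random.default_rng(0)

# Synthetic training pairs: raw sketch estimates vs. true cardinalities
# (in practice collected by the sampling step the paper introduces).
true_card = rng.integers(100, 10_000, size=2_000).astype(float)
raw_est = true_card * (1 + rng.normal(0.05, 0.10, size=true_card.size))

X = np.log(raw_est).reshape(-1, 1)  # feature: magnitude of the raw estimate
y = raw_est - true_card             # target: the estimation bias

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                     random_state=0).fit(X, y)

def corrected(estimate: float) -> float:
    # Adjust a raw estimate by its predicted bias.
    return estimate - model.predict(np.log([[estimate]]))[0]
```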
15

Qian, Chen, Hoilun Ngan, Yunhao Liu, and Lionel M. Ni. "Cardinality Estimation for Large-Scale RFID Systems." IEEE Transactions on Parallel and Distributed Systems 22, no. 9 (September 2011): 1441–54. http://dx.doi.org/10.1109/tpds.2011.36.

16

Ré, Christopher, and Dan Suciu. "Understanding cardinality estimation using entropy maximization." ACM Transactions on Database Systems 37, no. 1 (February 2012): 1–31. http://dx.doi.org/10.1145/2109196.2109202.

17

Sakr, Sherif. "Algebra‐based XQuery cardinality estimation." International Journal of Web Information Systems 4, no. 1 (April 4, 2008): 6–47. http://dx.doi.org/10.1108/17440080810865611.

18

Shah-Mansouri, Vahid, and Vincent W. S. Wong. "Cardinality Estimation in RFID Systems with Multiple Readers." IEEE Transactions on Wireless Communications 10, no. 5 (May 2011): 1458–69. http://dx.doi.org/10.1109/twc.2011.030411.100390.

19

Varagnolo, Damiano, Gianluigi Pillonetto, and Luca Schenato. "Distributed Cardinality Estimation in Anonymous Networks." IEEE Transactions on Automatic Control 59, no. 3 (March 2014): 645–59. http://dx.doi.org/10.1109/tac.2013.2287113.

20

Liu, Jie, Wenqian Dong, Qingqing Zhou, and Dong Li. "Fauce." Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 1950–63. http://dx.doi.org/10.14778/3476249.3476254.

Abstract:
Cardinality estimation is a fundamental and critical problem in databases. Recently, many estimators based on deep learning have been proposed to solve this problem, and they have achieved promising results. However, these estimators struggle to provide accurate results for complex queries because they do not capture real inter-column and inter-table correlations. Furthermore, none of these estimators convey uncertainty information about their estimations. In this paper, we present a join cardinality estimator called Fauce. Fauce learns the correlations across all columns and all tables in the database. It also contains the uncertainty information of each estimation. Among all studied learned estimators, our results are promising: (1) Fauce is a lightweight estimator with 10× faster inference speed than the state-of-the-art estimator; (2) Fauce is robust to complex queries, providing 1.3×–6.7× smaller estimation errors for complex queries compared with the state-of-the-art estimator; (3) to the best of our knowledge, Fauce is the first estimator that incorporates uncertainty information for cardinality estimation into a deep learning model.
21

Potoniec, Jedrzej. "Mining Cardinality Restrictions in OWL." Foundations of Computing and Decision Sciences 45, no. 3 (September 1, 2020): 195–216. http://dx.doi.org/10.2478/fcds-2020-0011.

Abstract:
We present an approach to mine cardinality restriction axioms from an existing knowledge graph in order to extend an ontology describing the graph. We compare frequency estimation with kernel density estimation as approaches to obtain the cardinalities in restrictions. We also propose numerous strategies for filtering the obtained axioms in order to make them more useful to the ontology engineer. We report the results of an experimental evaluation on DBpedia 2016-10 and show that using kernel density estimation to compute the cardinalities in cardinality restrictions yields more robust results than using frequency estimation. We also show that while filtering is of limited usability for minimum cardinality restrictions, it is much more important for maximum cardinality restrictions. The presented findings can be used to extend existing ontology engineering tools in order to support ontology construction and enable more efficient creation of knowledge-intensive artificial intelligence systems.
22

Ngo, Hung Q. "RECENT RESULTS ON CARDINALITY ESTIMATION AND INFORMATION THEORETIC INEQUALITIES." Journal of Computer Science and Cybernetics 37, no. 3 (October 7, 2021): 223–38. http://dx.doi.org/10.15625/1813-9663/37/3/16129.

Abstract:
I would like to dedicate this little exposition to Prof. Phan Dinh Dieu, one of the giants and pioneers of Mathematics in Computer Science in Vietnam. In the past 15 years or so, new and exciting connections between fundamental problems in database theory and information theory have emerged. There are several angles one can take to describe this connection. This paper takes one such angle, influenced by the author's own bias and research results. In particular, we describe how the cardinality estimation problem -- a cornerstone problem for query optimizers -- is deeply connected to information theoretic inequalities. Furthermore, we explain how these inequalities can also be used to derive classic geometric inequalities such as the Loomis-Whitney inequality. One purpose of the article is to introduce the reader to these new connections, where theory and practice meet in a wonderful way. Another objective is to point the reader to a research area with many new open questions.
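
The flavor of this connection can be seen in the classic triangle query: an entropy argument (Shearer's lemma, the same inequality behind Loomis-Whitney) bounds the output cardinality, as in this standard derivation:

```latex
% Triangle query Q(a,b,c) = R(a,b) \bowtie S(b,c) \bowtie T(a,c).
% For the uniform distribution over the output tuples:
h(a,b,c) \le \tfrac{1}{2}\bigl(h(a,b) + h(b,c) + h(a,c)\bigr)
\quad\Longrightarrow\quad
|Q| \le \sqrt{|R|\,|S|\,|T|}
```

since h(a,b) <= log|R|, and similarly for S and T; LP-based pessimistic cardinality estimators compute bounds of exactly this kind.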
23

Zheng, Yuanqing, and Mo Li. "Towards More Efficient Cardinality Estimation for Large-Scale RFID Systems." IEEE/ACM Transactions on Networking 22, no. 6 (December 2014): 1886–96. http://dx.doi.org/10.1109/tnet.2013.2288352.

24

Hou, Yuxiao, Jiajue Ou, Yuanqing Zheng, and Mo Li. "PLACE: Physical Layer Cardinality Estimation for Large-Scale RFID Systems." IEEE/ACM Transactions on Networking 24, no. 5 (October 2016): 2702–14. http://dx.doi.org/10.1109/tnet.2015.2481999.

25

Park, Jonghoon, Cheoleun Moon, Ikjun Yeom, and Yusung Kim. "Cardinality estimation using collective interference for large-scale RFID systems." Journal of Network and Computer Applications 83 (April 2017): 101–10. http://dx.doi.org/10.1016/j.jnca.2017.01.037.

26

Woltmann, Lucas, Claudio Hartmann, Dirk Habich, and Wolfgang Lehner. "Aggregate-based Training Phase for ML-based Cardinality Estimation." Datenbank-Spektrum 22, no. 1 (January 10, 2022): 45–57. http://dx.doi.org/10.1007/s13222-021-00400-z.

Abstract:
Cardinality estimation is a fundamental task in database query processing and optimization. As shown in recent papers, machine learning (ML)-based approaches may deliver more accurate cardinality estimations than traditional approaches. However, a lot of training queries have to be executed during the model training phase to learn a data-dependent ML model, making the phase very time-consuming. Many of those training or example queries use the same base data, have the same query structure, and only differ in their selective predicates. To speed up the model training phase, our core idea is to determine a predicate-independent pre-aggregation of the base data and to execute the example queries over this pre-aggregated data. Based on this idea, we present a specific aggregate-based training phase for ML-based cardinality estimation approaches in this paper. As we show with different workloads in our evaluation, we achieve an average speedup of 90 with our aggregate-based training phase and thus outperform indexes.
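
The core idea, answering many structurally identical training queries from one predicate-independent pre-aggregation instead of from the base table, can be sketched as follows (hypothetical column names; the paper's pipeline is more general):

```python
import pandas as pd

# Toy base table; training queries filter on columns "a" and "b".
base = pd.DataFrame({"a": [1, 1, 2, 2, 2, 3], "b": [0, 1, 0, 0, 1, 1]})

# One-off, predicate-independent pre-aggregation over the predicate columns.
agg = base.groupby(["a", "b"]).size().reset_index(name="cnt")

def training_label(pred_a: int, pred_b: int) -> int:
    """Cardinality of: SELECT COUNT(*) FROM base WHERE a=pred_a AND b=pred_b,
    answered from the small aggregate instead of scanning the base data."""
    hit = agg[(agg["a"] == pred_a) & (agg["b"] == pred_b)]["cnt"]
    return int(hit.iloc[0]) if len(hit) else 0

assert training_label(2, 0) == 2  # matches COUNT(*) over the base table
```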
27

Shironoshita, E. Patrick, Michael T. Ryan, and Mansur R. Kabuka. "Cardinality estimation for the optimization of queries on ontologies." ACM SIGMOD Record 36, no. 2 (June 2007): 13–18. http://dx.doi.org/10.1145/1328854.1328856.

28

Papapetrou, Odysseas, Wolf Siberski, and Wolfgang Nejdl. "Cardinality estimation and dynamic length adaptation for Bloom filters." Distributed and Parallel Databases 28, no. 2-3 (September 2, 2010): 119–56. http://dx.doi.org/10.1007/s10619-010-7067-2.

29

Borovica-Gajic, Renata, Stratos Idreos, Anastasia Ailamaki, Marcin Zukowski, and Campbell Fraser. "Smooth Scan: robust access path selection without cardinality estimation." VLDB Journal 27, no. 4 (May 29, 2018): 521–45. http://dx.doi.org/10.1007/s00778-018-0507-8.

30

Oommen, B. J., and M. Thiyagarajah. "Benchmarking attribute cardinality maps for database systems using the TPC-D specifications." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 33, no. 6 (December 2003): 913–24. http://dx.doi.org/10.1109/tsmcb.2003.810909.

31

Li, Guoliang, Xuanhe Zhou, and Lei Cao. "Machine learning for databases." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3190–93. http://dx.doi.org/10.14778/3476311.3476405.

Abstract:
Machine learning techniques have been proposed to optimize databases. For example, traditional empirical database optimization techniques (e.g., cost estimation, join order selection, knob tuning, index and view advisors) cannot meet the high-performance requirements of large-scale database instances, various applications, and diversified users, especially on the cloud. Fortunately, machine learning based techniques can alleviate this problem by judiciously selecting optimization strategies. In this tutorial, we categorize database tasks into three typical problems that can be optimized by different machine learning models, including NP-hard problems (e.g., knob space exploration, index/view selection, partition-key recommendation for offline optimization; query rewrite, join order selection for online optimization), regression problems (e.g., cost/cardinality estimation, index/view benefit estimation, query latency prediction), and prediction problems (e.g., query workload prediction). We review existing machine learning based techniques to address these problems and outline research challenges.
32

Gong, Wei, Jiangchuan Liu, Kebin Liu, and Yunhao Liu. "Toward More Rigorous and Practical Cardinality Estimation for Large-Scale RFID Systems." IEEE/ACM Transactions on Networking 25, no. 3 (June 2017): 1347–58. http://dx.doi.org/10.1109/tnet.2016.2634551.

33

Mannino, Michael V., Paicheng Chu, and Thomas Sager. "Statistical profile estimation in database systems." ACM Computing Surveys 20, no. 3 (September 1988): 191–221. http://dx.doi.org/10.1145/62061.62063.

34

Lefons, Ezio, Antonio Merico, and Filippo Tangorra. "Analytical profile estimation in database systems." Information Systems 20, no. 1 (March 1995): 1–20. http://dx.doi.org/10.1016/0306-4379(95)00001-k.

35

Chen, Ling, Heqing Huang, and Donghui Chen. "Join cardinality estimation by combining operator-level deep neural networks." Information Sciences 546 (February 2021): 1047–62. http://dx.doi.org/10.1016/j.ins.2020.09.065.

36

Tian, Ruijie, Weishi Zhang, Fei Wang, Jingchun Zhou, Adi Alhudhaif, and Fayadh Alenezi. "Cardinality estimation of activity trajectory similarity queries using deep learning." Information Sciences 646 (October 2023): 119398. http://dx.doi.org/10.1016/j.ins.2023.119398.

37

Pilarski, Daniel. "LINGUISTIC SUMMARIZATION OF DATABASES WITH QUANTIRIUS: A REDUCTION ALGORITHM FOR GENERATED SUMMARIES." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 18, no. 03 (June 2010): 305–31. http://dx.doi.org/10.1142/s0218488510006556.

Abstract:
This paper deals with the issue of linguistic database summaries understood as linguistically quantified propositions. We show basic capabilities of Quantirius, our interactive system supporting the mining and the assessment of linguistic summaries in a database. The mining of linguistic summaries is realized by means of the concept of a protoform introduced by Zadeh. The assessment of validity degrees of summaries is done via Zadeh's fuzzy logic based calculus, extended by the use of cardinality patterns and triangular norms. The main part of the paper presents an idea of a further processing of the set of generated summaries. The proposed algorithm is composed of a reduction mechanism of summaries based on linguistic terms inclusion and a reduction of summaries by means of the overlapping unimodal linguistic terms. We then present an idea of the approval threshold computing for the truth degrees of summaries. Finally, we show that summaries generated by different protoforms can be joined in a master-detail-like relation, making an interesting structure of information contained in the database.
38

Xu, Zichen, Yi-Cheng Tu, and Xiaorui Wang. "Online Energy Estimation of Relational Operations in Database Systems." IEEE Transactions on Computers 64, no. 11 (November 1, 2015): 3223–36. http://dx.doi.org/10.1109/tc.2015.2394309.

39

Tomov, N., E. Dempster, M. H. Williams, A. Burger, H. Taylor, P. J. B. King, and P. Broughton. "Analytical response time estimation in parallel relational database systems." Parallel Computing 30, no. 2 (February 2004): 249–83. http://dx.doi.org/10.1016/j.parco.2003.11.003.

40

Ciaccia, Paolo, and Dario Maio. "Access cost estimation for physical database design." Data & Knowledge Engineering 11, no. 2 (October 1993): 125–50. http://dx.doi.org/10.1016/0169-023x(93)90002-7.

41

Slavov, Vasil, and Praveen Rao. "A gossip-based approach for Internet-scale cardinality estimation of XPath queries over distributed semistructured data." VLDB Journal 23, no. 1 (May 17, 2013): 51–76. http://dx.doi.org/10.1007/s00778-013-0314-1.

42

Nguyen, Chuyen T., Van-Dinh Nguyen, and Anh T. Pham. "Tag Cardinality Estimation Using Expectation-Maximization in ALOHA-Based RFID Systems With Capture Effect and Detection Error." IEEE Wireless Communications Letters 8, no. 2 (April 2019): 636–39. http://dx.doi.org/10.1109/lwc.2018.2890650.

43

Kadam, Sachin, Chaitanya S. Raut, Aman Deep Meena, and Gaurav S. Kasbekar. "Fast node cardinality estimation and cognitive MAC protocol design for heterogeneous machine-to-machine networks." Wireless Networks 26, no. 6 (March 18, 2020): 3929–52. http://dx.doi.org/10.1007/s11276-020-02291-6.

44

Boulos, Jihad, and Kinji Ono. "Cost estimation of user-defined methods in object-relational database systems." ACM SIGMOD Record 28, no. 3 (September 1999): 22–28. http://dx.doi.org/10.1145/333607.333610.

45

Carmeli, Nofar, and Markus Kröll. "On the Enumeration Complexity of Unions of Conjunctive Queries." ACM Transactions on Database Systems 46, no. 2 (June 2021): 1–41. http://dx.doi.org/10.1145/3450263.

Abstract:
We study the enumeration complexity of Unions of Conjunctive Queries (UCQs). We aim to identify the UCQs that are tractable in the sense that their answer tuples can be enumerated with a linear preprocessing phase and constant delay between successive tuples. It has been established that, in the absence of self-joins and under conventional complexity assumptions, the CQs that admit such an evaluation are precisely the free-connex ones. A union of tractable CQs is always tractable. We generalize the notion of free-connexity from CQs to UCQs, thus showing that some unions containing intractable CQs are, in fact, tractable. Interestingly, some unions consisting of only intractable CQs are tractable too. We show how to use the techniques presented in this article also in settings where the database contains cardinality dependencies (including functional dependencies and key constraints) or when the UCQs contain disequalities. The question of finding a full characterization of the tractability of UCQs remains open. Nevertheless, we prove that, for several classes of queries, free-connexity fully captures the tractable UCQs.
46

Ahluwalia, Rashpal. "Instructional Software for Reliability Estimation and Fault Tree Analysis." Industrial and Systems Engineering Review 1, no. 2 (November 1, 2013): 83–109. http://dx.doi.org/10.37266/iser.2013v1i2.pp83-109.

Abstract:
This paper describes a software tool to introduce fundamental concepts of reliability and fault tree analysis to engineering students. Students can fit common failure distributions to failure data. The data can be complete, singly censored, or multiply censored. The software computes distribution and goodness-of-fit parameters. Students can use the tool to validate hand calculations. Failure distributions and reliability values for various components can be identified and stored in a database. Various components and sub-systems can be used to build series-parallel or complex systems. The software tool can compute the reliability of complex state-independent and state-dependent systems. It can also compute the failure probability of the top node of a fault tree. The software was implemented in Visual Basic with SQL as the database, and it operates on the Windows 7 platform.
47

Beber, Moritz E., Mattia G. Gollub, Dana Mozaffari, Kevin M. Shebek, Avi I. Flamholz, Ron Milo, and Elad Noor. "eQuilibrator 3.0: a database solution for thermodynamic constant estimation." Nucleic Acids Research 50, no. D1 (November 29, 2021): D603—D609. http://dx.doi.org/10.1093/nar/gkab1106.

Abstract:
eQuilibrator (equilibrator.weizmann.ac.il) is a database of biochemical equilibrium constants and Gibbs free energies, originally designed as a web-based interface. While the website now counts around 1,000 distinct monthly users, its design could not accommodate larger compound databases and it lacked a scalable Application Programming Interface (API) for integration into other tools developed by the systems biology community. Here, we report on the recent updates to the database as well as the addition of a new Python-based interface to eQuilibrator that adds many new features such as a 100-fold larger compound database, the ability to add novel compounds, improvements in speed and memory use, and correction for Mg2+ ion concentrations. Moreover, the new interface can compute the covariance matrix of the uncertainty between estimates, for which we show the advantages and describe the application in metabolic modelling. We foresee that these improvements will make thermodynamic modelling more accessible and facilitate the integration of eQuilibrator into other software platforms.
48

Ergaliev, E. K., M. N. Madiyarov, N. M. Oskorbin, and L. L. Smolyakova. "Database Reconciliation in Applied Interval Analysis." Izvestiya of Altai State University, no. 1(123) (March 18, 2022): 89–94. http://dx.doi.org/10.14258/izvasu(2022)1-14.

Abstract:
The article deals with the problem of reconciling observation results, which arises when solving problems of interval analysis of a database. It is found that the values of the set of input variables and of the output variable are consistent if, in each observation, the graph of the sought dependence lies at interior points of the interval hyper-rectangle. In this case, it is proposed to use special solutions of interval systems of linear algebraic equations (ISLAU) to analyze the data of linear processes. However, in real and model conditions, this property of the database is not always fulfilled a priori. In such cases, it is proposed to apply the principle of robust estimation: inconsistent observations should either be excluded from the sample or adjusted. This paper presents the results of studying these reconciliation methods on an experimental database of model linear processes under conditions where the basic assumptions of interval estimation of dependencies hold. In addition, computational experiments were conducted to explore the possibility of increasing the accuracy of interval analysis through preliminary correction of observations, including the possibility of guaranteed estimation of the sought dependencies.
49

Rapczyński, Michał, Philipp Werner, Sebastian Handrich, and Ayoub Al-Hamadi. "A Baseline for Cross-Database 3D Human Pose Estimation." Sensors 21, no. 11 (May 28, 2021): 3769. http://dx.doi.org/10.3390/s21113769.

Abstract:
Vision-based 3D human pose estimation approaches are typically evaluated on datasets that are limited in diversity regarding many factors, e.g., subjects, poses, cameras, and lighting. However, for real-life applications, it would be desirable to create systems that work under arbitrary conditions (“in-the-wild”). To advance towards this goal, we investigated the commonly used datasets HumanEva-I, Human3.6M, and Panoptic Studio, discussed their biases (that is, their limitations in diversity), and illustrated them in cross-database experiments (for which we used a surrogate for roughly estimating in-the-wild performance). For this purpose, we first harmonized the differing skeleton joint definitions of the datasets, reducing the biases and systematic test errors in cross-database experiments. We further proposed a scale normalization method that significantly improved generalization across camera viewpoints, subjects, and datasets. In additional experiments, we investigated the effect of using more or less cameras, training with multiple datasets, applying a proposed anatomy-based pose validation step, and using OpenPose as the basis for the 3D pose estimation. The experimental results showed the usefulness of the joint harmonization, of the scale normalization, and of augmenting virtual cameras to significantly improve cross-database and in-database generalization. At the same time, the experiments showed that there were dataset biases that could not be compensated and call for new datasets covering more diversity. We discussed our results and promising directions for future work.
50

Sainati, Tristano, Fiona Zakaria, Giorgio Locatelli, P. Andrew Sleigh, and Barbara Evans. "Understanding the costs of urban sanitation: towards a standard costing model." Journal of Water, Sanitation and Hygiene for Development 10, no. 4 (October 27, 2020): 642–58. http://dx.doi.org/10.2166/washdev.2020.093.

Abstract:
There is a dearth of reliable cost data for urban sanitation. In the absence of high-quality global data, the full cost of sustainable implementation of urban sanitation remains uncertain. This paper proposes an approach for developing bespoke parametric cost estimation models for easy and reliable estimation of the costs of alternative sanitation technologies in a range of geographical contexts. A key requirement for the development of these models is the establishment of a large database of empirical information on the current costs of sanitation systems. Such a database does not currently exist. Two foundational tools are proposed. Firstly, a standard metric for reporting the costs of urban sanitation systems, total annualised cost per household. Secondly, a standardised approach to the collection of empirical cost data, the Novel Ball-Park Reporting Approach (NBPRA). Data from the NBPRA are presented for 87 individual sanitation components from 25 cities in 10 countries. Broad cost ranges for different archetypal systems have been estimated; these currently have high levels of uncertainty. Further work is proposed to collect additional data, build up the global database, and develop parametric cost estimation models with higher reliability.