Ready-made bibliography on the topic "Multi-Objective Query Optimization"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Multi-Objective Query Optimization".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read the annotation of the work online, provided the relevant details are available in the metadata.

Journal articles on the topic "Multi-Objective Query Optimization"

1

Trummer, Immanuel, and Christoph Koch. "Multi-objective parametric query optimization". Communications of the ACM 60, no. 10 (September 25, 2017): 81–89. http://dx.doi.org/10.1145/3068612.

2

Trummer, Immanuel, and Christoph Koch. "Multi-objective parametric query optimization". Proceedings of the VLDB Endowment 8, no. 3 (November 2014): 221–32. http://dx.doi.org/10.14778/2735508.2735512.

3

Trummer, Immanuel, and Christoph Koch. "Multi-Objective Parametric Query Optimization". ACM SIGMOD Record 45, no. 1 (June 2, 2016): 24–31. http://dx.doi.org/10.1145/2949741.2949748.

4

Trummer, Immanuel, and Christoph Koch. "Multi-objective parametric query optimization". VLDB Journal 26, no. 1 (August 18, 2016): 107–24. http://dx.doi.org/10.1007/s00778-016-0439-0.

5

Wang, Chenxiao, Zach Arani, Le Gruenwald, Laurent d'Orazio, and Eleazar Leal. "Re-optimization for Multi-objective Cloud Database Query Processing using Machine Learning". International Journal of Database Management Systems 13, no. 1 (February 28, 2021): 21–40. http://dx.doi.org/10.5121/ijdms.2021.13102.

Abstract:
In cloud environments, hardware configurations, data usage, and workload allocations are continuously changing. These changes make it difficult for the query optimizer of a cloud database management system (DBMS) to select an optimal query execution plan (QEP). In order to optimize a query with a more accurate cost estimation, performing query re-optimizations during query execution has been proposed in the literature. However, some of these re-optimizations may not provide any performance gain in terms of query response time or monetary costs, which are the two optimization objectives for cloud databases, and may also have negative impacts on performance due to their overheads. This raises the question of how to determine when a re-optimization is beneficial. In this paper, we present a technique called ReOptML that uses machine learning to enable effective re-optimizations. This technique executes a query in stages, employs a machine learning model to predict whether a query re-optimization is beneficial after a stage is executed, and invokes the query optimizer to perform the re-optimization automatically. Experiments comparing ReOptML with existing query re-optimization algorithms show that ReOptML improves query response time by 13% to 35% for skewed data and by 13% to 21% for uniform data, and improves the monetary cost paid to cloud service providers by 17% to 35% on skewed data.
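
To make the staged, ML-gated re-optimization idea from this abstract concrete, here is a minimal sketch. It is not the authors' ReOptML code: the stage tuples, the 3x mis-estimation threshold, and all helper names are invented for illustration.

```python
# Toy sketch of an ML-gated re-optimization loop in the spirit of ReOptML:
# run the query in stages and, after each stage, predict whether
# re-optimizing the remaining plan would pay for its overhead.

def predict_beneficial(est_rows, actual_rows):
    # Stand-in for the trained classifier: flag re-optimization when the
    # optimizer's cardinality estimate is off by more than 3x either way.
    ratio = actual_rows / max(est_rows, 1)
    return ratio > 3.0 or ratio < 1.0 / 3.0

def execute_with_reopt(stages):
    plan = "initial-plan"
    for name, est_rows, actual_rows in stages:
        print(f"ran stage {name!r} under {plan!r}")
        if predict_beneficial(est_rows, actual_rows):
            plan = f"replan-after-{name}"  # optimizer call with fresh stats
    return plan

# The scan's estimate was close, but the join was badly mis-estimated,
# so a re-optimization is triggered after the join stage.
print(execute_with_reopt([("scan", 1000, 1200), ("join", 500, 40000)]))
```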
6

Bansal, Rohit, Deepak Kumar, and Sushil Kumar. "Multi-objective Multi-Join Query Optimization using Modified Grey Wolf Optimization". International Journal of Advanced Intelligence Paradigms 17, no. 1/2 (2020): 1. http://dx.doi.org/10.1504/ijaip.2020.10019251.

7

Kumar, Deepak, Deepti Mehrotra, and Rohit Bansal. "Query Optimization in Crowd-Sourcing Using Multi-Objective Ant Lion Optimizer". International Journal of Information Technology and Web Engineering 14, no. 4 (October 2019): 50–63. http://dx.doi.org/10.4018/ijitwe.2019100103.

Abstract:
Nowadays, query optimization is a major concern for crowd-sourcing systems, which are developed to relieve users of the burden of dealing with the crowd. Initially, a user submits a structured query language (SQL) based query, and the system takes responsibility for compiling the query, generating an execution plan, and evaluating it in the crowd-sourcing marketplace. An input query may have several alternative execution plans, and the difference in crowd-sourcing cost between the worst and best plans can be significant. As in relational database systems, query optimization is therefore essential for crowd-sourcing systems, which provide declarative query interfaces. Here, a multi-objective query optimization approach using an ant lion optimizer was employed for declarative crowd-sourcing systems. It generates a query plan that strikes a better balance between latency and cost. The experimental outcome of the proposed methodology was validated on the UCI automobile and Amazon Mechanical Turk (AMT) datasets. The proposed methodology saves 30%-40% of the cost in crowd-sourcing query optimization compared to existing methods.
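
The latency-cost balance targeted here comes down to Pareto dominance between candidate query plans. A minimal sketch follows; the plan names and (cost, latency) figures are invented for illustration.

```python
# Minimal sketch: reduce candidate crowd-sourcing query plans to the
# Pareto-optimal set over (monetary cost, latency), both to be minimized.

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and differs.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

plans = {"p1": (10.0, 120.0),   # name -> (cost in $, latency in s)
         "p2": (6.0, 300.0),
         "p3": (12.0, 110.0),
         "p4": (7.0, 320.0)}    # dominated by p2
front = pareto_front(list(plans.values()))
print([name for name, v in plans.items() if v in front])  # ['p1', 'p2', 'p3']
```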
8

Kumar, Akshay, and T. V. Vijay Kumar. "A Multi-Objective Approach to Big Data View Materialization". International Journal of Knowledge and Systems Science 12, no. 2 (April 2021): 17–37. http://dx.doi.org/10.4018/ijkss.2021040102.

Abstract:
Big data comprises voluminous and heterogeneous data that has a limited level of trustworthiness. This data is used to generate valuable information that can be used for decision making. However, decision-making queries on Big data consume a lot of processing time, resulting in higher response times. For effective and efficient decision making, this response time needs to be reduced. View materialization has been used successfully to reduce query response time in the context of a data warehouse. The selection of such views is a complex problem vis-à-vis Big data and is the focus of this paper. In this paper, the Big data view selection problem is formulated as a bi-objective optimization problem, with the two objectives being the minimization of the query evaluation cost and the minimization of the update processing cost. Accordingly, a Big data view selection algorithm that selects Big data views for a given query workload, using the vector evaluated genetic algorithm, is proposed. The proposed algorithm aims to generate views that are able to reduce the response time of decision-making queries.
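
The bi-objective formulation is easy to state concretely: every candidate set of views is scored on the pair (query evaluation cost, update processing cost), and the genetic algorithm searches over these pairs. The sketch below uses invented cost figures purely to show the shape of such a fitness function.

```python
# Sketch of a bi-objective fitness for view selection. A candidate subset
# of materializable views is scored on (query cost, update cost); in a
# real system both would come from a cost model, not these toy constants.

QUERY_SAVING = {"v1": 40, "v2": 25, "v3": 60}  # workload saving if materialized
UPDATE_COST = {"v1": 10, "v2": 30, "v3": 5}    # maintenance cost if materialized
BASE_QUERY_COST = 200                          # workload cost with no views

def fitness(view_set):
    query_cost = BASE_QUERY_COST - sum(QUERY_SAVING[v] for v in view_set)
    update_cost = sum(UPDATE_COST[v] for v in view_set)
    return query_cost, update_cost             # both objectives minimized

for candidate in [set(), {"v1"}, {"v1", "v3"}, {"v1", "v2", "v3"}]:
    print(sorted(candidate), fitness(candidate))
```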
9

Kumar, Akshay, and T. V. Vijay Kumar. "Multi-Objective Big Data View Materialization Using NSGA-III". International Journal of Decision Support System Technology 14, no. 1 (January 1, 2022): 1–28. http://dx.doi.org/10.4018/ijdsst.311066.

Abstract:
Present-day applications process large amounts of data that are produced at a brisk rate and are heterogeneous, with varying levels of trustworthiness. This Big data largely consists of semi-structured and unstructured data, which needs to be processed in admissible time so that timely decisions are taken that benefit the organization and society. Such real-time processing requires Big data view materialization, which enables faster and timelier processing of decision-making queries. Several algorithms exist for Big data view materialization. These algorithms aim to select Big data views that minimize the total query processing cost for the query workload. In the literature, this problem has been articulated as a bi-objective optimization problem, which minimizes the query evaluation cost along with the update processing cost. This paper proposes to adapt the reference-point-based non-dominated sorting genetic algorithm to design an NSGA-III based Big data view selection algorithm (BDVSANSGA-III) that addresses this bi-objective Big data view selection problem. Experimental results revealed that the proposed BDVSANSGA-III was able to compute diverse non-dominated Big data views and performed better than the existing algorithms.
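
The ranking step at the heart of NSGA-style algorithms, non-dominated sorting, can be sketched in a few lines for the two objectives used here. The points below are invented; BDVSANSGA-III itself adds reference-point-based selection on top of this ranking.

```python
# Illustrative non-dominated sorting over (query cost, update cost) pairs:
# peel off successive Pareto fronts, which NSGA-style algorithms use to
# rank a population before selection. Toy data only.

def non_dominated_fronts(points):
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                            for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

population = [(120, 40), (100, 70), (150, 20), (130, 50), (110, 90)]
for rank, front in enumerate(non_dominated_fronts(population)):
    print(f"front {rank}: {sorted(front)}")
```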
10

Sanchez-Gomez, Jesus M., Miguel A. Vega-Rodríguez, and Carlos J. Pérez. "Sentiment-oriented query-focused text summarization addressed with a multi-objective optimization approach". Applied Soft Computing 113 (December 2021): 107915. http://dx.doi.org/10.1016/j.asoc.2021.107915.


Doctoral dissertations on the topic "Multi-Objective Query Optimization"

1

Garg, Vishesh. "Towards Designing PCM-Conscious Database Systems". Thesis, 2016. https://etd.iisc.ac.in/handle/2005/4889.

Abstract:
Phase Change Memory (PCM) is a recently developed non-volatile memory technology that is expected to provide an attractive combination of the best features of conventional disks (persistence, capacity) and of DRAM (access speed). For instance, it is about 2 to 4 times denser than DRAM, while providing a DRAM-comparable read latency. On the other hand, it consumes much less energy than magnetic hard disks while providing a substantially smaller write latency. Due to this suite of desirable features, PCM technology is expected to play a prominent role in the next generation of computing systems, either augmenting or replacing current components in the memory hierarchy. A limitation of PCM, however, is that there is a significant difference between its read and write behaviors in terms of energy, latency and bandwidth. A PCM write, for example, consumes 6 times more energy than a read. Further, PCM has limited write endurance, since a memory cell becomes unusable after the number of writes to the cell exceeds a threshold determined by the underlying glass material. Database systems, by virtue of dealing with enormous amounts of data, are expected to be a prime beneficiary of this new technology. Accordingly, recent research has investigated how database engines may be redesigned to suit DBMS deployments on PCM, covering areas such as indexing techniques, logging mechanisms and query processing algorithms. Prior database research has primarily focused on computing architectures wherein either a) PCM completely replaces the DRAM memory; or b) PCM and DRAM co-exist side-by-side and are independently controlled by the software. However, a third option that is gaining favor in the architecture community is one where the PCM is augmented with a small hardware-managed DRAM buffer. In this model, which we refer to as DRAM HARD, the address space of the application maps to PCM, and the DRAM buffer can simply be visualized as yet another level of the existing cache hierarchy. With most of the query processing research being preoccupied with the first two models, this third model has remained largely ignored. Moreover, even in this limited literature, the emphasis has been restricted to exploring execution-time strategies; the compile-time plan selection process itself has been left unaltered. In this thesis, we propose minimalist reworkings of current implementations of database operators, tuned to the DRAM HARD model, to make them PCM-conscious. We also propose novel algorithms for compile-time query plan selection, thereby taking a holistic approach to introducing PCM-compliance in present-day database systems. Specifically, our contributions are two-fold, as outlined below. First, we address the pragmatic goal of minimally altering current implementations of database operators to make them PCM-conscious, the objective being to facilitate an easy transition to the new technology. Specifically, we target the implementations of the "workhorse" database operators: sort, hash join and group-by. Our customized algorithms and techniques for each of these operators are designed to significantly reduce the number of writes while simultaneously saving on execution times. For instance, in the case of the sort operator, we perform an in-place partitioning of the input data into DRAM-sized chunks so that the subsequent sorting of these chunks can finish inside the DRAM, consequently avoiding both intermediate writes and their associated latency overheads.
Second, we redesign the query optimizer to suit the new environment of PCM. Each of the new operator implementations is accompanied by simple but effective write estimators that make these implementations suitable for incorporation in the optimizer. Current optimizers typically choose plans using a latency-based costing mechanism, assigning equal costs to both read and write memory operations. The asymmetric read-write nature of PCM implies that these models are no longer accurate. We therefore revise the cost models to make them cognizant of this asymmetry by accounting for the additional latency during writes. Moreover, since the number of writes is critical to the lifespan of a PCM device, a new metric of write cost is introduced into the optimizer's plan selection process, with its value being determined using the above estimators. Consequently, the query optimizer needs to select plans that simultaneously minimize query writes and response times. We propose two solutions for handling this dual-objective optimization problem. The first approach is a heuristic propagation algorithm that extends the widely used dynamic programming plan propagation procedure to drastically reduce the exponential search space of candidate plans. The algorithm uses the write costs of sub-plans at each of the operator nodes to decide which of them can be selectively pruned from further consideration. The second approach maps this optimization problem to the linear multiple-choice knapsack problem, and uses its greedy solution to return the final plan for execution. This plan is known to be optimal within the set of non-interesting-order plans in a single join order search space. Moreover, it may contain a weighted execution of two algorithms for one of the operator nodes in the plan tree. Therefore, overall, while the greedy algorithm comes with optimality guarantees, the heuristic approach is advantageous in terms of easier implementation. The experimentation for our proposed techniques is conducted on Multi2sim, a state-of-the-art cycle-accurate simulator. Since it does not have native support for PCM, we made a major extension to its existing memory module to model a PCM device. Specifically, we added separate data tracking functionality for the DRAM- and PCM-resident data, to implement the commonly used read-before-write technique for PCM write reduction. Similarly, modifications were made to Multi2sim's timing subsystem to account for the asymmetric read-write latencies of PCM. A new DRAM replacement policy called N-Chance, which has been shown to work well for PCM-based hardware, was also introduced. Our new techniques are evaluated on end-to-end TPC-H benchmark queries with regard to the following metrics: number of writes, response times and wear distribution. The experimental results indicate that, in comparison to their PCM-oblivious counterparts, the PCM-conscious operators collectively reduce the number of writes by a factor of 2 to 3, while concurrently improving the query response times by about 20% to 30%. When combined with the appropriate plan choices, the improvements are even higher. In the case of Query 19, for instance, we obtained a 64% savings in writes, while the response time came down to two-thirds of the original. In essence, our algorithms provide both short-term and long-term benefits. These outcomes augur well for database engines that wish to leverage the impending transition to PCM-based computing.
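
The heuristic propagation idea, keeping only sub-plans that are not dominated on the (latency, writes) pair while the dynamic-programming enumeration proceeds, can be sketched briefly. The operator alternatives and cost pairs below are invented for illustration.

```python
# Sketch of bi-objective sub-plan pruning during plan enumeration: at each
# operator node, keep only candidates whose (latency, writes) pair is not
# dominated by another candidate. Costs here are invented.

def prune(candidates):
    # candidates: list of (latency, writes, plan_label) tuples.
    return [c for c in candidates
            if not any(o[0] <= c[0] and o[1] <= c[1] and o[:2] != c[:2]
                       for o in candidates)]

node_candidates = [
    (100, 50, "hash-join"),
    (120, 20, "sort-merge-join"),
    (130, 60, "nested-loop-join"),   # dominated by hash-join
]
print(prune(node_candidates))        # the nested-loop alternative is pruned
```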
2

Sabih, Rafia. "Balancing Money and Time for OLAP Queries on Cloud Databases". Thesis, 2016. http://etd.iisc.ac.in/handle/2005/2931.

Abstract:
Enterprise Database Management Systems (DBMSs) have to contend with resource-intensive and time-varying workloads, making them well-suited candidates for migration to cloud platforms: specifically, they can dynamically leverage the resource elasticity while retaining affordability through the pay-as-you-go rental interface. The current design of database engine components lays emphasis on maximizing computing efficiency, but to fully capitalize on the cloud's benefits, the outlays of these computations also need to be factored into the planning exercise. In this thesis, we investigate this contemporary problem in the context of industrial-strength deployments of relational database systems on real-world cloud platforms. Specifically, we consider how the traditional metric used to compare query execution plans, namely response-time, can be augmented to incorporate monetary costs in the decision process. The challenge here is that execution-time and monetary costs are adversarial metrics, with a decrease in one entailing a rise in the other. For instance, a Virtual Machine (VM) with rich physical resources (RAM, cores, etc.) decreases the query response-time, but is expensive with regard to rental rates. In a nutshell, there is a tradeoff between money and time, and our goal therefore is to identify the VM that offers the best tradeoff between these two competing considerations. In our study, we profile the behavior of money versus time for a given query, and define the best tradeoff as the "knee", that is, the location on the profile with the minimum Euclidean distance from the origin. To study the performance of industrial-strength database engines on real-world cloud infrastructure, we have deployed a commercial DBMS on Google cloud services. On this platform, we have carried out extensive experimentation with the TPC-DS decision-support benchmark, an industry-wide standard for evaluating database system performance. Our experiments demonstrate that the choice of VM for hosting the database server is a crucial decision, because: (i) variation in time and money across VMs is significant for a given query, and (ii) no one VM offers the best money-time tradeoff across all queries. To efficiently identify the VM with the best tradeoff from a large suite of available configurations, we propose a technique to characterize the money-time profile for a given query. The core of this technique is a VM pruning mechanism that exploits the partial ordering of the VMs on their resources. It processes the minimal and maximal VMs of this poset for estimated query response-time. If the response-times on these extreme VMs are similar, then all the VMs sandwiched between them are pruned from further consideration. Otherwise, the already processed VMs are set aside, and the minimal and maximal VMs of the remaining unprocessed VMs are evaluated for their response-times. Finally, the knee VM is identified from the processed VMs as the one with the minimum Euclidean distance from the origin in the money-time space. We theoretically prove that this technique always identifies the knee VM; further, if it is acceptable to find a "near-optimal" knee by providing a relaxation-factor on the response-time distance from the optimal knee, then the technique is also capable of finding a satisfactory knee more efficiently under these relaxed conditions.
We propose two flavors of this approach: the first prunes the VMs using complete plan information received from the database engine API, and is named Plan-based Identification of Knee (PIK). To further increase the efficiency of identifying the knee VM, we also propose a sub-plan based pruning algorithm called Sub-Plan-based Identification of Knee (SPIK), which requires modifications in the query optimizer. We have evaluated PIK on a commercial system and found that it often requires processing only 20% of the total VMs. The efficiency of the algorithm is further increased significantly by using a 10-20% relaxation in response-time. For evaluating SPIK, we prototyped it on an open-source engine, PostgreSQL 9.3, and also implemented it as a Java wrapper program with the commercial engine. Experimentally, the processing done by SPIK is found to be only 40% of that of the PIK approach. Therefore, from an overall perspective, this thesis facilitates the desired migration of enterprise databases to cloud platforms, by identifying the VM(s) that offer competitive tradeoffs between money and time for a given query.
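
The "knee" definition used in the thesis, the point on the money-time profile closest to the origin, takes only a few lines to compute once the profile is available. The VM names and normalized figures below are invented for illustration.

```python
# Sketch of knee identification on a money-time profile: choose the VM
# whose (money, time) point, normalized to [0, 1], lies at the smallest
# Euclidean distance from the origin. Figures are invented.
import math

profile = {"vm-small": (0.20, 0.95),   # cheap but slow
           "vm-medium": (0.45, 0.50),  # balanced
           "vm-large": (0.90, 0.30)}   # fast but expensive

def knee(profile):
    return min(profile, key=lambda vm: math.hypot(*profile[vm]))

print(knee(profile))  # 'vm-medium' offers the best money-time tradeoff here
```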

Book chapters on the topic "Multi-Objective Query Optimization"

1

Yu, Qi, and Athman Bouguettaya. "Multi-objective Service Query Optimization". In Foundations for Efficient Web Service Selection, 61–85. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-1-4419-0314-3_4.

2

Cecchini, Rocío L., Carlos M. Lorenzetti, and Ana G. Maguitman. "Multi-objective Query Optimization Using Topic Ontologies". In Flexible Query Answering Systems, 145–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04957-6_13.

3

Singh, Vikram. "Multi-objective Parametric Query Optimization for Distributed Database Systems". In Advances in Intelligent Systems and Computing, 219–33. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0448-3_18.

4

Wang, Chenxiao, Le Gruenwald, Laurent d'Orazio, and Eleazar Leal. "Cloud Query Processing with Reinforcement Learning-Based Multi-objective Re-optimization". In Model and Data Engineering, 141–55. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78428-7_12.

5

Wang, Chenxiao, Le Gruenwald, and Laurent d'Orazio. "SLA-Aware Cloud Query Processing with Reinforcement Learning-Based Multi-objective Re-optimization". In Big Data Analytics and Knowledge Discovery, 249–55. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12670-3_22.


Conference papers on the topic "Multi-Objective Query Optimization"

1

Yu, Hang, and Lester Litchfield. "Query Classification with Multi-objective Backoff Optimization". In SIGIR '20: The 43rd International ACM SIGIR conference on research and development in Information Retrieval. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3397271.3401320.

2

Georgoulakis Misegiannis, Michail, Vasiliki (Verena) Kantere, and Laurent d'Orazio. "Multi-objective query optimization in Spark SQL". In IDEAS'22: International Database Engineered Applications Symposium. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3548785.3548800.

3

Konstantinidis, Andreas, Demetrios Zeinalipour-Yazti, Panayiotis Andreou, and George Samaras. "Multi-objective Query Optimization in Smartphone Social Networks". In 2011 12th IEEE International Conference on Mobile Data Management (MDM). IEEE, 2011. http://dx.doi.org/10.1109/mdm.2011.37.

4

Rituraj, Rituraj, and Annamaria R. Varkonyi Koczy. "Advantages of Anytime Algorithm for Multi-Objective Query Optimization". In 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI). IEEE, 2020. http://dx.doi.org/10.1109/sami48414.2020.9108713.

5

Trummer, Immanuel, and Christoph Koch. "A Fast Randomized Algorithm for Multi-Objective Query Optimization". In SIGMOD/PODS'16: International Conference on Management of Data. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2882903.2882927.

6

Trummer, Immanuel, and Christoph Koch. "An Incremental Anytime Algorithm for Multi-Objective Query Optimization". In SIGMOD/PODS'15: International Conference on Management of Data. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2723372.2746484.

7

Zhang, Ting. "Mass Data Query Optimization Based on Multi-objective Co-evolutionary Algorithm". In 2017 2nd International Conference on Automation, Mechanical Control and Computational Engineering (AMCCE 2017). Paris, France: Atlantis Press, 2017. http://dx.doi.org/10.2991/amcce-17.2017.168.

8

Zhao, Hang, Qinghua Deng, Wenting Huang, and Zhenping Feng. "Thermodynamic and Economic Analysis and Multi-Objective Optimization of Supercritical CO2 Brayton Cycles". In ASME Turbo Expo 2015: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/gt2015-42631.

Abstract:
Supercritical CO2 Brayton cycles (SCO2BC) offer the potential of better economy and higher practicability due to their high power conversion efficiency, moderate turbine inlet temperature, and compact size as compared with cycles using some traditional working fluids. In this paper, SCO2BC variants including the SCO2 single-recuperated Brayton cycle (RBC) and the recompression recuperated Brayton cycle (RRBC) are considered, and flexible thermodynamic and economic modeling methodologies are presented. The influences of the key cycle parameters on the thermodynamic performance of the SCO2BC are studied, and comparative analyses of RBC and RRBC are conducted. Based on the thermodynamic and economic models and the given conditions, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is used for the Pareto-based multi-objective optimization of the RRBC, with the maximum exergy efficiency and the lowest cost per unit of power ($/kW) as its objectives. In addition, an Artificial Neural Network (ANN) is chosen to establish the relationship between the input, the output, and the key cycle parameters, which accelerates the parameter query process. It is observed in the thermodynamic analysis that cycle parameters such as heat source temperature, turbine inlet temperature, cycle pressure ratio, and the pinch temperature difference of the heat exchangers have significant effects on the cycle exergy efficiency, and that the exergy destruction in the heat exchangers is the main reason why the exergy efficiency of RRBC is higher than that of RBC under the same cycle conditions. Comparing the two kinds of SCO2BC, RBC has a cost advantage from an economic perspective, while RRBC has much better thermodynamic performance and can rectify the temperature pinching problem that exists in RBC. Therefore, RRBC is recommended in this paper. Furthermore, the Pareto front curve between the cycle cost per cycle power (CWR) and the cycle exergy efficiency is obtained by multi-objective optimization, which indicates that there is a conflicting relation between them. The optimization results provide an optimum trade-off curve enabling cycle designers to choose their desired combination of efficiency and cost. Moreover, the optimum thermodynamic parameters of RRBC can be predicted with good accuracy using the ANN, which helps users find the SCO2BC parameters quickly and accurately.
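
Since this paper maximizes exergy efficiency while minimizing CWR, its Pareto filter mixes objective directions; a small sketch of that filter follows, with invented design points.

```python
# Illustrative Pareto filter with mixed objective directions: maximize
# exergy efficiency, minimize cost per power (CWR). Design points invented.

designs = [(0.62, 1800.0), (0.66, 2100.0),
           (0.60, 2000.0), (0.70, 2600.0)]  # (exergy efficiency, CWR $/kW)

def dominates(a, b):
    # Higher efficiency is better; lower CWR is better.
    return a[0] >= b[0] and a[1] <= b[1] and a != b

front = [d for d in designs if not any(dominates(o, d) for o in designs)]
print(sorted(front))  # (0.60, 2000.0) is dominated by (0.62, 1800.0)
```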
