Ready-made bibliography on the topic "Hash Join Operator"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the lists of current articles, books, theses, conference abstracts, and other scholarly sources on the topic "Hash Join Operator".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.

Journal articles on the topic "Hash Join Operator"

1

Oguz, Damla, Shaoyi Yin, Belgin Ergenç, Abdelkader Hameurlain, and Oguz Dikenelli. "Extended Adaptive Join Operator with Bind-Bloom Join for Federated SPARQL Queries". International Journal of Data Warehousing and Mining 13, no. 3 (July 2017): 47–72. http://dx.doi.org/10.4018/ijdwm.2017070103.

Full text source
Abstract:
The goal of query optimization in query federation over linked data is to minimize both the response time and the completion time, and communication time has the highest impact on both. Static query optimization can end up with inefficient execution plans due to unpredictable data arrival rates and missing statistics. This study extends the adaptive join operator, which always begins with a symmetric hash join to minimize the response time and can switch the join method to a bind join to minimize the completion time. The authors extend the adaptive join operator with a bind-bloom join to further reduce the communication time and, consequently, to minimize the completion time. They compare the new operator with the symmetric hash join, bind join, bind-bloom join, and adaptive join operator with respect to the response time and the completion time. Performance evaluation shows that the extended operator provides optimal response time and further reduces the completion time. Moreover, it can adapt to different data arrival rates.
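The symmetric (pipelined) hash join that the adaptive operator starts with can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `interleave` helper, which simulates tuples arriving concurrently from two remote sources, is our own assumption:

```python
import itertools
from collections import defaultdict

def interleave(left, right):
    """Alternate tuples from both inputs, tagging each with its side
    (simulates data arriving concurrently from two remote endpoints)."""
    for l, r in itertools.zip_longest(left, right):
        if l is not None:
            yield ("L", l)
        if r is not None:
            yield ("R", r)

def symmetric_hash_join(left, right, key):
    """Pipelined symmetric hash join: each arriving tuple is inserted into
    its side's hash table and immediately probed against the other side's,
    so results are produced incrementally (minimizing response time)."""
    tables = {"L": defaultdict(list), "R": defaultdict(list)}
    for side, tup in interleave(left, right):
        other = "R" if side == "L" else "L"
        k = key(tup)
        tables[side][k].append(tup)
        for match in tables[other][k]:
            yield (tup, match) if side == "L" else (match, tup)
```

A bind join would instead ship the bindings of one side to the other endpoint; the bloom variant first ships a Bloom filter of the join keys to cut communication further.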
APA, Harvard, Vancouver, ISO, and other styles
2

Romanous, Bashar, Skyler Windh, Ildar Absalyamov, Prerna Budhkar, Robert Halstead, Walid Najjar, and Vassilis Tsotras. "Efficient local locking for massively multithreaded in-memory hash-based operators". VLDB Journal 30, no. 3 (February 11, 2021): 333–59. http://dx.doi.org/10.1007/s00778-020-00642-5.

Full text source
Abstract:
The join and group-by aggregation are two memory-intensive operators that affect the performance of relational databases. Hashing is a common approach used to implement both operators. Recent paradigm shifts in multi-core processor architectures have reinvigorated research into how the join and group-by aggregation operators can leverage these advances. However, the poor spatial locality of the hashing approach has hindered performance on multi-core processor architectures, which rely on large cache hierarchies for latency mitigation. Multithreaded architectures can better cope with poor spatial locality by masking memory latency with many outstanding requests. Nevertheless, the number of parallel threads, even in the most advanced multithreaded processors, such as UltraSPARC, is not enough to fully cover the main memory access latency. In this paper, we explore the hardware re-configurability of FPGAs to enable deeper execution pipelines that maintain hundreds (instead of tens) of outstanding memory requests across four FPGAs, drastically increasing concurrency and throughput. We present two end-to-end in-memory accelerators for the join and group-by aggregation operators using FPGAs. Both accelerators use massive multithreading to mask long memory delays of traversing linked-list data structures, while concurrently managing hundreds of thread states across four FPGAs locally. We explore how content addressable memories can be intermixed within our multithreaded designs to act as a synchronizing cache, which enforces locks and merges jobs together before they are written to memory. Throughput results for our hash-join operator accelerator show a speedup between 2× and 3.4× over the best multi-core approaches with comparable memory bandwidths on uniform and skewed datasets. The accelerator for the hash-based group-by aggregation operator demonstrates that leveraging CAMs achieves an average speedup of 3.3×, with a best case of 9.4×, in terms of throughput over CPU implementations across five types of data distributions.
APA, Harvard, Vancouver, ISO, and other styles
3

Sabek, Ibrahim, and Tim Kraska. "The Case for Learned In-Memory Joins". Proceedings of the VLDB Endowment 16, no. 7 (March 2023): 1749–62. http://dx.doi.org/10.14778/3587136.3587148.

Full text source
Abstract:
In-memory join is an essential operator in any database engine and has been extensively investigated in the database literature. In this paper, we study whether exploiting CDF-based learned models to boost join performance is practical. To the best of our knowledge, we are the first to fill this gap. We investigate the usage of CDF-based models and learned indexes (e.g., the Recursive Model Index (RMI) and RadixSpline) in the three join categories: indexed nested loop join (INLJ), sort-based joins (SJ), and hash-based joins (HJ). Our study shows that there is room to improve the performance of all three join categories through our proposed optimized learned variants. Our experimental analysis shows that these optimized learned variants outperform state-of-the-art techniques in many scenarios and with different datasets.
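One way to read the CDF idea: if a model approximates the key distribution, it can place keys directly into ordered partitions, replacing hashing or full sorting. A toy sketch, with a linear CDF standing in for a learned model such as RMI (the function names and the uniform-keys assumption are ours):

```python
def cdf_bucket(key, lo, hi, n_buckets):
    """Map a key to a bucket via a linear CDF approximation (assumes keys
    roughly uniform in [lo, hi]); a real learned model such as an RMI
    would replace this with a fitted function."""
    frac = (key - lo) / (hi - lo) if hi > lo else 0.0
    return min(int(frac * n_buckets), n_buckets - 1)

def learned_partition_join(R, S, n_buckets=4):
    """Partition both inputs by predicted CDF position, then join matching
    partitions; co-partitioning means only same-bucket pairs are compared."""
    keys = [k for k, _ in R] + [k for k, _ in S]
    lo, hi = min(keys), max(keys)
    parts_r = [[] for _ in range(n_buckets)]
    parts_s = [[] for _ in range(n_buckets)]
    for k, v in R:
        parts_r[cdf_bucket(k, lo, hi, n_buckets)].append((k, v))
    for k, v in S:
        parts_s[cdf_bucket(k, lo, hi, n_buckets)].append((k, v))
    out = []
    for pr, ps in zip(parts_r, parts_s):
        ht = {}                      # within a bucket, a plain hash join
        for k, v in pr:
            ht.setdefault(k, []).append(v)
        for k, v in ps:
            for v2 in ht.get(k, []):
                out.append((k, v2, v))
    return out
```

The accuracy of the model determines how evenly the buckets fill; the paper's point is that with a good CDF model this placement can beat both hashing and comparison-based sorting.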
APA, Harvard, Vancouver, ISO, and other styles
4

Jahangiri, Shiva, Michael J. Carey, and Johann-Christoph Freytag. "Design trade-offs for a robust dynamic hybrid hash join". Proceedings of the VLDB Endowment 15, no. 10 (June 2022): 2257–69. http://dx.doi.org/10.14778/3547305.3547327.

Full text source
Abstract:
Hybrid Hash Join (HHJ) has proven to be one of the most efficient and widely used join algorithms. While HHJ's performance depends largely on accurate statistics and information about the input relations, it may not always be practical or possible for a system to have such information available, and HHJ's design depends on many details to perform well. This paper is an experimental and analytical study of the trade-offs in designing a robust and dynamic HHJ operator. We revisit the design and optimization techniques suggested by previous studies through extensive experiments, comparing them with other algorithms designed by us or used in related studies. We explore the impact of the number of partitions on HHJ's performance and propose a new lower bound for the number of partitions. We design and evaluate different partition insertion techniques to maximize memory utilization with the least CPU cost. Additionally, we consider a comprehensive set of algorithms for dynamically selecting a partition to spill and compare the results against previously published studies. We then present and evaluate two alternative growth policies for spilled partitions. These algorithms have been implemented in the context of Apache AsterixDB and evaluated under different scenarios such as variable record sizes, different distributions of join attributes, and different storage types, including HDD, SSD, and Amazon Elastic Block Store (Amazon EBS).
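The HHJ mechanics under discussion (partitioning the build side, spilling a chosen partition when memory overflows, finishing spilled partitions in a second pass) can be sketched as follows. The "largest partition" spill policy and the tuple-count memory model are simplifications of ours, not AsterixDB's:

```python
from collections import defaultdict

def hybrid_hash_join(build, probe, key, n_partitions=4, memory_budget=3):
    """Hybrid hash join sketch: build tuples are hashed into partitions;
    when the in-memory tuple count exceeds the budget, one partition is
    'spilled' (moved to a dict standing in for disk). Probe tuples that hit
    spilled partitions are deferred and joined in a second pass."""
    in_mem = defaultdict(list)   # partition id -> build tuples in memory
    spilled = {}                 # partition id -> build tuples on "disk"
    count = 0
    for tup in build:
        p = hash(key(tup)) % n_partitions
        if p in spilled:
            spilled[p].append(tup)
            continue
        in_mem[p].append(tup)
        count += 1
        if count > memory_budget:
            victim = max(in_mem, key=lambda q: len(in_mem[q]))  # spill policy
            spilled[victim] = in_mem.pop(victim)
            count -= len(spilled[victim])
    results, deferred = [], defaultdict(list)
    for tup in probe:
        p = hash(key(tup)) % n_partitions
        if p in spilled:
            deferred[p].append(tup)          # handle in the second pass
        else:
            for b in in_mem[p]:
                if key(b) == key(tup):
                    results.append((b, tup))
    for p, probe_part in deferred.items():   # second pass: spilled pairs
        ht = defaultdict(list)
        for b in spilled[p]:
            ht[key(b)].append(b)
        for tup in probe_part:
            results.extend((b, tup) for b in ht[key(tup)])
    return results
```

The paper's design questions live exactly in the knobs above: how many partitions to create, which partition to evict, and how spilled partitions are allowed to grow.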
APA, Harvard, Vancouver, ISO, and other styles
5

Broneske, David, Anna Drewes, Bala Gurumurthy, Imad Hajjar, Thilo Pionteck, and Gunter Saake. "In-Depth Analysis of OLAP Query Performance on Heterogeneous Hardware". Datenbank-Spektrum 21, no. 2 (July 2021): 133–43. http://dx.doi.org/10.1007/s13222-021-00384-w.

Full text source
Abstract:
Classical database systems are now facing the challenge of processing high-volume data feeds at unprecedented rates as efficiently as possible while also minimizing power consumption. Since CPU-only machines hit their limits, co-processors like GPUs and FPGAs are investigated by database system designers for their distinct capabilities. As a result, database systems over heterogeneous processing architectures are on the rise. In order to better understand their potentials and limitations, in-depth performance analyses are vital. This paper provides interesting performance data by benchmarking a portable operator set for column-based systems on CPU, GPU, and FPGA – all available processing devices within the same system. We consider TPC‑H query Q6 and additionally a hash join to profile the execution across the systems. We show that system memory access and/or buffer management remains the main bottleneck for device integration, and that architecture-specific execution engines and operators offer significantly higher performance.
APA, Harvard, Vancouver, ISO, and other styles
6

Markl, Volker. "Making Learned Query Optimization Practical". ACM SIGMOD Record 51, no. 1 (May 31, 2022): 5. http://dx.doi.org/10.1145/3542700.3542702.

Full text source
Abstract:
Query optimization has been a challenging problem ever since the relational data model was proposed. The role of the query optimizer in a database system is to compute an execution plan for a (relational) query expression comprised of physical operators whose implementations correspond to the operations of the (relational) algebra. There are many degrees of freedom for selecting a physical plan, in particular due to the laws of associativity, commutativity, and distributivity among the operators in the (relational) algebra, which necessitate taking the order of operations into consideration. In addition, there are many alternative access paths to a dataset and a multitude of physical implementations for operations such as relational joins (e.g., merge join, nested-loop join, hash join). Thus, when seeking to determine the best (or even a sufficiently good) execution plan, there is a huge search space.
APA, Harvard, Vancouver, ISO, and other styles
7

Modi, Abhishek, Kaushik Rajan, Srinivas Thimmaiah, Prakhar Jain, Swinky Mann, Ayushi Agarwal, Ajith Shetty, Shahid K. I, Ashit Gosalia, and Partho Sarthi. "New query optimization techniques in the Spark engine of Azure synapse". Proceedings of the VLDB Endowment 15, no. 4 (December 2021): 936–48. http://dx.doi.org/10.14778/3503585.3503601.

Full text source
Abstract:
The cost of big-data query execution is dominated by stateful operators. These include sort and hash-aggregate, which typically materialize intermediate data in memory, and exchange, which materializes data to disk and transfers data over the network. In this paper we focus on several query optimization techniques that reduce the cost of these operators. First, we introduce a novel exchange placement algorithm that improves the state of the art and significantly reduces the amount of data exchanged. The algorithm simultaneously minimizes the number of exchanges required and maximizes computation reuse via multi-consumer exchanges. Second, we introduce three partial push-down optimizations that push down partial computation derived from existing operators (group-bys, intersections, and joins) below these stateful operators. While these optimizations are generically applicable, we find that two of them (partial aggregate and partial semi-join push-down) are only beneficial in the scale-out setting, where exchanges are a bottleneck. We propose novel extensions to existing literature to perform more aggressive partial push-downs than the state of the art and also specialize them to the big-data setting. Finally, we propose peephole optimizations that specialize the implementation of stateful operators to their input parameters. All our optimizations are implemented in the Spark engine that powers Azure Synapse. We evaluate their impact on TPC-DS and demonstrate that they make our engine 1.8× faster than Apache Spark 3.0.1.
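The partial-aggregate push-down can be illustrated with a map-side combine: each node aggregates locally before the exchange, so the exchange moves one row per (node, group) instead of one row per input record. A minimal sketch (function names are ours, not Spark's API):

```python
from collections import defaultdict

def partial_aggregate(rows, key, val):
    """Partial (node-local) aggregation: combine rows before the exchange
    so fewer rows cross the network."""
    acc = defaultdict(int)
    for r in rows:
        acc[key(r)] += val(r)
    return list(acc.items())

def exchange_and_final_aggregate(partials_per_node):
    """Simulated exchange: merge the partial results from every node into
    the final group-by answer."""
    final = defaultdict(int)
    for partials in partials_per_node:
        for k, v in partials:
            final[k] += v
    return dict(final)
```

The paper's observation is that this only pays off when the exchange is the bottleneck; in a scale-up setting the extra partial pass can cost more than it saves.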
APA, Harvard, Vancouver, ISO, and other styles
8

Ling, Huixuan, Tian Gao, Tao Gong, Jiangzhao Wu, and Liang Zou. "Hydraulic Rock Drill Fault Classification Using X-Vectors". Mathematics 11, no. 7 (April 4, 2023): 1724. http://dx.doi.org/10.3390/math11071724.

Full text source
Abstract:
Hydraulic rock drills are widely used in drilling, mining, construction, and engineering applications. They typically operate in harsh environments with high humidity, large temperature differences, and vibration. Under the influence of environmental noise and operational patterns, the distributions of data collected by sensors for different operators and equipment differ significantly, which leads to difficulty in fault classification for hydraulic rock drills. Therefore, an intelligent and robust fault classification method is highly desired. In this paper, we propose a fault classification technique for hydraulic rock drills based on deep learning. First, considering the strong robustness of x-vectors to the features extracted from the time series, we employ an end-to-end fault classification model based on x-vectors to realize the joint optimization of feature extraction and classification. Second, the overlapping data clipping method is applied during the training process, which further improves the robustness of our model. Finally, the focal loss is used to focus on difficult samples, which improves their classification accuracy. The proposed method obtains an accuracy of 99.92%, demonstrating its potential for hydraulic rock drill fault classification.
APA, Harvard, Vancouver, ISO, and other styles
9

Raskov, V., M. Sharapov, and E. Blank. "Development of technology for electron beam welding of impellers of centrifugal pumps for ships and offshore structures operated in the Arctic". Transactions of the Krylov State Research Centre 3, no. 397 (August 6, 2021): 133–40. http://dx.doi.org/10.24937/2542-2324-2021-3-397-133-140.

Full text source
Abstract:
Object and purpose of research. Centrifugal equipment is widely used in various sectors of industry, and one of its main parts is the impeller. The application of impellers in shipbuilding is a promising field, in particular for fail-free operation in the harsh Arctic environment. The purpose of this study is the development of manufacturing processes for impellers involving electron beam welding (EBW) without soldering alloys and final thermal treatment. Materials and methods. The main material chosen for the impeller is the high-strength cold-resistant steel 10ХН3МД. Main results. In the process of technology development, the impeller design was chosen. Welding conditions were optimized on mock-up samples modeling the T-joint of the cover plate with the vane. Sample tests and investigations were done, and conclusions were drawn regarding the follow-on work and EBW introduction. Conclusion. An EBW technology for manufacturing impellers was developed, making it possible to fabricate impellers of high-strength cold-resistant materials, including difficult-to-weld materials.
APA, Harvard, Vancouver, ISO, and other styles
10

Jadon, Jitender Kumar Singh. "Operation of Wireless Humanoid Robot using Graphene Embedded Bend Sensor and Internet of Things Technology". International Journal of Engineering and Advanced Technology 12, no. 1 (October 30, 2022): 19–22. http://dx.doi.org/10.35940/ijeat.a3805.1012122.

Full text source
Abstract:
The use of robots is a trending technology, but automation and artificial intelligence have not been fully achieved to date. This paper proposes an innovative system to integrate human intelligence with robotics. Robots designed to work in harsh conditions are controlled using graphene-based flexible bend sensors. These sensors are applied to the human body and are powered by solar energy. A flexible sensor is applied at each bend of the human body, and the respective bend-angle data are transmitted to Raspberry Pi 3 Model B kits, which are programmed to reproduce the same bend in the robot. The sensor used in this project removes messy wiring, and there is no need to wear any kind of suit. The required movements for the robot are produced by a human after applying the sensors to each joint; a sensor looks like a patch pasted across the joint. The sensors are made from a biocompatible material and thus have no dermatological ill effect on the operator. The graphene-based sensor has a substantial role in robotics, as it develops position matrices that determine the current position of the various members of the humanoid robot. Robotic applications demand sensors with a high degree of repeatability, precision, and reliability, which is obtained using the graphene-based bend sensors. Each sensor is capable of carrying out motion in one degree of freedom. An accelerometer attached along with the sensor helps to control the speed of robotic operation. This system is suitable for controlling the robot from a distance and using it in critical conditions with the intelligence of the human operator, although a rise in temperature increases the time lapse between command and action. It can nevertheless serve as a substitute for artificially intelligent robots, since machine intelligence has not yet reached the human level. This work is based on the combined concepts of mechanical, computer, and electronics engineering.
APA, Harvard, Vancouver, ISO, and other styles

Doctoral dissertations on the topic "Hash Join Operator"

1

Albuquerque, Danielle da Costa Filgueiras. "MFP: uma política eficiente de liberação de memória para o operador físico Hash-Merge Join" [MFP: an efficient memory-flushing policy for the Hash-Merge Join physical operator]. Universidade de Fortaleza, 2007. http://dspace.unifor.br/handle/tede/76600.

Full text source
Abstract:
Mobile computers and wireless communication technologies are already a reality in the modern IT environment, resulting in the paradigm of mobile computing. These mobile devices may host databases whose data may be shared. However, conventional join operators do not take into account the limitations of a mobility-supporting environment, such as disconnections from the communication network, narrow communication bandwidth, the battery charge level, etc. Therefore, join algorithms need to be adapted to the limitations of mobile computing in order to deliver satisfactory results and execute a query requested by the user within a reasonable period of time. The characteristics required of an algorithm executed in a mobility-supporting environment are: (1) incremental production of results as the data become available; (2) continuous processing of the query, even if the delivery of data is blocked; and (3) reaction to limited-memory situations during the execution of the query [1]. Studies and tests showed that the Hash-Merge Join (HMJ) is the most efficient way to guarantee these three properties needed to process joins efficiently within a mobile computing environment. When memory is at its full storage capacity, the HMJ algorithm releases memory partitions according to the memory status. This adaptation determines the best pair of partitions, each partition coming from a different source, to be sent to disk in a way that maximizes the time until the next memory overflow, which occurs when memory reaches its storage capacity. The aim of this work is to propose a new memory data flushing policy, MFP. This policy optimizes the Adaptive Flushing Policy (AFP) while keeping the same main goal, i.e. to send pairs of partitions to disk in the event of a memory overflow. The AFP flushes corresponding pairs of partitions to disk based on a summary table kept in memory and two parameters: (1) memory balance; and (2) minimum partition size. The summary table contains the size of each partition, the sum of the individual sizes of each pair of partitions of both inputs, and the total size of the partitions of each input. The MFP also flushes corresponding pairs of partitions to disk, based on a constant memory balance and a summary table that differs from the one used by the AFP: it has one more column, stating the cardinality difference of each pair of partitions of the inputs. The main goal of this new column is to guarantee that the pair of partitions chosen to be sent to disk will always leave the memory balanced, while guaranteeing that there will always be at least one pair of partitions available to be sent to disk.
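The role of the extra cardinality-difference column can be sketched as a victim-selection rule: among corresponding partition pairs, prefer the pair whose two sides are most balanced, breaking ties by the amount of memory freed. This is our reading of the policy, not the thesis's actual code:

```python
def choose_flush_pair(left_sizes, right_sizes):
    """Pick the index of the partition pair to flush to disk on memory
    overflow: smallest cardinality difference first (keeps memory balanced),
    largest combined size as tie-breaker (frees the most memory)."""
    best, best_score = None, None
    for i, (l, r) in enumerate(zip(left_sizes, right_sizes)):
        score = (abs(l - r), -(l + r))   # (imbalance, negated freed memory)
        if best_score is None or score < best_score:
            best, best_score = i, score
    return best
```

Flushing a balanced pair keeps the two in-memory sides of the symmetric join of roughly equal size, which is what postpones the next overflow.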
APA, Harvard, Vancouver, ISO, and other styles
2

Garg, Vishesh. "Towards Designing PCM-Conscious Database Systems". Thesis, 2016. https://etd.iisc.ac.in/handle/2005/4889.

Full text source
Abstract:
Phase Change Memory (PCM) is a recently developed non-volatile memory technology that is expected to provide an attractive combination of the best features of conventional disks (persistence, capacity) and of DRAM (access speed). For instance, it is about 2 to 4 times denser than DRAM, while providing a DRAM-comparable read latency. On the other hand, it consumes much less energy than magnetic hard disks while providing substantively smaller write latency. Due to this suite of desirable features, PCM technology is expected to play a prominent role in the next generation of computing systems, either augmenting or replacing current components in the memory hierarchy. A limitation of PCM, however, is that there is a significant difference between the read and write behaviors in terms of energy, latency and bandwidth. A PCM write, for example, consumes 6 times more energy than a read. Further, PCM has limited write endurance since a memory cell becomes unusable after the number of writes to the cell exceeds a threshold determined by the underlying glass material. Database systems, by virtue of dealing with enormous amounts of data, are expected to be a prime beneficiary of this new technology. Accordingly, recent research has investigated how database engines may be redesigned to suit DBMS deployments on PCM, covering areas such as indexing techniques, logging mechanisms and query processing algorithms. Prior database research has primarily focused on computing architectures wherein either a) PCM completely replaces the DRAM memory ; or b) PCM and DRAM co-exist side-by-side and are independently controlled by the software. However, a third option that is gaining favor in the architecture community is where the PCM is augmented with a small hardware-managed DRAM buffer. In this model, which we refer to as DRAM HARD, the address space of the application maps to PCM, and the DRAM buffer can simply be visualized as yet another level of the existing cache hierarchy. 
With most of the query processing research being preoccupied with the first two models, this third model has remained largely ignored. Moreover, even in this limited literature, the emphasis has been restricted to exploring execution-time strategies, the compile-time plan selection process itself being left unaltered. In this thesis, we propose minimalist reworkings of current implementations of database operators, tuned to the DRAM HARD model, to make them PCM-conscious. We also propose novel algorithms for compile-time query plan selection, thereby taking a holistic approach to introducing PCM-compliance in present-day database systems. Specifically, our contributions are two-fold, as outlined below. First, we address the pragmatic goal of minimally altering current implementations of database operators to make them PCM-conscious, the objective being to facilitate an easy transition to the new technology. Specifically, we target the implementations of the "workhorse" database operators: sort, hash join, and group-by. Our customized algorithms and techniques for each of these operators are designed to significantly reduce the number of writes while simultaneously saving on execution times. For instance, in the case of the sort operator, we perform an in-place partitioning of input data into DRAM-sized chunks so that the subsequent sorting of these chunks can finish inside the DRAM, consequently avoiding both intermediate writes and their associated latency overheads. Second, we redesign the query optimizer to suit the new environment of PCM. Each of the new operator implementations is accompanied by simple but effective write estimators that make these implementations suitable for incorporation in the optimizer. Current optimizers typically choose plans using a latency-based costing mechanism assigning equal costs to both read and write memory operations. The asymmetric read-write nature of PCM implies that these models are no longer accurate.
We therefore revise the cost models to make them cognizant of this asymmetry by accounting for the additional latency during writes. Moreover, since the number of writes is critical to the lifespan of a PCM device, a new metric of write cost is introduced in the optimizer plan selection process, with its value being determined using the above estimators. Consequently, the query optimizer needs to select plans that simultaneously minimize query writes and response times. We propose two solutions for handling this dual-objective optimization problem. The first approach is a heuristic propagation algorithm that extends the widely used dynamic programming plan propagation procedure to drastically reduce the exponential search space of candidate plans. The algorithm uses the write costs of sub-plans at each of the operator nodes to decide which of them can be selectively pruned from further consideration. The second approach maps this optimization problem to the linear multiple-choice knapsack problem and uses its greedy solution to return the final plan for execution. This plan is known to be optimal within the set of non-interesting-order plans in a single join order search space. Moreover, it may contain a weighted execution of two algorithms for one of the operator nodes in the plan tree. Overall, therefore, while the greedy algorithm comes with optimality guarantees, the heuristic approach is advantageous in terms of easier implementation. The experimentation for our proposed techniques is conducted on Multi2sim, a state-of-the-art cycle-accurate simulator. Since it does not have native support for PCM, we made a major extension to its existing memory module to model the PCM device. Specifically, we added separate data tracking functionality for the DRAM- and PCM-resident data, to implement the commonly used read-before-write technique for reducing PCM writes.
Similarly, modifications were made to Multi2sim's timing subsystem to account for the asymmetric read-write latencies of PCM. A new DRAM replacement policy called N-Chance, that has been shown to work well for PCM-based hardware, was also introduced. Our new techniques are evaluated on end-to-end TPC-H benchmark queries with regard to the following metrics: number of writes, response times and wear distribution. The experimental results indicate that, in comparison to their PCM-oblivious counterparts, the PCM-conscious operators collectively reduce the number of writes by a factor of 2 to 3, while concurrently improving the query response times by about 20% to 30%. When combined with the appropriate plan choices, the improvements are even higher. In the case of Query 19, for instance, we obtained a 64% savings in writes, while the response time came down to two-thirds of the original. In essence, our algorithms provide both short-term and long-term benefits. These outcomes augur well for database engines that wish to leverage the impending transition to PCM-based computing.
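The sort idea described above (partition the input into DRAM-sized chunks, sort each chunk while it is DRAM-resident, then merge) reduces PCM writes because the write-heavy shuffling of a comparison sort stays inside DRAM. A minimal sketch; the in-place partitioning and write accounting of the thesis are omitted:

```python
import heapq

def pcm_conscious_sort(data, dram_capacity):
    """Sort in DRAM-sized chunks: each chunk is sorted while it fits in the
    DRAM buffer, so intermediate writes never touch PCM; only the final
    merged output is written once."""
    chunks = [sorted(data[i:i + dram_capacity])
              for i in range(0, len(data), dram_capacity)]
    return list(heapq.merge(*chunks))
```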
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Hash Join Operator"

1

Ovsyannikov, Evgeniy, and Tamara Gaytova. Optimal control of traction electric drives. INFRA-M Academic Publishing LLC, 2021. http://dx.doi.org/10.12737/1141767.

Full text source
Abstract:
The monograph considers various types of traction electric drives of motor vehicles intended for operation in urban conditions, and mathematical models of these systems are proposed. On the basis of parametric optimization and a graphoanalytic method, a method of joint control of electric drives according to the criteria of minimum losses and maximum overload capacity, taking into account possible restrictions on the resources of power elements, has been developed. Intended for a wide range of readers interested in improving motor vehicles; it will also be useful for students, postgraduates, and teachers of engineering and technical universities.
APA, Harvard, Vancouver, ISO, and other styles
2

Timm, Donald A. Part II Commentaries to Typical Sofa Rules, 28 The ‘Joint Commission’ Liaison Mechanism. Oxford University Press, 2018. http://dx.doi.org/10.1093/law/9780198808404.003.0028.

Full text source
Abstract:
This chapter discusses a solution for coordination problems developed by the US in conjunction with the individual Sending States in whose territory the US has been invited to send its forces in peacetime. Although each individual case has its differences due to different sovereigns, different times of development, and different sizes or missions of the forces involved, there are nonetheless many conceptual similarities which transcend these differences and which may recommend themselves as a guide. The core similarity is the concept of a single overarching binational body charged with overseeing the implementation of the status-of-forces agreement (SOFA) and facilitating communication and cooperation between the cognizant authorities of the two sovereigns. This chapter discusses the general attributes of the ‘Joint Commission’ liaison mechanism in particular. It explains the purpose of the mechanism, its structure, its operation and authority, and the administration of the Joint Commission structure.
3

Gallagher, Norah, and Shan Wenhua. 6 Umbrella Clause and Investment Contracts. Oxford University Press, 2009. http://dx.doi.org/10.1093/law:iic/9780199230259.003.006.

Abstract:
The “umbrella clause” takes its name from its main objective, namely to oblige the host state to observe any commitments it has entered into with regard to foreign investors. The clause brings such obligations of the state under the protection of an applicable international investment treaty, bilateral investment treaty (BIT), or multilateral treaty. This chapter begins by reviewing the evolution of the umbrella clause and how it has been applied by investment treaty tribunals. It then examines the main variants of umbrella clauses in Chinese BITs and discusses their legal effect in light of this recent jurisprudence. It moves on to analyze the impact, if any, of these clauses on investment contracts in China, including joint venture contracts, joint exploitation of onshore and offshore petroleum resources contracts, and build-operate-transfer contracts. The chapter concludes with an analysis of the implications of umbrella clauses and investment contracts on dispute-resolution planning for foreign investors.
4

Lassiter, Daniel. Measurement theory and the typology of scales. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198701347.003.0002.

Abstract:
Most previous work on graded modality has relied on qualitative orderings, rather than degree semantics. This chapter introduces the Representational Theory of Measurement (RTM), a framework which makes it possible to translate between qualitative and degree-based scales. I describe a way of using RTM to extend the compositional degree semantics introduced in chapter 1 to qualitative scales. English data are used to motivate the application of the RTM distinction among ordinal, interval, and ratio scales to scalar adjectives, with special attention to the kinds of statements that are semantically interpretable relative to different scale types. I also propose and motivate empirically a distinction between ‘additive’ and ‘intermediate’ scales, which interact differently with the algebraic join operation (realizing sum formation or disjunction, depending on the domain). This distinction is reflected in inferential properties of non-modal adjectives in English, and is also important for the analysis of graded modality in later chapters.
5

Kynes, J. Matthew. Hemophilia (Presentation in Emergency Surgery). Edited by Matthew D. McEvoy and Cory M. Furse. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190226459.003.0085.

Abstract:
Hemophilia is a complex disease of variable severity that affects clotting function and has significant implications in perioperative and emergency care. Hereditary or de novo mutations cause deficiencies in factor VIII or IX production, which may manifest as spontaneous bleeding into joint spaces, muscles, or other sites in severe forms of the disease. Intracranial bleeding is one of the most serious and often fatal complications. In a patient with abnormal bleeding, laboratory results indicative of hemophilia include an increased partial thromboplastin time (PTT), with normal prothrombin time/international normalized ratio (PT/INR) and normal platelet count. The diagnosis is confirmed with specific factor assays. Advances in prophylaxis with factor replacement have improved outcomes and reduced bleeding episodes in hemophilia. However, patients with hemophilia may present emergently for operation and require factor replacement. In patients that have developed antibodies to factor replacement, clotting factor bypass agents may be required to control bleeding.
6

Vasconcelos Júnior, Moisés Rita. Implantação do aterro sanitário no município de Marituba-PA e os efeitos sobre as comunidades do entorno. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-153-0.

Abstract:
The municipality of Marituba, in the Metropolitan Region of Belém (RMB), has suffered environmental impacts due to irregularities in the operation of the landfill implemented in 2015, which triggered social impacts perceived by the whole population, including neighboring municipalities such as Ananindeua and Belém. Protests were carried out by the Movement Outside the Garbage, made up of residents of the neighborhoods surrounding the landfill site, owners of commercial activities linked to tourism, and non-governmental organizations, which several times blocked the main route interconnecting the seven municipalities of the RMB as well as the entrance of the landfill, in order to draw the attention of the municipal public power to the problems the population had been facing ever since. From this, the following questions arose: What social impacts were people denouncing in these protests? Would such problems be directly related to the activities carried out in the landfill? And finally, what are the actions of the public authority and the company that manages the enterprise in the management of these social impacts?
The relevance of this study concerns not only the identification of social impacts, considering the fragility of this approach in Environmental Impact Studies and, concomitantly, in Environmental Impact Reports, but also the debate about the licensing process of enterprises of this nature and the need for the joint use of environmental and urban policy instruments, considering that RMB municipalities have not yet adopted sustainable alternatives for reducing the solid waste produced in their territories, or for reducing the environmental impacts caused by dumps and, in the case of Marituba, by a landfill that operates outside the standards established by the Brazilian Association of Technical Standards (ABNT) for the management and treatment of solid waste and the National Policy on Solid Waste.
7

Roberts, Cynthia, Leslie Armijo, and Saori Katada. Conclusion. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190697518.003.0005.

Abstract:
The chapter analyzes the prospects for continued BRICS collective financial statecraft. Contrary to initial expectations, the BRICS (Brazil, Russia, India, China, and South Africa) have hung together by identifying common aversions and pursuing common interests within the existing international order. Their future depends not only on their bargaining power, but also on their ability to overcome domestic impediments to the sustainable economic growth that provides the basis for their international positions. To continue successfully with collective financial statecraft, the members must tackle the so-called middle-income trap, as well as their preferences for informal rules originating from their own institutional weaknesses or regime preferences. This study shows that, in the context of a global power shift, the BRICS club has operated to protect the member countries’ respective policy autonomy, while also advancing their joint voice in global governance. Recently, the BRICS have made concrete institutional gains, giving them expanded outside options to achieve specific objectives in global finance.
8

Moscovitch, Keren. Radical Intimacy in Contemporary Art. Bloomsbury Publishing Plc, 2023. http://dx.doi.org/10.5040/9781350298217.

Abstract:
Radical Intimacy in Contemporary Art focuses on practices that operate at the edges of sexuality and its socially sanctioned expressions. Using psychoanalysis and object-oriented feminism, Keren Moscovitch focuses on the work of several contemporary, provocative artists to initiate a dialogue on the role of intimacy in challenging and reimagining ideology. Moscovitch suggests that intimacy has played an under-appreciated role in the shifting of social and political consciousness. She explores the work of Leigh Ledare, Genesis P-Orridge, Ellen Jong, Barbara DeGenevieve, Joseph Maida and Lorraine O’Grady, who, through their radical practices, engage in such consciousness shifting in elegant, surprising, and provocative ways. Guided by the feminist psychoanalytic canon of Julia Kristeva throughout, as well as being informed by the philosophy of Luce Irigaray and the critical theory of Judith Butler, Moscovitch situates these artists in the emerging lineage of feminist new materialism. She argues that the instability of intimacy leads to radical and performative objecthood in their work that acts as a powerful expression of revolt. Through this line of argumentation, Moscovitch joins a growing group of philosophers exploring object-oriented theories and practices as a new language for a new era. In this era, the hegemony of subjectivity has been toppled, and a new world of human ontology is built creatively, expressively and in the spirit of revolt.

Book chapters on "Hash Join Operator"

1

Zwicklhuber, Thomas, and Mario Kaufmann. "EURIS (European River Information Services System) – The Central European RIS Platform". In Lecture Notes in Civil Engineering, 850–56. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6138-0_75.

Abstract:
The development and implementation of the River Information Services (RIS) concept started in the late 1990s with various research projects, followed by national or regional implementation projects in the first decade of this century. The resulting national RIS systems haven’t been able to exploit the full potential of RIS when it comes to cross-border data exchange and interoperability. To overcome these gaps, the concept of RIS Corridor Management was established, aiming at linking the fragmented services together on a corridor to supply RIS along the complete route or network. The concept of RIS Corridor Management was taken up by the CEF (Connecting Europe Facility Programme) co-funded multi-beneficiary project RIS COMEX (www.riscomex.eu) with the goal to implement harmonized RIS services on European level. Within the RIS COMEX project, the consortium of 13 countries realized a common and centralized single access point to inland waterway information, the European River Information Services (EuRIS) System. EuRIS acts as a European RIS platform fulfilling a great variety of information needs of inland waterway stakeholders such as skippers, vessel and infrastructure operators, logistics providers and authorities. The system gathers relevant RIS information from the national systems in order to provide optimized fairway-, infrastructure- and traffic-related services in a single point of access for the users, enabling reliable route and voyage planning and sharing as well as traffic and transport management on a pan-European level. EuRIS provides access to its services via a user-friendly Graphical User Interface (GUI) or machine-readable open Application Programming Interfaces (APIs). In order to guarantee sustainable operation of EuRIS, a legal, organizational and financial framework has been set up by the partners. The core aspects concern the joint governance of the system operation as well as the legal basis for RIS data exchange and usage.
The full operation and further development of EuRIS is a major milestone in the sector enhancing attractiveness and competitiveness of Inland Waterway Transport in Europe and setting the basis for connectivity to other transport modes and synchro modal logistic operations.
2

Möller, Bernhard, Peter O’Hearn, and Tony Hoare. "On Algebra of Program Correctness and Incorrectness". In Relational and Algebraic Methods in Computer Science, 325–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88701-8_20.

Abstract:
Variants of Kleene algebra have been used to provide foundations of reasoning about programs, for instance by representing Hoare Logic (HL) in algebra. That work has generally emphasised program correctness, i.e., proving the absence of bugs. Recently, Incorrectness Logic (IL) has been advanced as a formalism for the dual problem: proving the presence of bugs. IL is intended to underpin the use of logic in program testing and static bug finding. Here, we use a Kleene algebra with diamond operators and countable joins of tests, which embeds IL, and which also is complete for reasoning about the image of the embedding. Next to embedding IL, the algebra is able to embed HL, and allows making connections between IL and HL specifications. In this sense, it unifies correctness and incorrectness reasoning in one formalism.
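The HL/IL duality the chapter treats algebraically can be pictured concretely by modeling programs as relations over states. The following Python sketch is our illustration, not the authors' algebra: a Hoare triple says the postcondition over-approximates the reachable states, while an incorrectness triple says it under-approximates them (every claimed outcome is genuinely reachable).

```python
# A program is modeled as a relation: a set of (input_state, output_state) pairs.
# Assertions are modeled as sets of states.

def post_image(prog, pre):
    """Strongest postcondition: all states reachable from some state in pre."""
    return {t for (s, t) in prog if s in pre}

def hoare_valid(pre, prog, post):
    """HL triple {pre} prog {post}: post OVER-approximates the reachable states."""
    return post_image(prog, pre) <= post

def incorrectness_valid(pre, prog, post):
    """IL triple [pre] prog [post]: post UNDER-approximates the reachable states,
    so every state in post is a genuine outcome -- a true bug witness."""
    return post <= post_image(prog, pre)
```

For prog = {(0, 1), (1, 2)}, hoare_valid({0}, prog, {1, 2}) holds, and incorrectness_valid({0, 1}, prog, {2}) holds because state 2 really is reachable; weakening post helps HL but hurts IL, which is exactly the duality.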
3

Zhang, Yu, Xiaodong Wang, Zhixiang Min, Shiqiang Wu, Xiufeng Wu, Jiangyu Dai, Fangfang Wang, and Ang Gao. "Adaptive Regulation of Cascade Reservoirs System Under Non-stationary Runoff". In Lecture Notes in Civil Engineering, 985–1000. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6138-0_88.

Abstract:
Under the influence of climate change and human activities, the spatial and temporal distribution of river runoff has changed. The statistical characteristics of runoff such as mean, variance and extreme values have changed significantly. Hydrological stationarity has been broken, deepening the uncertainty of water resources and their utilization. Hydrological stationarity is a fundamental assumption of traditional water resources planning and management. The occurrence of non-stationarity will undoubtedly have an impact on the operation and overall benefits of reservoirs, and may even threaten the safety of reservoirs and water resources. There is uncertainty as to whether reservoirs can operate safely and still achieve their design benefits under the new runoff conditions. Therefore, it is important to carry out adaptive regulation of reservoirs in response to non-stationary runoff. Based on the multi-objective theory of large systems, a multi-objective joint scheduling model of the cascade reservoir group is constructed for adaptive regulation simulation. A set of combination schemes based on optimal scheduling, flood resource utilization and water saving is constructed. The adaptive regulation is validated using a real-world example of the Xiluodu cascade and Three Gorges cascade reservoirs system in the Yangtze River, China. The adaptive regulation processes are analyzed by simulation and the adaptive regulation effects are evaluated. The results show that the non-stationary runoff in the upper Yangtze River has had an impact on the comprehensive benefits of large hydropower projects. The use of non-engineering measures to improve flood resource utilization, adjust upstream water use behavior and optimize reservoir scheduling are effective means to reduce the negative impact of non-stationary runoff on the cascade reservoirs system.
4

Lauer, Sascha, Sebastian Rieck, Martin-Christoph Wanner, and Wilko Flügge. "Partial Automated Multi-Pass-Welding for Thick Sheet Metal Connections". In Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021, 399–410. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-74032-0_33.

Abstract:
The production of tubular-node-connections, which are required for the construction of offshore wind energy plants or converter platforms, is subject to high manufacturing standards. The welding process is currently carried out manually and requires a great deal of experience on the part of the welder. In this process, one or more branch member pipes are welded to a base pipe, which vary in their diameters and alignment to each other. This results in a small batch size for which no standard automation solution can be considered. The approach of a pre-defined offline path-planning is not expedient, since the weld metal forms differently with the multiple curved geometries and the desired target result cannot be achieved with an integrated compensation. The approach for automation combines the experience of a skilled welder with the accuracy of an industrial robot. For implementation, the robot system moves along the welding contour with a 2D-profile sensor. The joint profile is recorded at defined measurement points. Parallel to the seam cross-section, the current gradient of the geometry in relation to the horizontal plane is stored. After all the information has been generated, it is visualized for the operator in a graphical user interface. The operator can use his experience in the field of welding technology and can carry out the positioning of the weld seam in every single scan generated. The decisions on positioning are stored in the system and serve as a base for a future implementation of an automatic system for positioning welding beads on multi-curved contours.
5

Migone, Andrea, Alexander Howlett, and Michael Howlett. "The Significance of Politicization: The F-35 Joint Strike Fighter Procurement Processes in Canada and Australia, 2000–2022". In Procurement and Politics, 59–91. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25689-9_4.

Abstract:
When examining the acquisition of the F-35 Joint Strike Fighter (JSF) by most western defence departments, the close historical and military connections between these countries with the United States only go so far in terms of an explanation for why some have adopted this platform and others have not. Adoption is by no means automatic and the features of large military platform procurement processes, which are both long-term and involve very large expenditures, with their idiosyncratic nature, multiple actors and strategic policy directions, have played an important role in this area, just as they have when purchasing warships and other military equipment. In this chapter, we compare how Australia and Canada chose to operate when considering the replacement of their ageing F-18 multirole fighters. Again, this process features two very similar countries and the same weapons system, and the different outcome each has had in this case again reveals the significant factors concerning the processes which led to those decisions, and the impact of politics in explaining both the commonalities and differences in the defence procurement approaches of the two countries.
6

Meng, Xiangxiu, Xuejun Zhu, Yunpeng Ding, and Dengrong Qi. "Application of Image Recognition in Precise Inoculation Control System of Pleurotus Eryngii". In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications, 988–1005. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_100.

Abstract:
The traditional inoculation technology of Pleurotus eryngii is artificial inoculation, which has the disadvantages of low efficiency and high failure rate. In order to solve this problem, it is necessary to put forward an automatic control system for Pleurotus eryngii inoculation. In this paper, based on the system requirements of high reliability, high efficiency and flexible configuration, a PLC is used as the core component of the control system and controls the operation of the whole system. In order to improve the efficiency of the control system, the particle swarm optimization algorithm was used to optimize the interpolation time of the trajectory of the manipulator. Through simulation, it was found that the joint acceleration curve was smooth without mutation, and the running time was short. Because position deviation of the culture medium of Pleurotus eryngii to be inoculated will inevitably occur when it is transferred on the conveyor belt, image recognition technology is used to accurately locate it. In order to improve the efficiency of image recognition, a genetic algorithm (GA) is used to improve Otsu thresholding to find the target region of the culture medium to be inoculated, and the simulation results showed that the computational efficiency could be increased by 70%. In order to locate the center of the target region, the mean value method is used to find the centroid coordinates. Finally, it was found by simulation that the centroid coordinates could be accurately calculated for a basket of 12 Pleurotus eryngii culture media to be inoculated.
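For reference, plain Otsu thresholding — the segmentation step the chapter accelerates with a genetic algorithm — picks the gray level that maximizes between-class variance. A brute-force Python sketch (our illustration; the chapter's GA searches this space instead of scanning every level):

```python
def otsu_threshold(pixels, levels=256):
    """Exhaustive Otsu: return the threshold t maximizing the between-class
    variance w0*w1*(mu0 - mu1)^2, where class 0 holds pixels <= t."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]                 # pixels in the dark class
        if w0 == 0:
            continue
        w1 = total - w0               # pixels in the bright class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal image (say, half the pixels at gray level 10 and half at 200) the scan settles on the first level that separates the two modes; a GA replaces the linear scan with a population-based search, which pays off when the search space is larger.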
7

Chebotko, Artem, and Shiyong Lu. "Nested Optional Join for Efficient Evaluation of SPARQL Nested Optional Graph Patterns". In Advances in Semantic Web and Information Systems, 281–308. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-992-2.ch013.

Abstract:
Relational technology has shown to be very useful for scalable Semantic Web data management. Numerous researchers have proposed to use RDBMSs to store and query voluminous RDF data using SQL and RDF query languages. This chapter studies how RDF queries with the so-called well-designed graph patterns and nested optional patterns can be efficiently evaluated in an RDBMS. The authors propose to extend relational algebra with a novel relational operator, nested optional join (NOJ), that is more efficient than left outer join in processing nested optional patterns of well-designed graph patterns. They design three efficient algorithms to implement the new operator in relational databases: (1) nested-loops NOJ algorithm, NL-NOJ; (2) sort-merge NOJ algorithm, SM-NOJ; and (3) simple hash NOJ algorithm, SH-NOJ. Using a real-life RDF dataset, the authors demonstrate the efficiency of their algorithms by comparing them with the corresponding left outer join implementations and explore the effect of join selectivity on the performance of these algorithms.
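The hash-based flavor of such an operator can be pictured as a hash join that, like a left outer join, preserves unmatched tuples from the required side. A simplified Python sketch (the dict encoding and names are ours, not the chapter's SH-NOJ, which additionally handles SPARQL-specific NULL semantics):

```python
def simple_hash_optional_join(required, optional, key):
    """Hash-based optional join sketch: build a hash table on the optional
    side, probe with the required side.  Required tuples with no match
    survive unextended, mirroring SPARQL OPTIONAL semantics."""
    table = {}
    for row in optional:                       # build phase
        table.setdefault(row[key], []).append(row)
    result = []
    for row in required:                       # probe phase
        matches = table.get(row[key], [])
        if matches:
            for m in matches:
                merged = dict(row)
                merged.update({k: v for k, v in m.items() if k not in row})
                result.append(merged)
        else:
            result.append(dict(row))           # optional pattern unmatched
    return result
```

The build/probe structure is exactly that of a classic hash join; the only difference from an inner hash join is the `else` branch, which keeps required-side bindings alive when the optional pattern fails.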
8

Weiner, Andreas M., and Theo Härder. "A Framework for Cost-Based Query Optimization in Native XML Database Management Systems". In Advanced Applications and Structures in XML Processing, 160–83. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-727-5.ch008.

Abstract:
Since the very beginning of query processing in database systems, cost-based query optimization has been the essential strategy for effectively answering complex queries on large documents. XML documents can be efficiently stored and processed using native XML database management systems. Even though such systems can choose from a huge repertoire of join operators (e.g., Structural Joins and Holistic Twig Joins) and various index access operators to efficiently evaluate queries on XML documents, the development of full-fledged XML query optimizers is still in its infancy. Especially the evaluation of complex XQuery expressions using these operators is not well understood and needs further research. The extensible, rule-based, and cost-based XML query optimization framework proposed in this chapter serves as a testbed for exploring how and whether well-known concepts from relational query optimization (e.g., join reordering) can be reused and which new techniques can make a significant contribution to speeding up query execution. Using the best practices and an appropriate cost model developed with this framework, it can be turned into a robust cost-based XML query optimizer in the future.
9

Naeem, M. Asif, and Noreen Jamil. "Online Processing of End-User Data in Real-Time Data Warehousing". In Improving Knowledge Discovery through the Integration of Data Mining Techniques, 13–31. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8513-0.ch002.

Abstract:
Stream-based join algorithms are a promising technology for modern real-time data warehouses. A particular category of stream-based joins is the semi-stream join, where a single stream is joined with disk-based master data. The join operator typically works under limited main memory, and this memory is generally not large enough to hold the whole disk-based master data. Recently, a seminal join algorithm called MESHJOIN (Mesh Join) has been proposed in the literature to process semi-stream data. MESHJOIN is a candidate for a resource-aware system setup. However, MESHJOIN is not very selective. In particular, MESHJOIN does not consider the characteristics of stream data, and its performance is suboptimal for skewed stream data. This chapter presents a novel Cache-based Semi-Stream Join (CSSJ) using a cache module. The algorithm is more appropriate for skewed distributions, and results are presented for Zipfian distributions of the type that appear in many applications. A rigorous experimental study shows that CSSJ outperforms MESHJOIN significantly. The chapter also presents the cost model for CSSJ and validates it with experiments.
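The cache idea can be sketched as a small LRU front-end in front of the disk probe: under a skewed (e.g. Zipfian) key distribution, most stream tuples hit the cache and never touch the master data. An illustrative Python sketch (function and parameter names are ours, not the CSSJ implementation):

```python
from collections import OrderedDict

def semi_stream_join(stream, lookup_master, cache_size=100):
    """Cache-assisted semi-stream join sketch: frequent master-data rows are
    kept in a small LRU cache so skewed streams mostly join from memory;
    misses fall back to the (simulated) disk-based master data."""
    cache = OrderedDict()
    for tup in stream:
        k = tup["key"]
        if k in cache:
            cache.move_to_end(k)              # LRU hit: refresh recency
            master_row = cache[k]
        else:
            master_row = lookup_master(k)     # simulated disk access
            cache[k] = master_row
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict least recently used
        yield {**tup, **master_row}           # joined output tuple
```

With a heavily skewed stream, the number of `lookup_master` calls approaches the number of *distinct* hot keys rather than the number of stream tuples, which is where the claimed win over a cache-less mesh scan comes from.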
10

Galindo, Jose, Angelica Urrutia, and Mario Piattini. "FSQL". In Fuzzy Databases, 179–258. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-324-1.ch007.

Abstract:
The SQL language was essentially developed by Chamberlin and Boyce (1974) and Chamberlin et al. (1976). In 1986, the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) published the standard SQL-86, or SQL1 (ANSI, 1986). In 1989, an extension of the SQL standard, called SQL-89, was published, and SQL2, or SQL-92, was published in 1992 (ANSI, 1992). SQL2 basically provided new types and constraints (such as checks or unique predicates); it supported subqueries in UPDATE and DELETE operations and in the FROM clause, the IN, ANY and ALL operators, the CASE constructor, the JOIN, UNION, INTERSECT and EXCEPT operators, and the modification of base tables through views. In the latest version of the SQL standard, SQL 2003, major improvements have been made in a number of key areas. Firstly, it has additional object-relational features, which were first introduced in SQL-1999. Secondly, the SQL 2003 standard revolutionizes SQL with comprehensive OLAP features and data-mining applications. Thirdly, SQL 2003 integrates popular XML standards into SQL (SQL/XML). Finally, numerous improvements have been made throughout the SQL 2003 standard to refine existing features.

Conference abstracts on "Hash Join Operator"

1

Devarajan, N., S. Navneeth, and S. Mohanavalli. "GPU accelerated relational hash join operation". In 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 2013. http://dx.doi.org/10.1109/icacci.2013.6637294.

2

Rashid, Layali, Wessam M. Hassanein, and Moustafa A. Hammad. "Exploiting multithreaded architectures to improve the hash join operation". In the 9th workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1509084.1509091.

3

Yu, Jiaguo, Yuming Shen, Menghan Wang, Haofeng Zhang, and Philip H. S. Torr. "Learning to Hash Naturally Sorts". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/221.

Abstract:
Learning to hash pictures a list-wise sorting problem. Its testing metrics, e.g., mean-average precision, count on a sorted candidate list ordered by pair-wise code similarity. However, scarcely does one train a deep hashing model with the sorted results end-to-end because of the non-differentiable nature of the sorting operation. This inconsistency in the objectives of training and test may lead to sub-optimal performance since the training loss often fails to reflect the actual retrieval metric. In this paper, we tackle this problem by introducing Naturally-Sorted Hashing (NSH). We sort the Hamming distances of samples' hash codes and accordingly gather their latent representations for self-supervised training. Thanks to the recent advances in differentiable sorting approximations, the hash head receives gradients from the sorter so that the hash encoder can be optimized along with the training procedure. Additionally, we describe a novel Sorted Noise-Contrastive Estimation (SortedNCE) loss that selectively picks positive and negative samples for contrastive learning, which allows NSH to mine data semantic relations during training in an unsupervised manner. Our extensive experiments show the proposed NSH model significantly outperforms the existing unsupervised hashing methods on three benchmarked datasets.
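The retrieval side of the problem — ranking candidates by Hamming distance between binary codes, the list that metrics such as mAP are computed over — can be sketched in a few lines (integer-encoded codes, our simplification; the paper's contribution is making this sorting step differentiable for training, which this snippet does not attempt):

```python
def hamming_rank(query_code, db_codes):
    """Return database indices sorted by Hamming distance to the query code,
    i.e. the evaluation-time candidate ranking used for retrieval metrics."""
    def hamming(a, b):
        # XOR leaves a 1-bit wherever the codes disagree; count those bits.
        return bin(a ^ b).count("1")
    return sorted(range(len(db_codes)), key=lambda i: hamming(query_code, db_codes[i]))
```

For example, against the query code `0b0000`, the codes `[0b1111, 0b0001, 0b0011]` have distances 4, 1 and 2, so the ranking is `[1, 2, 0]`. Ordinary `sorted` is non-differentiable, which is precisely the train/test mismatch the abstract describes.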
4

Willett, Fred T. "A Method for Evaluating Market Value of Turbine Gaspath Component Alternatives". In 2002 International Joint Power Generation Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/ijpgc2002-26118.

Abstract:
An economic model was developed to evaluate gas turbine component alternatives for base load combined cycle operation, cyclic duty simple cycle operation, and peaking duty simple cycle operation. The power plant operator value of alternative replacement first stage buckets for a GE Frame 7EA gas turbine is evaluated. The popularity and large installed base of the 7EA has prompted a number of replacement part offerings, in addition to the replacement parts offered by the OEM. A baseline case is established to represent the current bucket repair and replacement situation. Each mode of power plant operation is evaluated from both a long-term financial focus and a short-term financial focus. Long-term focus is characterized by a nine-year evaluation period, while short-term focus is based on first-year benefit only. Four factors are considered: part price, repair price, output increase, and simple cycle efficiency increase. Natural gas and liquid fuels are considered: two natural gas prices are used and one liquid fuel price is considered. Peak, off-peak, and spot market electricity prices are considered. Two baseline repair price scenarios are evaluated: 50% of new part price and 10% of new part price. The key conclusions can be summarized as:
• A reduced-life part with more frequent repair intervals is undesirable, even if the part price is reduced by over 60% and the cooling flow is reduced by 1% W2.
• A short-life, “throw-away” part with no required repairs can achieve parity with the baseline if the price is reduced by 25% or more. The operator with a short-term focus will not differentiate between a “throw-away” part and a full-life part.
• In general, increased part life has less value to the power plant operator than price reduction or cooling flow reduction.
• Repair price (assumed to be 50% of part price) is a relatively small factor for operators with a long-term focus, and no factor at all for operators with a short-term focus.
• A lower baseline repair price (10% of part price) will decrease the attractiveness of a “throw-away” part, moving the parity point to a 40% price reduction.
• A 0.7% W2 reduction in cooling flow has roughly the same first-year benefit, at baseline fuel prices, as a 10–15% bucket price reduction, except to the peak duty operator. The peak duty operator finds no benefit from reduced cooling flow unless electricity can be sold at spot market prices.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Dure, Davis. "Avoiding Increased Trip Times and Other Operational Impacts When Implementing Positive Train Control". In 2010 Joint Rail Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/jrc2010-36260.

Abstract:
Implementing safety systems on railroads and transit systems to prevent collisions and the risks of excess speeds often comes at the price of lengthened trip time, reduced capacity, or both. This paper will recommend a method for designing Positive Train Control (PTC) systems to avoid the degradation of operating speeds, trip times, and line capacities which is a frequent by-product of train-control systems. One of the more significant operational impacts of PTC is expected to be similar to the impacts of enforcing civil speed restrictions by cab signaling: the safe-braking rate used for signal-system design, and expected to be used for PTC, is significantly more conservative than the service brake rate of the train equipment and the deceleration rate used by train operators. This means that the enforced braking and speed reduction for any given curve speed restriction is initiated sooner than it otherwise would be by a human train operator, resulting in trains beginning to slow and/or reaching the target speed well in advance of where they would absent enforcement. This results in increased trip time, which can decrease track capacity. Another impact of speed enforcement is that it often results in “underspeeding.” In a cab-signal (and manual-train-operation) environment, it has been well documented that train operators typically operate two or three mph below the nominal enforced speed to avoid the risk of penalty brake applications. Target and location speed enforcement under PTC is likely to foster the same behaviors unless the design is prepared to mitigate this phenomenon. While the trip-time and capacity impacts of earlier braking and train-operator underspeeding are generally marginal, that margin can be very significant in terms of incremental capacity and/or resources for recovery from minor perturbations (aka system reliability).
The operational and design methodology that is discussed in this paper involves the use of a higher unbalance (cant deficiency) for calculating the safety speed of each curve that is to be enforced by PTC, while retaining the existing maximum unbalance standard and existing speed limits as “timetable speed restrictions”. Train operators will continue to be held responsible for observing the timetable speed limits, while the PTC system would stand ready to enforce the higher safety speeds and unbalance should the train operator fail to properly control his or her train. The paper will present a potential methodology for calculating safety speeds that are in excess of the normal operating speeds. Using TPC software, the paper will also calculate the trip-time tradeoffs of using or not using this concept, for which there are some significant precedents. Other operational impacts, and proposed remedies, will be discussed as well. These will include the issues of total speed enforcement versus safety-speed enforcement, the ability of a railroad’s management to perform the speed checks required by the FRA regulations under normal conditions, and the operation of trains under occasional but expected PTC failures.
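The timetable-speed versus safety-speed idea can be illustrated with the standard superelevation relation V = sqrt((Ea + Eu) / (0.0007 D)), with V in mph, actual superelevation Ea and unbalance Eu in inches, and D the degree of curvature. The specific unbalance values and curve geometry below are illustrative assumptions, not figures from the paper.

```python
import math

# Sketch of the timetable vs. PTC "safety speed" concept using the standard
# curve-speed formula V = sqrt((Ea + Eu) / (0.0007 * D)). The superelevation,
# unbalance, and curvature values are illustrative only.

def curve_speed_mph(ea_inches, eu_inches, degree_of_curvature):
    """Maximum curve speed (mph) for actual superelevation Ea and
    allowed unbalance Eu, both in inches."""
    return math.sqrt((ea_inches + eu_inches) / (0.0007 * degree_of_curvature))

ea, curve = 4.0, 2.0  # 4 in. of superelevation on a 2-degree curve
timetable = curve_speed_mph(ea, 3.0, curve)   # normal operating unbalance
safety = curve_speed_mph(ea, 4.5, curve)      # higher unbalance for enforcement
print(round(timetable, 1), round(safety, 1))
```

Raising the enforced unbalance from 3 in. to 4.5 in. moves the enforcement threshold roughly 7 mph above the timetable speed on this example curve, so a normally driven train would not trigger a penalty brake application.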
6

Aghdam, Faranak Fathi, and Haitao Liao. "Prognostics-Based Replacement and Competition in Service Part Procurement". In ASME/ISCIE 2012 International Symposium on Flexible Automation. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/isfa2012-7147.

Abstract:
Timely maintenance of degrading components is critical to the continuing operation of a system. By implementing prognostics, it is possible for the operator to maintain the system in the right place at the right time. However, the complexity of the real-world operating environment makes near-zero downtime difficult to achieve, partly because of a possible shortage of required service parts. To coordinate with a prognostics-based maintenance schedule, it is necessary for the operator to decide when to order the service parts and how to compete with other operators in service part procurement. In this paper, we investigate a situation where two operators are to make prognostics-based replacement decisions and strategically compete for a service part that both operators need at around the same time. A Stackelberg game is formulated in this context. A sequential, constrained maximin space-filling experimental-design approach is developed to facilitate the implementation of backward induction. This approach is efficient in searching for the Nash equilibrium when the follower’s best response to the leader’s strategies has no closed-form expression. A numerical study on wind turbine operation is provided to demonstrate the use of the joint decision-making tool in solving such complex, yet realistic, maintenance and service part logistics problems.
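Backward induction in a Stackelberg game of the kind formulated here can be sketched with a discrete brute-force search: fix each leader action, compute the follower's best response, then let the leader optimize against that anticipated response. The payoff function below is a made-up placeholder, not the paper's procurement model.

```python
# Minimal backward-induction sketch for a discrete Stackelberg game.
# The payoff function is a hypothetical stand-in: ordering strictly earlier
# wins a scarce service part, but every period of early ordering costs 1.

def follower_best_response(leader_action, actions, payoff):
    # Follower maximizes its own payoff given the observed leader action.
    return max(actions, key=lambda f: payoff(leader_action, f)[1])

def stackelberg_equilibrium(actions, payoff):
    # Leader maximizes anticipating the follower's best response (backward
    # induction over the two stages of the game).
    best = max(actions,
               key=lambda l: payoff(l, follower_best_response(l, actions, payoff))[0])
    return best, follower_best_response(best, actions, payoff)

def payoff(l, f):
    leader = (10 if l < f else 4) - l    # wins the part only if strictly earlier
    follower = (10 if f < l else 4) - f
    return leader, follower

print(stackelberg_equilibrium(range(5), payoff))
```

With these placeholder payoffs, competition for the scarce part drives both players to order immediately. The paper's contribution is making this search tractable when the follower's best response has no closed form; the brute-force loop above is only the conceptual skeleton.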
7

Bhopte, Siddharth, Parthiban Arunasalam, Fadi Alsaleem, Arvind Rao and Nataraj Hariharan. "Power Allocation Towards Hermetic Solder Joint Health of High Powered MEMS Chip for Harsh Environment Applications". In ASME 2011 Pacific Rim Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Systems. ASMEDC, 2011. http://dx.doi.org/10.1115/ipack2011-52296.

Abstract:
Solders have been utilized extensively in the MEMS packaging industry to create vacuum or hermetic seals in a variety of applications. MEMS technology is finding applications in a wide range of products, such as pressure sensors, actuators and flow control devices. For many harsh low-temperature environment applications, such as commercial refrigeration systems, MEMS-based pressure sensors and flow actuators are directly mounted onto metal substrates using solders to create hermetic sealing. Solders attaching silicon devices directly to metal substrates may be subjected to very high thermal stresses due to the significant difference in thermal expansion coefficients during chip operation or changes in environment temperature. In this paper, a case study of a high-powered MEMS chip (referred to in the paper as the die) operating in a commercial refrigeration system is presented. An accelerated test method for qualifying the solder joint for high-pressure applications is briefly discussed. Lab experiments showing the typical refrigeration-cycle thermal load on the solder joint are presented. Based on the study, the concepts of die power toggling and power allocation for enhancing hermetic solder joint reliability are discussed. Detailed numerical case studies are presented to quantify the improvement in solder joint reliability due to the proposed concept.
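The thermal-stress mechanism described above can be bounded with the usual first-order CTE-mismatch estimate, sigma ≈ E · Δα · ΔT. The material values below are textbook-order assumptions (not from the paper), and the formula deliberately ignores joint compliance and geometry.

```python
# First-order estimate of thermal stress from the CTE mismatch between a
# silicon die and a metal substrate: sigma ≈ E * (alpha_sub - alpha_die) * dT.
# Material values are rough textbook assumptions, not from the paper.

def cte_mismatch_stress_mpa(e_gpa, alpha_sub_ppm, alpha_die_ppm, delta_t_c):
    delta_alpha = (alpha_sub_ppm - alpha_die_ppm) * 1e-6  # convert ppm/°C to 1/°C
    return e_gpa * 1e3 * delta_alpha * delta_t_c          # GPa -> MPa

# Solder joint (E ≈ 30 GPa) between Si (≈2.6 ppm/°C) and copper (≈17 ppm/°C)
# over a 100 °C thermal swing:
print(round(cte_mismatch_stress_mpa(30.0, 17.0, 2.6, 100.0), 1))
```

Even this crude estimate lands in the tens of MPa, comparable to the yield strength of common solders, which is why the abstract treats cyclic thermal load as the dominant reliability threat.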
8

Thibodeaux, Robert L., Logan E. Smith and Azfar Mahmood. "Optimizing Tubular Running Services Through Digital Solutions – Doing More with Less!" In Offshore Technology Conference Asia. OTC, 2022. http://dx.doi.org/10.4043/31495-ms.

Abstract:
Digital transformation is a term that continues to be popular in the oil and gas industry. The industry's historic opposition to the adoption of innovative technology seems to be fading as operators, contractors, and service providers alike continue to invest in innovative solutions, not only in digital technologies but also in process and system optimization techniques. However, while operators are more willing to adopt newer and automated technologies, the "proof of value" burden still falls on service companies. Perceived value to operators may vary slightly, but overall, the industry has focused on two core tenets of value: increased safety and efficiency, and personnel reduction. For widespread adoption of an enhanced digital solution, the technology must not only provide quantifiable value in at least one of the core tenets, but must also repeatably demonstrate that value in the field. The case study presented demonstrates the value added by introducing a new proprietary Programmable Logic Controller (PLC) based solution into the tubular running process. This system allows tong operation, elevator and slip function, and single joint elevator (SJE) operation to be performed by a single person, rather than the three- or four-person crew traditionally employed during tubular running operations. All functions are intelligently executed on a single operator's command from a triple-certified, hazardous-zone-rated wireless tablet inside the driller's cabin. Through the deployment of a new consolidated and intelligent control system, the rig was able to reduce the number of personnel typically required for casing run and rack back operations down to two operators per tower, which equates to as much as a 66% reduction in personnel needed for tubular running operations.
Additionally, the system allowed the operator to control the equipment from inside the driller's cabin, which improved communications and reduced red zone exposure by 30% while increasing run time efficiency by as much as 11% on some connection strings.
9

Hoog, Sven, Joachim Berger, Johannes Myland, Günther F. Clauss, Daniel Testa and Florian Sprenger. "The Mooring Bay Concept for LNG Loading in Harsh and Ice Conditions". In ASME 2012 31st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/omae2012-83841.

Abstract:
The demand for natural gas from offshore fields is continuously increasing. Future production from Arctic waters in particular is coming into focus in the context of global warming, leading to the development of dedicated technology. Relevant approaches work with floating turret-moored production terminals (FLNG) receiving gas via flexible risers from subsea or onshore fields. These terminals provide on-board gas treatment and liquefaction facilities as well as huge storage capabilities for LNG (Liquefied Natural Gas), LPG (Liquefied Petroleum Gas) and condensate. Products are transferred to periodically operating shuttle tankers for onshore supply, reducing the need for local onshore processing plants and providing increased production flexibility (future movability or adaptation of capacity). Nevertheless, in case of harsh environmental conditions or ice coverage, the offshore transfer of cryogenic liquids between the terminal and the tankers becomes a major challenge. In the framework of the joint research project MPLS20 ([1]), an innovative offshore mooring and cargo transfer system has been developed and analyzed. MPLS20 is developed by the project partners Nexans ([2]) and Brugg ([3]), leading manufacturers of vacuum-insulated, flexible cryogenic transfer pipes; IMPaC ([4]), an innovative engineering company that has been involved in many projects for the international oil and gas industry for more than 25 years; and the Technical University (TU) Berlin, Department of Land- and Sea Transportation Systems (NAOE, [5]), with great expertise in numerical analyses and model tests. The overall system is based on IMPaC’s patented and certified offshore ‘Mooring Bay’ concept, allowing mooring of the vessels in tandem configuration and simultaneous handling and operation of up to six flexible transfer pipes in full aerial mode.
The concept is designed to operate with flexible transfer lines of 16-inch inner diameter, such as the newly designed and certified corrugated pipes from Nexans and Brugg. The mooring concept and its major subsystems have proven their operability by means of extensive numerical analysis, model tests and a professional ship handling simulator, resulting in an overall transfer solution especially suitable for use under Arctic conditions such as those addressed by the EU joint research project ACCESS (http://access-eu.org/). The paper introduces the new offshore LNG transfer system and focuses especially on its safe and reliable operability in the Arctic, with ice coverage as well as in open water conditions.
10

Schwind, Nicolas, Katsumi Inoue, Sébastien Konieczny, Jean-Marie Lagniez and Pierre Marquis. "What Has Been Said? Identifying the Change Formula in a Belief Revision Scenario". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/258.

Abstract:
We consider the problem of identifying the change formula in a belief revision scenario: given that an unknown announcement (a formula mu) led a set of agents to revise their beliefs, and given the prior beliefs and the revised beliefs of the agents, what can be said about mu? We show that under weak conditions about the rationality of the revision operators used by the agents, the set of candidate formulae has the form of a logical interval. We explain how the bounds of this interval can be tightened when the revision operators used by the agents are known and/or when mu is known to be independent of a given set of variables. We also investigate the completeness issue, i.e., whether mu can be exactly identified. We present some sufficient conditions for it, identify its computational complexity, and report the results of some experiments about it.
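The identification problem can be made concrete with a brute-force sketch over two propositional variables: enumerate every candidate announcement mu (as a set of models) and keep those for which revising the prior beliefs yields the observed revised beliefs. Dalal (Hamming-distance) revision stands in here for the agents' unknown rational revision operator, and the specific prior and revised belief states are illustrative.

```python
from itertools import chain, combinations

# Brute-force identification of the change formula mu over two propositional
# variables (a, b). Dalal revision is used as an assumed concrete operator;
# the paper's result holds for a broader class of rational operators.

MODELS = [(a, b) for a in (0, 1) for b in (0, 1)]

def hamming(m1, m2):
    return sum(x != y for x, y in zip(m1, m2))

def dalal_revise(prior, mu):
    """Models of mu at minimum Hamming distance from the prior models."""
    d = min(min(hamming(m, p) for p in prior) for m in mu)
    return frozenset(m for m in mu
                     if min(hamming(m, p) for p in prior) == d)

def candidate_formulae(prior, revised):
    subsets = chain.from_iterable(combinations(MODELS, r)
                                  for r in range(1, len(MODELS) + 1))
    return [frozenset(mu) for mu in subsets
            if dalal_revise(prior, frozenset(mu)) == revised]

prior = frozenset({(1, 1)})      # the agent believed a AND b
revised = frozenset({(0, 1)})    # after mu, it believes (NOT a) AND b
print(len(candidate_formulae(prior, revised)))
```

Consistent with the paper's interval result, the two surviving candidates in this toy instance are exactly the formulas between ¬a ∧ b and ¬a.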

Organizational reports on the topic „Hash Join Operator”

1

Lynk, John. PR-610-163756-WEB Material Strength Verification. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 2019. http://dx.doi.org/10.55274/r0011573.

Abstract:
DATE: Tuesday, April 30, 2019 TIME: 11:00 a.m. ET CLICK THE DOWNLOAD/BUY BUTTON TO ACCESS THE WEBINAR REGISTRATION LINK Join the PRCI Integrity and Inspection technical committee for a pipeline-operator-driven discussion of PRCI research related to non-destructive technologies for pipe material verification and how operators have applied this research in the field. This webinar will include a research project overview, operator case studies, and an analysis of current technology gaps. Panelists: Mark Piazza, Manager, Pipeline Compliance and R&D, Colonial Pipeline Company; Mike Kern, Director of Gas Transmission Engineering, National Grid; Oliver Burkinshaw, Senior Materials Engineer, ROSEN; Simon Bellemare, Founder and CEO of Massachusetts Materials Technologies; John Lynk, Program Manager, Integrity and Inspection and Subsea Technical Committees, PRCI. Expected benefits/learning outcomes: - In-ditch non-destructive evaluation for material yield strength that has been utilized on in-service lines to confirm incomplete records of pipe grades and/or to evaluate acquired assets - How the data has been utilized to collect opportunistic data as part of external corrosion direct assessments to provide a basis for maximum allowable operating pressure, as well as to prioritize and set criteria for further inspection and potential capital projects - The ability to differentiate specific manufacturing processes, such as low-frequency and high-frequency electric resistance welded longitudinal seams, which has been successfully applied on a number of pipeline integrity projects - Enhancement of inline inspection technologies combined with verification digs that has demonstrated the potential to apply pipe-joint-specific strength data in fitness-for-service, as opposed to lower minimum values set by pipe grade or by nominal conservative assumptions.
Who should attend: - Pipeline integrity engineers, specialists and management - Pipe materials specialists Recommended pre-reading: PR-610-163756-R01 Hardness Strength and Ductility (HSD) Testing of Line Pipes Initial Validation Testing Phase I; PR-335-173816-MV Validation of In-Situ Methods for Material Property Determination. CLICK THE DOWNLOAD/BUY BUTTON TO ACCESS THE WEBINAR REGISTRATION LINK Not able to attend? Register anyway to automatically receive a link to the webinar recording to view on-demand at your convenience.
2

A. Komnos, Georgios, Antonios Papadopoulos, Efstratios Athanaselis, Theofilos Karachalios and Sokratis E. Varitimidis. Migrating Periprosthetic Infection from a Total Hip Replacement to a Contralateral Non-Operated Osteoarthritic Knee Joint. Science Repository, January 2023. http://dx.doi.org/10.31487/j.ijscr.2022.03.02.

Abstract:
Introduction: There is a paucity of published data on whether a treated infected arthroplasty is a risk factor for infection in another, non-operated joint. Contamination of a primary, arthritic, non-operated joint from an infected arthroplasty is a relatively rare entity. Case: We report a case of migration of a pathogen (Enterococcus faecalis) from an infected prosthetic joint (hip) to the contralateral native joint (knee). Identification of the pathogen was made with PCR, by obtaining cultures during the implantation of the primary knee prosthesis. Conclusion: Contamination of a primary, arthritic, non-operated joint from an infected arthroplasty has not been widely reported. Management of such cases is extremely challenging and without clear and established guidelines. Our experience shows that tissue samples should be taken intraoperatively and sent for cultures, so as to exclude contamination in those cases.
3

Kiefner. L52274 Survey and Interpretive Review of Operator Practices for Damage Prevention. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 2007. http://dx.doi.org/10.55274/r0010387.

Abstract:
Pipeline incident data show that about 22 percent of the pipeline incidents reported to the DOT for the period from 1995 through 2003 were caused by excavation damage. Only incidents from all forms of corrosion combined account for a higher proportion, at around 24 percent. This work presents the results of a survey and interpretive review of pipeline operator practices for the prevention of damage to pipelines from mechanical excavating equipment. A recent joint industry-DOT project produced a compendium of practices and technologies for preventing damage to pipelines. The work described herein represents a follow-on effort to determine how effective these practices and technologies have been at preventing damage. The project was aimed at determining which techniques are effective, which are not, and which would be worth further investigation.
4

Koduru, Smitha. PR-244-173856-WEB ILI Crack Tool Reliability and Performance Evaluation. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 2019. http://dx.doi.org/10.55274/r0011617.

Abstract:
Wednesday, October 2, 2019 11:00 a.m. ET PRESENTER: Smitha Koduru, PhD, C-FER Technologies HOST: Steven Bott, Enbridge MODERATOR: John Lynk, PRCI CLICK THE DOWNLOAD/BUY BUTTON TO ACCESS THE WEBINAR REGISTRATION LINK Join the PRCI Integrity and Inspection Technical Committee as they present an expansion of previous PRCI research related to ILI performance data. The new research has been expanded to include experience with UT and EMAT in-line inspection data aligned with in-the-ditch NDE results. Also included are improved statistical characterization of crack in-line inspection performance; recommendations for in-the-ditch NDE to increase the reliable application of crack ILI to manage cracking and SCC; and information collected to maximize the ability of operators to measure crack ILI performance. Learning outcomes/benefits of attending this webinar: - Learn about the data sets featured in the industry-wide database for crack features identified with in-line inspection tools (ILI) and/or field non-destructive examination (NDE) - Know the influence of pipe attributes, such as seam weld type, and of NDE performance on the crack detection and sizing performance assessment of ILI tools - Understand the methods required to use data from multiple ILI runs and field measurements for increased confidence in crack detection and sizing - Recognize the value of collecting full crack profile data for integrity management Who should attend? - Integrity personnel, analysts, engineers and management - Inline inspection vendor personnel Recommended pre-reading: PR-244-173856-R01 In-line Inspection Crack Tool Reliability and Performance Evaluation Not able to attend? Register anyway to automatically receive a link to the webinar recording to view on-demand at your convenience. Attendance is limited to the first 500 registrants to join the webinar. All remaining registrants will receive a link to view the webinar recording.
After registering, you will receive a confirmation email containing information about joining the webinar.
5

Beshouri, Greg. PR-309-14212-WEB Field Demonstration of Fully Integrated NSCR System. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 2019. http://dx.doi.org/10.55274/r0011623.

Abstract:
Wednesday, October 9, 2019 3:30 p.m. ET PRESENTER: Bob Goffin, Advanced Engine Technologies HOST: Chris Nowak, Kinder Morgan MODERATOR: Gary Choquette, PRCI CLICK THE DOWNLOAD/BUY BUTTON TO ACCESS THE WEBINAR REGISTRATION LINK While superficially a "simple and proven" technology, non-selective catalytic reduction (NSCR) control is in fact extremely complex, far more complex than the control of lean burn engines. Using a systems approach, PRCI research partners defined the most common failure modes for each of the components of the NSCR system. Both regulators and operators often make simplistic assumptions regarding the reliability and robustness of NSCR control. Real-world experience has shown those assumptions to be unfounded. Legacy NSCR systems can go "out of compliance," resulting in gross emissions deviations, while remaining "in control." This webinar will review the reasons for those deviations and then postulate a system design capable of remaining both "in control" and "in compliance." This system was then designed, developed, installed and tested. The results confirmed the theoretical analysis, resulting in satisfactory system performance. The result offers regulators and operators guidelines on procuring and/or developing NSCR systems that will satisfy regulatory expectations. Learning outcomes/benefits of attending include: - Explains how legacy rich burn engines can be upgraded with NSCR and advanced controls - Explores the instrumentation required - Looks at the control algorithms involved Who should attend: - Pipeline operators - Reliability engineers and technicians - Emissions compliance specialists Recommended pre-reading: PR-309-14212-R01 Field Demonstration of Fully Integrated NSCR System Not able to attend? Register anyway to automatically receive a link to the webinar recording to view on-demand at your convenience. Attendance is limited to the first 500 registrants to join the webinar.
All remaining registrants will receive a link to view the webinar recording. After registering, you will receive a confirmation email containing information about joining the webinar.
6

Tucker. L51728 Feasibility of a Pipeline Field Weld Real-Time Radiography (Radioscopy) Inspection System. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 1995. http://dx.doi.org/10.55274/r0010117.

Abstract:
Inspection of pipeline field girth welds during pipeline construction is accomplished by film radiographic methods. Film radiography of materials is a 70-year-old technology. There have been many advances in that 70-year history in equipment and films, but the process of making the radiograph is essentially the same. The film radiography process is time-consuming, costly, environmentally impacting and very operator (inspector) dependent. There are recent and almost daily advances in technologies using x-ray imaging other than film. Double-jointed pipe welds at pipe mills and at double-joint operations have been inspected with stationary real-time radioscopic systems for many years. This electronic imaging technology, known as "radioscopy", has the potential to significantly improve pipeline project schedules and cost by eliminating some of the shortcomings of film radiography. Radioscopy is currently accepted for use by many nationally accepted standards, including API 5L, Specification for Line Pipe, and API 1104, Welding of Pipelines and Related Facilities. Most of the real-time systems in use today are fixed installations in pipe mills, foundries or fabrication shops. The ability to produce the required image sensitivity with real-time imaging has been established by these fixed installations. These systems have proven to be very cost effective. In the course of conducting this study, QCC attended several conferences, including the International Society for Optical Engineering (SPIE) Conference in Boston, contacted several hundred potential vendors of radioscopic and radiographic equipment, witnessed demonstrations of existing radioscopic imaging systems and conducted several breadboard system demonstrations. The enclosed exhibit section contains a list of vendors that have products applicable to a radioscopic system.
7

Wehr, Tobias, ed. EarthCARE Mission Requirements Document. European Space Agency, November 2006. http://dx.doi.org/10.5270/esa.earthcare-mrd.2006.

Abstract:
ESA's EarthCARE (Cloud, Aerosol and Radiation Explorer) mission - scheduled to be launched in 2024 - is the largest and most complex Earth Explorer to date and will advance our understanding of the role that clouds and aerosols play in reflecting incident solar radiation back into space and trapping infrared radiation emitted from Earth's surface. The mission is being implemented in cooperation with JAXA (Japan Aerospace Exploration Agency). It carries four scientific instruments. The Atmospheric Lidar (ATLID), operating at 355 nm wavelength and equipped with a high-spectral resolution and depolarisation receiver, measures profiles of aerosols and thin clouds. The Cloud Profiling Radar (CPR, contribution of JAXA), operates at 94 GHz to measure clouds and precipitation, as well as vertical motion through its Doppler functionality. The Multi-Spectral Imager provides across-track information of clouds and aerosols. The Broad-Band Radiometer (BBR) measures the outgoing reflected solar and emitted thermal radiation in order to derive broad-band radiative fluxes at the top of atmosphere. The Mission Requirement Document defines the scientific mission objectives and observational requirements of EarthCARE. The document has been written by the ESA-JAXA Joint Mission Advisory Group for EarthCARE.
8

Salter and Weston. L51534 A Study of New Joining Processes for Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 1987. http://dx.doi.org/10.55274/r0010083.

Abstract:
Over many decades it has been accepted that the most economical way to produce a pipeline is to join together the standard lengths of pipe as quickly as possible, using a highly mobile task force of welders and other technicians, leaving tie-ins, crossings, etc. to smaller specialist crews. The work pattern which evolved almost invariably involved several crews of welders strung out along the pipeline, progress being controlled by the rate at which the leading pair could complete the weld root. The spread from this first crew to final inspection could cover a considerable distance, acceptable on land but not offshore (a rapidly increasing need which reached a peak in the 1970s). Offshore operation, involving costly lay barges, demanded even higher throughput rates from a more compact working spread. In common with most manufacturing technologies, there was an increasing dissatisfaction with a system which relied entirely on the skill of a limited number of highly paid men who had little incentive to change their working practices. Increasingly there came reports of the development of new approaches to joining line pipe, ranging from the mechanization of arc welding to entirely different forms of joining, for example electron beam welding or mechanical joining. The investment in some of these developments is reported to be several million dollars. The review of present pipelining practice shows that only a handful have been put to practical use, and in the western world probably only one, an arc welding variant, has been used to produce more than a few hundred miles of pipeline. The information available on these developments is sparse and is scattered among a range of companies and research agencies. A literature review and research study was undertaken to collect together as much of this information as is available, assemble it into a coherent and useable form, and identify those developments which show the most promise to fulfill future needs.
The main body of the report, which reviews development of the welding processes, is divided into three main joining categories: Fusion Welding, Forge Welding and Mechanical Interference Joining. Within each category each process is considered separately in terms of process principles, general applications, application to pipeline welding, equipment for pipe welding, consumables, process tolerance and skill requirements, weld quality and inspection, process economics, limitations and future developments. This comprehensive study and report also compare the economics of the various alternatives. For each process an estimate has been made of the procedural and development costs involved, as well as personnel needs and likely production rates.
9

Lazonick, William, Philip Moss and Joshua Weitz. The Unmaking of the Black Blue-Collar Middle Class. Institute for New Economic Thinking Working Paper Series, May 2021. http://dx.doi.org/10.36687/inetwp159.

Abstract:
In the decade after the Civil Rights Act of 1964, African Americans made historic gains in accessing employment opportunities in racially integrated workplaces in U.S. business firms and government agencies. In the previous working papers in this series, we have shown that in the 1960s and 1970s, Blacks without college degrees were gaining access to the American middle class by moving into well-paid unionized jobs in capital-intensive mass production industries. At that time, major U.S. companies paid these blue-collar workers middle-class wages, offered stable employment, and provided employees with health and retirement benefits. Of particular importance to Blacks was the opening up to them of unionized semiskilled operative and skilled craft jobs, for which in a number of industries, and particularly those in the automobile and electronic manufacturing sectors, there was strong demand. In addition, by the end of the 1970s, buoyed by affirmative action and the growth of public-service employment, Blacks were experiencing upward mobility through employment in government agencies at local, state, and federal levels as well as in civil-society organizations, largely funded by government, to operate social and community development programs aimed at urban areas where Blacks lived. By the end of the 1970s, there was an emergent blue-collar Black middle class in the United States. Most of these workers had no more than high-school educations but had sufficient earnings and benefits to provide their families with economic security, including realistic expectations that their children would have the opportunity to move up the economic ladder to join the ranks of the college-educated white-collar middle class. 
That is what had happened for whites in the post-World War II decades, and given the momentum provided by the dominant position of the United States in global manufacturing and the nation’s equal employment opportunity legislation, there was every reason to believe that Blacks would experience intergenerational upward mobility along a similar education-and-employment career path. That did not happen. Overall, the 1980s and 1990s were decades of economic growth in the United States. For the emerging blue-collar Black middle class, however, the experience was of job loss, economic insecurity, and downward mobility. As the twentieth century ended and the twenty-first century began, moreover, it became apparent that this downward spiral was not confined to Blacks. Whites with only high-school educations also saw their blue-collar employment opportunities disappear, accompanied by lower wages, fewer benefits, and less security for those who continued to find employment in these jobs. The distress experienced by white Americans with the decline of the blue-collar middle class follows the downward trajectory that has adversely affected the socioeconomic positions of the much more vulnerable blue-collar Black middle class from the early 1980s. In this paper, we document when, how, and why the unmaking of the blue-collar Black middle class occurred and intergenerational upward mobility of Blacks to the college-educated middle class was stifled. We focus on blue-collar layoffs and manufacturing-plant closings in an important sector for Black employment, the automobile industry from the early 1980s. We then document the adverse impact on Blacks that has occurred in government-sector employment in a financialized economy in which the dominant ideology is that concentration of income among the richest households promotes productive investment, with government spending only impeding that objective. 
Reduction of taxes primarily on the wealthy and the corporate sector, the ascendancy of political and economic beliefs that celebrate the efficiency and dynamism of “free market” business enterprise, and the denigration of the idea that government can solve social problems all combined to shrink government budgets, diminish regulatory enforcement, and scuttle initiatives that previously provided greater opportunity for African Americans in the government and civil-society sectors.
10

Lacerda Silva, P., G. R. Chalmers, A. M. M. Bustin, and R. M. Bustin. Gas geochemistry and the origins of H2S in the Montney Formation. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329794.

Full text of the source
Abstract:
The geology of the Montney Formation and the geochemistry of its produced fluids, including nonhydrocarbon gases such as hydrogen sulphide, were investigated for both the Alberta and BC play areas. Key parameters for understanding a complex petroleum system like the Montney play include changes in thickness, depth of burial, mass balance calculations, timing and magnitudes of paleotemperature exposure, as well as kerogen concentration and types to determine the distribution of hydrocarbon composition, H2S concentrations, and CO2 concentrations. Results show that there are first-, second-, and third-order variations in the maturation patterns that impact the hydrocarbon composition. Isomer ratio calculations for butane and propane, in combination with excess methane estimation from produced fluids, are powerful tools to highlight effects of migration on the hydrocarbon distribution. The present-day distribution of hydrocarbons is a result of fluid mixing between hydrocarbons generated in situ and shorter-chained hydrocarbons (i.e., methane) migrated from deeper, more mature areas proximal to the deformation front, along structural elements like the Fort St. John Graben, as well as through areas of lithology with higher permeability. The BC Montney play appears to have a hydrocarbon composition that reflects a larger contribution from in-situ generation, while the Montney play in Alberta derives a higher proportion of its hydrocarbon volumes from migrated hydrocarbons. Hydrogen sulphide is observed to be laterally discontinuous and found in discrete zones or pockets. The locations of higher concentrations of hydrogen sulphide do not align with the sulphate-rich facies of the Charlie Lake Formation but can be seen to underlie areas of higher sulphate ion concentrations in the formation water.
There is some alignment between CO2 and H2S, particularly south of Dawson Creek; however, the cross-plot of CO2 and H2S illustrates some deviation away from any correlation and there must be other processes at play (i.e., decomposition of kerogen or carbonate dissolution). The sources of sulphur in the produced H2S were investigated through isotopic analyses coupled with scanning electron microscopy, energy dispersive spectroscopy, and mineralogy by X-ray diffraction. The Montney Formation in BC can contain small discrete amounts of sulphur in the form of anhydrite as shown by XRD and SEM-EDX results. Sulphur isotopic analyses indicate that the most likely source of sulphur is from Triassic rocks, in particular, the Charlie Lake Formation, due to its close proximity, its high concentration of anhydrite (18-42%), and the evidence that dissolved sulphate ions migrated within the groundwater in fractures and transported anhydrite into the Halfway Formation and into the Montney Formation. The isotopic signature shows the sulphur isotopic ratio of the anhydrite in the Montney Formation is in the same range as the sulphur within the H2S gas and is a lighter ratio than what is found in Devonian anhydrite and H2S gas. This integrated study contributes to a better understanding of the hydrocarbon system for enhancing the efficiency of and optimizing the planning of drilling and production operations. Operators in BC should include mapping of the Charlie Lake evaporites and structural elements, three-dimensional seismic and sulphate ion concentrations in the connate water, when planning wells, in order to reduce the risk of encountering unexpected souring.