Academic literature on the topic 'GPU pipeline'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'GPU pipeline.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "GPU pipeline":

1

Magro, A., K. Zarb Adami, and J. Hickish. "GPU-Powered Coherent Beamforming." Journal of Astronomical Instrumentation 04, no. 01n02 (June 2015): 1550002. http://dx.doi.org/10.1142/s2251171715500026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Graphics processing unit (GPU)-based beamforming is a relatively unexplored area in radio astronomy, possibly due to the assumption that any such system will be severely limited by the PCIe bandwidth required to transfer data to the GPU. We have developed a CUDA-based GPU implementation of a coherent beamformer, specifically designed and optimized for deployment at the BEST-2 array, which can generate an arbitrary number of synthesized beams for a wide range of parameters. It achieves [Formula: see text] TFLOPs on an NVIDIA Tesla K20, approximately 10x faster than an optimized, multithreaded CPU implementation. This kernel has been integrated into two real-time, GPU-based time-domain software pipelines deployed at the BEST-2 array in Medicina: a standalone beamforming pipeline and a transient detection pipeline. We present performance benchmarks for the beamforming kernel and for the transient detection pipeline with beamforming capabilities, together with results of a test observation.
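As a minimal sketch of the idea behind this entry (a NumPy illustration, not the authors' CUDA kernel), coherent beamforming applies a per-antenna phase weight that undoes the geometric delay for the steered direction, then sums across antennas:

```python
import numpy as np

def coherent_beamform(signals, delays, freq):
    # Per-antenna phase weights undo the geometric delay of the target
    # direction, so the desired wavefront adds in phase across antennas.
    weights = np.exp(2j * np.pi * freq * delays)       # (n_antennas,)
    return np.sum(weights[:, None] * signals, axis=0)  # (n_samples,)

# A plane wave arriving from the steered direction sums coherently:
rng = np.random.default_rng(0)
n_ant, freq = 8, 100e6
t = np.arange(64) / 1e9
delays = rng.uniform(0, 1e-8, size=n_ant)
signals = np.exp(2j * np.pi * freq * (t[None, :] - delays[:, None]))
beam = coherent_beamform(signals, delays, freq)
assert np.allclose(np.abs(beam), n_ant)  # amplitude grows n-fold
```

The GPU version amounts to running this weighted sum for many beams and many frequency channels in parallel.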
2

Movania, Muhammad Mobeen, and Lin Feng. "A Novel GPU-Based Deformation Pipeline." ISRN Computer Graphics 2012 (December 15, 2012): 1–8. http://dx.doi.org/10.5402/2012/936315.

Abstract:
We present a new deformation pipeline that is independent of the integration solver used and allows fast rendering of deformable soft bodies on the GPU. The proposed method exploits the transform feedback mechanism of the modern GPU to bypass the CPU read-back, thus, reusing the modified positions and/or velocities of the deformable object in a single pass in real time. The whole process is being carried out on the GPU. Prior approaches have resorted to CPU read-back along with the GPGPU mechanism. In contrast, our approach does not require these steps thus saving the GPU bandwidth for other tasks. We describe our algorithm along with implementation details on the modern GPU and finally conclude with a look at the experimental results. We show how easy it is to integrate any existing integration solver into the proposed pipeline by implementing explicit Euler integration in the vertex shader on the GPU.
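The integration step this pipeline keeps on the GPU can be sketched as follows (shown in NumPy rather than a vertex shader; the falling-vertex example is illustrative):

```python
import numpy as np

def explicit_euler_step(pos, vel, force, mass, dt):
    # One explicit Euler step per vertex. In the paper's pipeline this
    # update runs in the vertex shader, and transform feedback writes
    # pos/vel back into vertex buffers, avoiding any CPU read-back.
    new_pos = pos + dt * vel
    new_vel = vel + dt * force / mass
    return new_pos, new_vel

# Example: a single vertex under gravity (illustrative numbers).
pos = np.array([0.0, 1.0, 0.0])
vel = np.zeros(3)
gravity = np.array([0.0, -9.81, 0.0])
pos, vel = explicit_euler_step(pos, vel, gravity, mass=1.0, dt=0.1)
assert np.allclose(vel, [0.0, -0.981, 0.0])
assert np.allclose(pos, [0.0, 1.0, 0.0])  # position lags one step behind
```

Swapping in another solver only means replacing the body of `explicit_euler_step`, which is the independence property the abstract highlights.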
3

Vasyliv, O. B., O. S. Titlov, and T. A. Sagala. "Modeling of the modes of natural gas transportation by main gas pipelines in the conditions of underloading." Oil and Gas Power Engineering, no. 2(32) (December 27, 2019): 35–42. http://dx.doi.org/10.31471/1993-9868-2019-2(32)-35-42.

Abstract:
The current state of natural gas transit through the Ukrainian gas transmission system (GTS) is assessed in the paper. The prerequisites for a further reduction of the GTS load in the coming years are considered, in particular in the direction of Europe through the gas measuring station "Orlivka" (southern direction), taking into account the construction of alternative bypass gas pipelines. Based on a review of the literature on the efficient operation of gas pipelines under underloading, a method was developed for determining the capacity and energy consumption of a gas pipeline for a given combination of working gas pumping units (GPU). The Tarutino-Orlivka section of the Ananyev-Tiraspol-Izmail gas pipeline was selected as the object of research. The methodology includes calculating the physical properties of the gas from its composition, the gas compression, the linear part, the gas consumed for the compressor station's own needs, and the total power of the gas pumping units under the specified technological limitations. Using original software developed in MATLAB, cyclical multivariate calculations of the capacity and energy consumption of the gas pipeline were carried out, and the operating modes of the compressor shop were optimized in the load range of 23 ... 60 million m3/day. The optimization criterion is the minimum total power of the GPU; the variable parameters are the supercharger speeds, the combination of working GPU, and the load factor. Based on the optimization results, graphical dependences were constructed: the optimal supercharger rotor frequency versus pipeline throughput, and the changes in power and pressure versus pipeline throughput when operating different combinations of superchargers. Recommendations have been developed to minimize fuel gas costs at the compressor station.
4

Kingyens, Jeffrey, and J. Gregory Steffan. "The Potential for a GPU-Like Overlay Architecture for FPGAs." International Journal of Reconfigurable Computing 2011 (2011): 1–15. http://dx.doi.org/10.1155/2011/514581.

Abstract:
We propose a soft processor programming model and architecture inspired by graphics processing units (GPUs) that are well-matched to the strengths of FPGAs, namely, highly parallel and pipelinable computation. In particular, our soft processor architecture exploits multithreading, vector operations, and predication to supply a floating-point pipeline of 64 stages via hardware support for up to 256 concurrent thread contexts. The key new contributions of our architecture are mechanisms for managing threads and register files that maximize data-level and instruction-level parallelism while overcoming the challenges of port limitations of FPGA block memories as well as memory and pipeline latency. Through simulation of a system that (i) is programmable via NVIDIA's high-level Cg language, (ii) supports AMD's CTM r5xx GPU ISA, and (iii) is realizable on an XtremeData XD1000 FPGA-based accelerator system, we demonstrate the potential for such a system to achieve 100% utilization of a deeply pipelined floating-point datapath.
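The latency-hiding arithmetic behind that 100% figure can be illustrated with a toy model (our simplification, not the authors' architecture): a barrel-style scheduler issues at most one instruction per cycle, and each issued instruction occupies the pipeline for its full depth before that thread can issue again, so a D-stage pipeline stays fully utilized once at least D thread contexts exist:

```python
def pipeline_utilization(n_threads, depth, cycles=10_000):
    """Toy barrel-scheduler model: one instruction may issue per cycle,
    and an issued instruction keeps its thread busy for `depth` cycles
    (the pipeline latency) before that thread can issue again."""
    ready_at = [0] * n_threads  # cycle at which each thread may issue
    issued = 0
    for now in range(cycles):
        for i in range(n_threads):
            if ready_at[i] <= now:
                ready_at[i] = now + depth
                issued += 1
                break
    return issued / cycles

# With >= depth contexts (the paper's 256 threads, 64 stages), the
# pipeline never starves; with fewer, utilization drops to T / depth.
assert pipeline_utilization(256, 64) == 1.0
assert abs(pipeline_utilization(32, 64) - 0.5) < 0.01
```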
5

Wang, Ke Nian, and Hui Min Du. "The FPGA Design and Implementation of Pipeline Image Processing in the GPU System." Applied Mechanics and Materials 380-384 (August 2013): 3807–10. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3807.

Abstract:
In GPU systems, pipeline image processing faces problems such as the large amount of data to be processed, complicated processing procedures, and numerous data transmission channels, all of which lead to low processing speed and large circuit area. This paper proposes an FPGA design of the pipeline image processing in a GPU. The design has been implemented with a foam extrusion pipeline architecture and validated on a Xilinx Virtex XC6VLX550T FPGA. The results show that the resource consumption is 390726.09 and the speed is 200 MHz.
6

Xiang, Yue, Peng Wang, Bo Yu, and Dongliang Sun. "GPU-accelerated hydraulic simulations of large-scale natural gas pipeline networks based on a two-level parallel process." Oil & Gas Science and Technology – Revue d’IFP Energies nouvelles 75 (2020): 86. http://dx.doi.org/10.2516/ogst/2020076.

Abstract:
The numerical simulation efficiency of large-scale natural gas pipeline networks is usually unsatisfactory. In this paper, Graphics Processing Unit (GPU)-accelerated hydraulic simulations for large-scale natural gas pipeline networks are presented. First, based on the Decoupled Implicit Method for Efficient Network Simulation (DIMENS), presented in our previous study, a novel two-level parallel simulation process and the corresponding parallel numerical method for hydraulic simulations of natural gas pipeline networks are proposed. Then, the GPU implementation of the two-level parallel simulation is introduced in detail. Finally, numerical experiments are provided to test the performance of the proposed method. The results show a notable speedup: for five large-scale pipe networks, compared with the well-known commercial simulation software SPS, the speedup ratio of the proposed method is up to 57.57 with comparable calculation accuracy. More encouragingly, the proposed method adapts well to large pipeline networks: the larger the pipeline network, the larger the speedup ratio. The speedup ratio of the GPU method depends approximately linearly on the total number of discrete points in the network.
7

Akyüz, Ahmet Oğuz. "High dynamic range imaging pipeline on the GPU." Journal of Real-Time Image Processing 10, no. 2 (September 12, 2012): 273–87. http://dx.doi.org/10.1007/s11554-012-0270-9.

8

Cao, Wei, Zheng Hua Wang, and Chuan Fu Xu. "A Survey of General Purpose Computation of GPU for Computational Fluid Dynamics." Advanced Materials Research 753-755 (August 2013): 2731–35. http://dx.doi.org/10.4028/www.scientific.net/amr.753-755.2731.

Abstract:
The graphics processing unit (GPU) has evolved from configurable graphics processor to a powerful engine for high performance computer. In this paper, we describe the graphics pipeline of GPU, and introduce the history and evolution of GPU architecture. We also provide a summary of software environments used on GPU, from graphics APIs to non-graphics APIs. At last, we present the GPU computing in computational fluid dynamics applications, including the GPGPU computing for Navier-Stokes equations methods and the GPGPU computing for Lattice Boltzmann method.
9

Abdellah, Marwan, Ayman Eldeib, and Amr Sharawi. "High Performance GPU-Based Fourier Volume Rendering." International Journal of Biomedical Imaging 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/590727.

Abstract:
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms that are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive competent platform that can deliver giant computational raw power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly-parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
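The projection-slice theorem this pipeline relies on is easy to verify in 2D (a NumPy check of the mathematics, unrelated to the authors' CUDA implementation): the 1D Fourier transform of a parallel projection equals the central slice of the image's 2D Fourier transform.

```python
import numpy as np

# 2D check of the projection-slice theorem: the 1D FFT of a projection
# equals the central (k_y = 0) slice of the 2D FFT.
rng = np.random.default_rng(1)
img = rng.random((64, 64))

projection = img.sum(axis=0)            # parallel "X-ray" projection
central_slice = np.fft.fft2(img)[0, :]  # k_y = 0 row of the 2D FFT

assert np.allclose(np.fft.fft(projection), central_slice)
```

FVR exploits this in 3D: projections are obtained by extracting 2D slices from the volume's precomputed 3D spectrum and inverse-transforming them, which is cheaper than integrating through the whole volume.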
10

Cheng, Sining, Huiyan Qu, and Xianjun Chen. "Ray tracing collision detection based on GPU pipeline reorganization." Journal of Physics: Conference Series 1732 (January 2021): 012057. http://dx.doi.org/10.1088/1742-6596/1732/1/012057.


Dissertations / Theses on the topic "GPU pipeline":

1

Bexelius, Tobias. "HaGPipe : Programming the graphics pipeline in Haskell." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-6234.

Abstract:
In this paper I present the domain-specific language HaGPipe for graphics programming in Haskell. HaGPipe has a clean, purely functional and strongly typed interface and targets the whole graphics pipeline, including the programmable shaders of the GPU. It can be extended for use with various backends, and this paper provides two different ones. The first generates vertex and fragment shaders in Cg for the GPU, and the second generates vertex shader code for the SPUs on the PlayStation 3. I demonstrate HaGPipe's many capabilities for producing optimized code, including an extensible rewrite-rule framework, automatic packing of vertex data, common subexpression elimination, and both automatic basic-block-level vectorization and loop vectorization through the use of structures of arrays.

2

Pessoa, Saulo Andrade. "Um pipeline para renderização fotorrealística em aplicações de realidade aumentada." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/2337.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The ability to interactively blend the real world with the virtual one has opened up a range of new possibilities in the field of multimedia systems. The research field that addresses this problem is called Augmented Reality. In Augmented Reality, virtual elements can appear visually distinct from the real objects or be photorealistically inserted into the real world. Applications of this second kind include interior design tools, augmented electronic games, and applications for visualizing historical sites. In the surveyed literature there is a gap concerning tools that support the creation of this type of application. To address this, this dissertation proposes a pipeline for photorealistic rendering in Augmented Reality applications that takes into account aspects such as lighting, the reflectance properties of materials, shadowing, the composition of the real world with the virtual world, and camera effects. This pipeline was implemented as an API, enabling two case studies: a material editing tool and an interior design tool. To achieve interactive rendering rates, the pipeline's bottlenecks were implemented on the GPU. The results obtained show that the proposed pipeline offers considerable gains in realism in the visualization of virtual objects.
3

Cui, Xuewen. "Directive-Based Data Partitioning and Pipelining and Auto-Tuning for High-Performance GPU Computing." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/101497.

Abstract:
The computer science community needs simpler mechanisms to achieve the performance potential of accelerators, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi), due to their increasing use in state-of-the-art supercomputers. Over the past 10 years, we have seen a significant improvement in both computing power and memory connection bandwidth for accelerators. However, we also observe that computation power has grown significantly faster than the interconnection bandwidth between the central processing unit (CPU) and the accelerator. Given that accelerators generally have their own discrete memory space, data needs to be copied from the CPU host memory to the accelerator (device) memory before computation starts on the accelerator. Programming models like CUDA, OpenMP, OpenACC, and OpenCL can efficiently offload compute-intensive workloads to these accelerators, but overlapping data transfers with kernel computation in these models is neither simple nor straightforward: code either copies data to or from the device without overlap, or requires explicit user design and refactoring. Achieving performance can require extensive refactoring and hand-tuning to apply data transfer optimizations, and users must manually partition their dataset whenever its size is larger than the device memory, which can be highly difficult when the device memory size is not exposed to the user. As systems become more and more heterogeneous, CPUs are responsible for handling many tasks related to other accelerators: computation and data movement tasks, task dependency checking, and task callbacks. Leaving all control logic to the CPU not only costs extra communication delay over the PCI-e bus but also consumes CPU resources, which may affect the performance of other CPU tasks.
This thesis work aims to provide efficient directive-based data pipelining approaches for GPUs that tackle these issues and improve performance, programmability, and memory management.
Doctor of Philosophy
Over the past decade, parallel accelerators have become increasingly prominent in this emerging era of "big data, big compute, and artificial intelligence." In recent supercomputers and datacenter clusters, we find multi-core central processing units (CPUs), many-core graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi) being used to accelerate many kinds of computation tasks. While many new programming models have been proposed to support these accelerators, scientists and developers without domain knowledge usually find existing programming models not efficient enough to port their code to accelerators. Due to the limited accelerator on-chip memory size, the data array size is often too large to fit in the on-chip memory, especially when dealing with deep learning tasks. The data need to be partitioned and managed properly, which requires more hand-tuning effort. Moreover, it is difficult for developers to tune performance for specific applications due to a lack of domain knowledge. To handle these problems, this dissertation proposes a general approach to provide better programmability, performance, and data management for accelerators. Accelerator users often prefer to keep their existing verified C, C++, or Fortran code rather than grapple with unfamiliar code. Since 2013, OpenMP has provided a straightforward way to adapt existing programs to accelerated systems. We propose multiple associated clauses to help developers easily partition and pipeline accelerated code. Specifically, the proposed extension can efficiently overlap kernel computation and data transfer between host and device. The extension supports memory over-subscription, meaning the memory required by the tasks can be larger than the GPU memory. The internal scheduler guarantees that data is swapped out correctly and efficiently.
Machine learning methods are also leveraged to help with auto-tuning accelerator performance.
4

Doran, Andra. "Occlusion culling et pipeline hybride CPU/GPU pour le rendu temps réel de scènes complexes pour la réalité virtuelle mobile." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2131/.

Abstract:
Nowadays, 3D real-time rendering has become an essential tool for modeling work and maintenance of industrial equipment, for the development of serious or entertainment games, and in general for any visualization application in industry, medical care, architecture, and so on. Currently, this task is generally assigned to graphics hardware, due to its specific design and its dedicated rasterization and texturing units. However, in the context of industrial applications, a wide range of computers is used, heterogeneous in terms of computing power. These architectures are not always equipped with high-end hardware, which may limit their use for this type of application. Current research is strongly oriented towards solutions based on modern high-performance graphics hardware. On the contrary, we do not assume the existence of such hardware on all architectures. We therefore propose to adapt our pipeline to the computing architecture in order to obtain efficient rendering. Our pipeline adapts to the computer's capabilities, taking into account each computing unit, CPU and GPU. The goal is to provide a well-balanced load on the two computing units, thus ensuring real-time rendering of complex scenes, even on low-end computers. This pipeline can be easily integrated into any conventional rendering system and does not require any precomputation step.
5

Crassin, Cyril. "GigaVoxels : un pipeline de rendu basé Voxel pour l'exploration efficace de scènes larges et détaillées." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00650161.

Abstract:
In this thesis, we present a new efficient approach for rendering large scenes and detailed objects in real time. Our approach is based on a new pre-filtered, volumetric representation of geometry and a voxel-based cone tracing that allows accurate, high-performance rendering with high-quality filtering of highly detailed geometry. In order to make this voxel representation a standard real-time rendering primitive, we propose a new GPU-based approach designed entirely to scale and thus support the rendering of very large volumes of data. Our system achieves real-time rendering performance for several billion voxels. Our data structure exploits the fact that in CG scenes, detail is often concentrated at the interface between free space and clusters of density, and shows that volumetric models could become an attractive alternative as a rendering primitive for real-time applications. In this spirit, we allow a trade-off between quality and performance and exploit temporal coherence. Our solution is based on a hierarchical representation of the data, adapted according to the current view and occlusion information, coupled with an efficient ray-tracing rendering algorithm. We introduce a GPU cache mechanism offering very efficient paging of data into video memory, implemented as a highly efficient data-parallel process. This cache is coupled with a data-production pipeline able to dynamically load data from main memory or to produce voxels directly on the GPU. A key element of our method is to guide data production and caching in video memory directly from the data requests and usage information emitted during rendering. We demonstrate our approach with several applications. We also show how our pre-filtered geometric model and approximate cone tracing can be used to compute various blur effects as well as real-time indirect lighting very efficiently.
6

Schertzer, Jérémie. "Exploiting modern GPUs architecture for real-time rendering of massive line sets." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT037.

Abstract:
In this thesis, we consider massive line sets generated from brain tractograms. They describe neural connections represented by millions of poly-line fibers, summing up to billions of segments. Thanks to the two-stage mesh shader pipeline, we build a tractogram renderer surpassing state-of-the-art performance by two orders of magnitude. Our performance comes from fiblets: a compressed representation of segment blocks. By combining temporal coherence and morphological dilation on the z-buffer, we define a fast occlusion culling test for fiblets. Thanks to our heavily optimized parallel decompression algorithm, surviving fiblets are swiftly synthesized into poly-lines. We also showcase how our fiblet pipeline speeds up advanced tractogram interaction features. For the general case of line rendering, we propose morphological marching: a screen-space technique rendering custom-width tubes from the thin rasterized lines of the G-buffer. By approximating a tube as the union of spheres densely distributed along its axes, the sphere shading each pixel is retrieved using a multi-pass neighborhood propagation filter. Accelerated by the compute pipeline, we reach real-time performance for the rendering of depth-dependent wide lines. To conclude our work, we implement a virtual reality prototype combining fiblets and morphological marching. It makes the immersive visualization of huge tractograms with fast shading of thick fibers possible for the first time, thus paving the way for diverse perspectives.
7

He, Yiyang. "A Physically Based Pipeline for Real-Time Simulation and Rendering of Realistic Fire and Smoke." Thesis, Stockholms universitet, Numerisk analys och datalogi (NADA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-160401.

Abstract:
With the rapidly growing computational power of modern computers, physically based rendering has found its way into real-world applications. Real-time simulation and rendering of fire and smoke has become a major research interest in the modern video game industry, and will continue to be an important research direction in computer graphics. Visually recreating realistic dynamic fire and smoke is a complicated problem; furthermore, solving it requires knowledge from various areas, ranging from computer graphics and image processing to computational physics and chemistry. Even though most of these areas are well studied separately, new challenges emerge when they are combined. This thesis focuses on three aspects of the problem, dynamics, real-time performance and realism, to propose a solution in the form of a GPGPU pipeline, along with its implementation. Three main areas with application to the problem are discussed in detail: fluid simulation, volumetric radiance estimation and volumetric rendering. Emphasis is placed on the first two areas. The results are evaluated around the three aspects, with graphical demonstrations and performance measurements. Uniform grids are used with a Finite Difference (FD) discretization scheme to simplify the computation. FD schemes are easy to implement in parallel, especially with ComputeShader, which is well supported in the Unity engine. The whole implementation can easily be integrated into real-world applications in Unity or other game engines that support DirectX 11 or higher.
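The appeal of uniform-grid FD schemes for GPU compute can be seen in a toy diffusion step (a NumPy sketch under our own simplifying assumptions of a periodic grid; the thesis itself uses ComputeShader in Unity): every cell is updated independently from its neighbors, so the loop maps directly onto one GPU thread per cell.

```python
import numpy as np

def diffuse(density, nu, dt):
    # Explicit finite-difference diffusion on a uniform periodic grid.
    # Each cell depends only on its four neighbors, so all cells can
    # be updated in parallel -- one compute-shader thread per cell.
    lap = (np.roll(density, 1, 0) + np.roll(density, -1, 0) +
           np.roll(density, 1, 1) + np.roll(density, -1, 1) -
           4.0 * density)
    return density + dt * nu * lap

rng = np.random.default_rng(2)
d0 = rng.random((32, 32))
d1 = diffuse(d0, nu=0.1, dt=1.0)       # stable since nu * dt <= 0.25
assert np.isclose(d1.sum(), d0.sum())  # diffusion conserves total mass
assert d1.var() < d0.var()             # and smooths the field
```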
8

Angalev, Mikhail. "Energy saving at gas compressor stations through the use of parametric diagnostics." Thesis, KTH, Energiteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101061.

Abstract:
The steadily growing consumption of natural gas around the world requires the development of new transport equipment and the optimization of existing pipelines and gas pumping facilities. As a special case, the Russian gas pumping system has the longest large-diameter pipes, which carry great amounts of natural gas. Since reconstruction and modernization require large investments, a need for a more effective, low-cost tool appeared; as a result, diagnostics became the most widespread method for lifecycle assessment and lifecycle extension of gas pumping units and pipelines. One of the most effective methods for diagnosing gas pumping units is parametric diagnostics. It is based on the evaluation of measurements of several thermo-gas-dynamic parameters of gas pumping units, such as pressures, temperatures, and the rotational speeds of turbines and compressors. In my work I developed and examined a special case of parametric diagnostics: a methodology for evaluating the technical state and output parameters of the gas pumping unit "Ural-16". My work contains a detailed analysis of various defects, classified by the different GPU systems; the results of this analysis are later used in developing the methodology for calculating the output parameters of the gas pumping unit. A GPU is an extremely complex object for diagnostics. Around 200 combinations of gas turbine engines with centrifugal superchargers, different operational conditions, and other aspects require the development of a separate methodology for almost every gas pumping unit type. Developing each methodology is a complex task that requires gathering all available parametric and statistical data for the examined gas pumping unit; the parameters of the compressed gas are also measured. As a result, a set of equations is formed that finally allows the calculation of parameters such as efficiency, fuel gas consumption, and the technical state coefficient, which cannot be measured directly by the existing measuring equipment installed at the gas compressor station.
9

Sand, Victor. "Dynamic Visualization of Space Weather Simulation Data." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work described in this thesis is part of the Open Space project, a collaboration between Linköping University, NASA, and the American Museum of Natural History. The long-term goal of Open Space is a multi-purpose, open-source scientific visualization software. The thesis covers the research and implementation of a pipeline for preparing and rendering volumetric data. The developed pipeline consists of three stages: a data formatting stage that takes data from various sources and prepares it for the rest of the pipeline, a pre-processing stage that builds a tree structure from the raw data, and finally an interactive rendering stage that draws a volume using ray-casting. The pipeline is a fully working proof of concept for future development of Open Space and can be used as-is to render space weather data using a combination of suitable data structures and an efficient data transfer pipeline. Many concepts and ideas from this work can be utilized in the larger-scale software project.
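The ray-casting stage this abstract refers to composites samples along each view ray. A minimal front-to-back emission-absorption sketch in Python (the function name and the scalar-color simplification are ours; the thesis itself works with full RGBA volumes):

```python
import math

def composite_ray(samples, step):
    """Front-to-back emission-absorption compositing along one ray.

    samples: (color, density) pairs at successive sample points along the ray
    step:    distance between sample points
    Returns the accumulated (color, alpha) for the ray's pixel.
    """
    color, alpha = 0.0, 0.0
    for c, d in samples:
        a = 1.0 - math.exp(-d * step)     # segment opacity from density
        color += (1.0 - alpha) * a * c    # add light not yet occluded
        alpha += (1.0 - alpha) * a        # accumulate opacity
        if alpha > 0.99:                  # early ray termination
            break
    return color, alpha

# A nearly opaque first sample hides everything behind it.
front_color, front_alpha = composite_ray([(1.0, 1000.0), (0.5, 1000.0)], 1.0)
```

Front-to-back (rather than back-to-front) ordering enables the early-termination test above, a standard optimization in GPU volume ray-casters because saturated rays stop sampling the volume texture.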
10

Tjia, Andrew Hung Yao. "Adaptive pipelined work processing for GPS trajectories." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Adaptive pipelined work processing is a system paradigm that optimally processes trajectories created by GPS-enabled devices. Systems that process GPS trajectories are often constrained on the client side by the limitations of mobile devices, such as processing power, energy usage, and network connectivity. The server must deal with non-uniform processing workloads and flash crowds generated by surges in popularity. We demonstrate that adaptive processing is a solution to these problems by building a trajectory processing system that uses adaptivity to respond to changing workloads and network conditions, and is fault tolerant. This benefits application designers, who can design operations on data instead of performing manual system optimization and resource management. We evaluate our method by processing a dataset of snow sports trajectories and show that it is extensible to other operators and other kinds of data.

Books on the topic "GPU pipeline":

1

Board, Canada National Energy. Reasons for decision in the matter of TransCanada Keystone Pipeline GP Ltd: Application dated 23 November 2007 pursuant to sections 58 and 21 of the National Energy Board Act for the Keystone Cushing Expansion Project. Calgary, AB: The Board, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Board, Canada National Energy. Reasons for decision in the matter of TransCanada Keystone Pipeline GP Ltd: Application dated 23 November 2007 pursuant to sections 58 and 21 of the National Energy Board Act for the Keystone Cushing Expansion Project. Calgary, AB: The Board, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

State, United States Department of. Draft supplemental environmental impact statement for the Keystone XL Project: Applicant for Presidential permit: TransCanada Keystone Pipeline LP. Washington, DC: United States Dept. of State, Bureau of Oceans and International Environmental and Scientific Affairs, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wybrew-Bond, Ian. Life after the GFU: Norwegian gas under new rules. Cambridge, Mass: CERA, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Power, United States Congress House Committee on Commerce Subcommittee on Energy and. H.R. 3, the Northern Route Approval Act: Hearing before the Subcommittee on Energy and Power of the Committee on Energy and Commerce, House of Representatives, One Hundred Thirteenth Congress, first session, April 10, 2013. Washington: U.S. Government Printing Office, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jing, Liang. Ren min bi guo ji hua "da dong mai": Guo ji huo bi zhi fu ji chu she shi gou jian = Pipeline of RMB internationalization : establishment of payment infrastructures for global currency. 8th ed. Beijing Shi: Jing ji guan li chu ban she, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "GPU pipeline":

1

Cozzi, Patrick, and Daniel Bagnell. "A WebGL Globe Rendering Pipeline." In GPU Pro 360, 213–22. Boca Raton, FL: A K Peters/CRC Press, 2018. http://dx.doi.org/10.1201/b22483-13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Riccio, Christophe, and Sean Lilley. "Introducing the Programmable Vertex Pulling Rendering Pipeline." In GPU Pro 360, 195–211. Boca Raton, FL: A K Peters/CRC Press, 2018. http://dx.doi.org/10.1201/b22483-12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Haitang, Junchao Ma, Zixia Qiu, Junmei Yao, Mustafa A. Al Sibahee, Zaid Ameen Abduljabbar, and Vincent Omollo Nyangaresi. "Multi-GPU Parallel Pipeline Rendering with Splitting Frame." In Advances in Computer Graphics, 223–35. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-50072-5_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sabino, Thales Luis, Paulo Andrade, Esteban Walter Gonzales Clua, Anselmo Montenegro, and Paulo Pagliosa. "A Hybrid GPU Rasterized and Ray Traced Rendering Pipeline for Real Time Rendering of Per Pixel Effects." In Lecture Notes in Computer Science, 292–305. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33542-6_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Harada, Takahiro. "Two-Level Constraint Solver and Pipelined Local Batching for Rigid Body Simulation on GPUs." In GPU Pro 360, 223–40. First edition. Boca Raton, FL: A K Peters/CRC Press, 2018. http://dx.doi.org/10.1201/9781351052108-13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chauhan, Munesh Singh, Ashish Negi, and Prashant Singh Rana. "Fractal Image Compression Using Dynamically Pipelined GPU Clusters." In Advances in Intelligent Systems and Computing, 575–81. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1602-5_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Yi, Zhi Qiao, Spencer Davis, Hai Jiang, and Kuan-Ching Li. "Pipelined Multi-GPU MapReduce for Big-Data Processing." In Computer and Information Science, 231–46. Heidelberg: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00804-2_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Thambawita, Vajira, Steven A. Hicks, Ewan Jaouen, Pål Halvorsen, and Michael A. Riegler. "Chapter 4 Smittestopp analytics: Analysis of position data." In Simula SpringerBriefs on Computing, 63–79. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05466-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Contact tracing applications generally rely on Bluetooth data. This type of data works well to determine whether a contact occurred (smartphones were close to each other) but cannot offer the contextual information GPS data can offer. Did the contact happen on a bus? In a building? And of which type? Are some places recurrent contact locations? By answering such questions, GPS data can help develop more accurate and better-informed contact tracing applications. This chapter describes the ideas and approaches implemented for GPS data within the Smittestopp contact tracing application. We will present the pipeline used and the contribution of GPS data for contextual information, using inferred transport modes and surrounding POIs, showcasing the opportunities in the use of GPS information. Finally, we discuss ethical and privacy considerations, as well as some lessons learned.
9

Roch, Peter, Bijan Shahbaz Nejad, Marcus Handte, and Pedro José Marrón. "Systematic Optimization of Image Processing Pipelines Using GPUs." In Advances in Visual Computing, 633–46. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64559-5_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Deng, Junyong, Libo Chang, Guangxin Huang, Lingzhi Xiao, Tao Li, Lin Jiang, Jungang Han, and Huimin Du. "The Design and Prototype Implementation of a Pipelined Heterogeneous Multi-core GPU." In Communications in Computer and Information Science, 66–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41591-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "GPU pipeline":

1

Pulikesi Mannan, Sai Krishanth, Ewan Douglas, Justin Hom, Ramya M. Anche, John Debes, Isabel Rebollido, and Bin B. Ren. "NMF-based GPU accelerated coronagraphy pipeline." In Techniques and Instrumentation for Detection of Exoplanets XI, edited by Garreth J. Ruane. SPIE, 2023. http://dx.doi.org/10.1117/12.2677739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hestness, Joel, Stephen W. Keckler, and David A. Wood. "GPU Computing Pipeline Inefficiencies and Optimization Opportunities in Heterogeneous CPU-GPU Processors." In 2015 IEEE International Symposium on Workload Characterization (IISWC). IEEE, 2015. http://dx.doi.org/10.1109/iiswc.2015.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"GPU Ray-traced Collision Detection - Fine Pipeline Reorganization." In International Conference on Computer Graphics Theory and Applications. SCITEPRESS - Science and and Technology Publications, 2015. http://dx.doi.org/10.5220/0005299603170324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Miyazaki, Makoto, and Susumu Matsumae. "A Pipeline Implementation for Dynamic Programming on GPU." In 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW). IEEE, 2018. http://dx.doi.org/10.1109/candarw.2018.00063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Han, Haowei, Meng Sun, Siyu Zhang, Dongying Liu, and Tiantian Liu. "GPU Cloth Simulation Pipeline in Lightchaser Animation Studio." In SA '21: SIGGRAPH Asia 2021. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3478512.3488616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Soudarev, A., E. Vinogradov, Yu Zakharov, and A. Leznov. "Environmental Update of Frame-3 Gas-Pumping Units." In 2000 3rd International Pipeline Conference. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/ipc2000-268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Due to its robust design, the General Electric Frame 3 MS3002 gas turbine is highly reliable and has reasonably good maintenance qualities, which explains why it is so widely used all over the world. At present, there are nearly 1000 units in this series, the bulk thereof are operated as gas-pumping units (GPU) to drive natural gas compressors.
7

Tatarchuk, Natalya, Jeremy Shopf, and Christopher DeCoro. "Real-Time Isosurface Extraction Using the GPU Programmable Geometry Pipeline." In ACM SIGGRAPH 2007 courses. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1281500.1361219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Xiao, Yunfan, Min Huang, Qinghai Miao, Jun Xiao, and Ying Wang. "Architecting the Discontinuous Deformation Analysis Method Pipeline on the GPU." In 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2017. http://dx.doi.org/10.1109/ipdpsw.2017.93.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dai, Hongwen, Zhen Lin, Chao Li, Chen Zhao, Fei Wang, Nanning Zheng, and Huiyang Zhou. "Accelerate GPU Concurrent Kernel Execution by Mitigating Memory Pipeline Stalls." In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2018. http://dx.doi.org/10.1109/hpca.2018.00027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lemeire, Jan, Jan G. Cornelis, and Laurent Segers. "Microbenchmarks for GPU Characteristics: The Occupancy Roofline and the Pipeline Model." In 2016 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP). IEEE, 2016. http://dx.doi.org/10.1109/pdp.2016.120.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "GPU pipeline":

1

Wilcox. PR-015-09209-R01 Test Facility for Pump Performance Characterization in Viscous Fluids - Phase I. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 2010. http://dx.doi.org/10.55274/r0010713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the liquids pipeline industry, large horsepower and flow pumps are used to transport liquid along the pipeline. When these pumps are purchased, they are performance tested with water. Performance tests are also sometimes conducted after a pump has been in operation for some time. The performance of a pump is different with water than with a viscous fluid (crude oil). Therefore, the performance results with water are corrected for viscosity. The Hydraulic Institute (HI) developed viscosity correction factors which are used to correct the pipeline pump performance results. These correction factors are based on test results with pump head and flows up to 430 ft and 1140 gpm, respectively. Pipeline size pumps easily have flows in the range of 10,000 to 50,000 gpm. The correction factors used for pipeline size pumps were derived from the HI lower flow and head correction factors. Therefore, the correction factors have an unknown error for larger flow and head pumps.
2

Stewart. L52283 Ground Positioning Satellite in Conjunctions with Current One-Call System - Virginia. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), November 2007. http://dx.doi.org/10.55274/r0010184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Excavation damage continues to be a leading cause of damage to underground facilities. It was cited as the cause in over 15% of all pipeline incidents in 2006. Effective damage prevention programs are necessary to prevent damages to underground facilities and to ensure public health and safety, environmental protection and continuity of vital services. All stakeholders, including the public, share responsibility for and the benefits of damage prevention. Although much has been done to address excavation damage it continues to be a problem. The Virginia Pilot Project for Incorporating GPS Technology to Enhance One-Call Damage Prevention was undertaken as a "proof-of-concept" project to research and implement new and existing technology to significantly enhance the development and communication of accurate information among stakeholders regarding the exact location of planned excavations. Resulting improvements in the one-call damage prevention process would in turn have a positive impact on damage prevention and the safety and reliability of operations of underground facilities.
3

George, Darin. L52315 Testing of Environmentally-Friendly Gas Sampling Methods. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 2009. http://dx.doi.org/10.55274/r0010176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent environmental concerns have led to calls for reduced hydrocarbon emissions to the atmosphere from a variety of sources. One source of emissions being examined in this regard is natural gas spot sampling methods that vent pipeline gases to the atmosphere. Some sampling techniques and equipment have been developed that do not emit greenhouse gases, but the need exists to test these methods for their ability to collect accurate, representative samples. Another related concern is the accuracy of samples drawn from streams near their hydrocarbon dew point (HDP). While the spot sampling methods recommended by current industry standards perform well on streams far above their HDP, little data are available on their performance near or at the HDP, where poor sampling methods can cause heavy hydrocarbons to condense from the sample and distort the analysis. This project evaluated the ability of four natural gas spot sampling methods, including two zero emissions sampling methods, to capture accurate, representative samples of gas streams at or near their hydrocarbon dew point (HDP). Two of the sampling methods tested were variations on the GPA fill-and-empty method, with additional steps intended to heat the sampling equipment above the HDP or clear condensed hydrocarbon liquids from the sample line. The other two sampling methods, which use the A+ Q2 sample cylinder and a constant-pressure floating-piston sample cylinder, were developed to prevent condensation of heavy hydrocarbons during the sampling process.
4

Canto, Patricia, ed. Heterogeneous Social Capitals: A New Window of Opportunity for Local Economies. Universidad de Deusto, 2010. http://dx.doi.org/10.18543/gwvw3770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper we analyze the relationship between hot topics in economic development such as global knowledge pipelines (GKP), tacit knowledge (TK) and social capital (SC). In particular, echoing the work of Gertler (2003) and Bathelt et al. (2004), we stress that GKPs are important not only as conveyors of codified knowledge, but also of TK. In this paper we make two additional operations; the first is extending the concept of TK to systematically include the concept of SC. Traditionally, TK tends to be conceived as individual experiential knowledge based on practice, in a way part of the human capital embodied in the highly skilled individual expert. We would rather also include collective pools of social knowledge, otherwise called SC, since TK can be created and later transferred by wider communities. In this operation we benefit from Blackler's (2002) typology of knowledge, which appropriately includes aspects of localized SC in the form of 'encultured' and 'embedded' knowledge. In the second operation we extend Williams' (2007) argument on the richness of migrants' codified and tacit knowledge; in fact, we assert that TK flows do not rely only upon highly knowledgeable economic agents such as scientists, engineers and top managers, but on a broader spectrum of individual and collective agents that are and/or can be part of competitive GVC/GPN/GKPs. This discussion has special importance for local production systems (LPS) such as clusters and districts, where TK flows and SC are transforming dramatically and thus need more thorough theoretical frameworks to represent these changing socioeconomic scenarios, as well as their real constraints and opportunities.
5

George. PR-015-10600-R01 Proposed Sampling Methods for Supercritical Natural Gas Streams. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), July 2010. http://dx.doi.org/10.55274/r0010981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deepwater natural gas production is a non-traditional operation that is very different than conventional shelf or onshore production, due to the extremely high pressures (2,000 psia, 13.8 MPa abs) and rich gases (1,300 Btu/scf, 48.4 MJ/Nm3) involved. Concerns have been raised about methods used to sample deepwater natural gas supplies in this supercritical state. Sampling methods accepted for natural gas at pipeline conditions have been used to sample gas from offshore platforms and supercritical onshore storage facilities. However, the sample analyses have later been found to overestimate the energy content of the gas by as much as 300 Btu/scf (11.2 MJ/Nm3). Analyses of these samples have also been found to incorrectly estimate other properties of the gas, such as sound speed and density. Due to the potential financial impact of such discrepancies, the need exists to understand their causes, and to identify alternative sampling procedures or methods that can minimize them. A literature search was performed to identify sampling methods with the potential to accurately sample natural gas streams in the supercritical region. The search included methods listed in existing natural gas sampling standards, such as API MPMS Chapter 14.1 and GPA 2166-05, variations and suggested improvements on these standard methods, and sampling methods applied in other sectors of the energy industry. No sampling methods were identified that are designed specifically for sampling supercritical natural gas. However, guidelines were found in various references that are useful in tailoring existing sampling methods or designing new sampling methods for supercritical gas service. These guidelines include means to avoid phase changes in the samples, methods of regulating pressure while maintaining sample temperatures, avoiding issues with adsorption and desorption on equipment, and recommendations for designing a sampling method for high-pressure service.

To the bibliography