Theses on the topic "GPU pipeline"

Follow this link to see other types of publications on the topic: GPU pipeline.

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

See the top 23 dissertations (graduate or doctoral theses) for research on the topic "GPU pipeline".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the abstract of the work online, if it is available in the metadata.

Browse theses from many scientific fields and compile an accurate bibliography.

1

Bexelius, Tobias. "HaGPipe : Programming the graphics pipeline in Haskell". Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-6234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

In this paper I present the domain specific language HaGPipe for graphics programming in Haskell. HaGPipe has a clean, purely functional and strongly typed interface and targets the whole graphics pipeline, including the programmable shaders of the GPU. It can be extended for use with various backends, and this paper provides two different ones. The first one generates vertex and fragment shaders in Cg for the GPU, and the second one generates vertex shader code for the SPUs on PlayStation 3. I will demonstrate HaGPipe's capabilities for producing optimized code, including an extensible rewrite rule framework, automatic packing of vertex data, common subexpression elimination, and both automatic basic-block-level vectorization and loop vectorization through the use of structures of arrays.
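
The common subexpression elimination the abstract mentions can be sketched in a few lines (Python rather than Haskell here, purely for illustration; none of this is HaGPipe's actual code): structurally identical subtrees of an expression tree are interned so that each shared expression is emitted only once.

```python
# Minimal common-subexpression-elimination sketch: expression trees are
# interned so that structurally identical subtrees become one shared node,
# as a shader-code optimizer might do before emitting code.
def cse(expr, pool=None):
    """expr is a nested tuple like ('add', ('mul', 'x', 'y'), ('mul', 'x', 'y')).
    Returns the same tree with identical subtrees shared (object-identical)."""
    if pool is None:
        pool = {}
    if not isinstance(expr, tuple):
        return expr
    node = tuple(cse(child, pool) for child in expr)
    return pool.setdefault(node, node)

shared = cse(('add', ('mul', 'x', 'y'), ('mul', 'x', 'y')))
# Both operands are now the very same object, so code for x*y is emitted once.
assert shared[1] is shared[2]
```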

2

PESSOA, Saulo Andrade. "Um pipeline para renderização fotorrealística em aplicações de realidade aumentada". Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/2337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The ability to interactively blend the real world with the virtual one has opened up a range of new possibilities in the field of multimedia systems. The research field that addresses this problem is called Augmented Reality. In Augmented Reality, virtual elements may appear clearly distinct from the real objects or be photorealistically inserted into the real world. Within this second class of applications one can cite interior design tools, augmented video games, and applications for the visualization of historical sites. The surveyed literature shows a gap regarding tools that support the creation of this kind of application. To address this, this dissertation proposes a pipeline for photorealistic rendering in Augmented Reality applications that takes into account aspects such as lighting, the reflectance properties of materials, shadowing, the compositing of the real world with the virtual world, and camera effects. This pipeline was implemented as an API, which enabled two case studies: a material-editing tool and an interior design tool. To achieve interactive rendering rates, the pipeline's bottlenecks were implemented on the GPU. The results show that the proposed pipeline offers considerable gains in realism in the visualization of virtual objects.

3

Cui, Xuewen. "Directive-Based Data Partitioning and Pipelining and Auto-Tuning for High-Performance GPU Computing". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/101497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The computer science community needs simpler mechanisms to achieve the performance potential of accelerators, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi), due to their increasing use in state-of-the-art supercomputers. Over the past 10 years, we have seen a significant improvement in both computing power and memory connection bandwidth for accelerators. However, we also observe that the computation power has grown significantly faster than the interconnection bandwidth between the central processing unit (CPU) and the accelerator. Given that accelerators generally have their own discrete memory space, data needs to be copied from the CPU host memory to the accelerator (device) memory before computation starts on the accelerator. Moreover, programming models like CUDA, OpenMP, OpenACC, and OpenCL can efficiently offload compute-intensive workloads to these accelerators. However, achieving the overlap of data transfers with computation in a kernel with these models is neither simple nor straightforward: code typically copies data to or from the device without overlap, or requires explicit user design and refactoring. Achieving performance can require extensive refactoring and hand-tuning to apply data transfer optimizations, and users must manually partition their dataset whenever its size is larger than device memory, which can be highly difficult when the device memory size is not exposed to the user. As systems become more and more heterogeneous, CPUs are responsible for handling many tasks related to other accelerators: computation and data movement tasks, task dependency checking, and task callbacks. Leaving all logic control to the CPU not only costs extra communication delay over the PCI-e bus but also consumes CPU resources, which may affect the performance of other CPU tasks.
This thesis work aims to provide efficient directive-based data pipelining approaches for GPUs that tackle these issues and improve performance, programmability, and memory management.
Doctor of Philosophy
Over the past decade, parallel accelerators have become increasingly prominent in this emerging era of "big data, big compute, and artificial intelligence." In more recent supercomputers and datacenter clusters, we find multi-core central processing units (CPUs), many-core graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi) being used to accelerate many kinds of computation tasks. While many new programming models have been proposed to support these accelerators, scientists and developers without domain knowledge often find existing programming models not efficient enough to port their code to accelerators. Due to the limited accelerator on-chip memory size, the data array size is often too large to fit in the on-chip memory, especially when dealing with deep learning tasks. The data need to be partitioned and managed properly, which requires more hand-tuning effort. Moreover, performance tuning is difficult, and a lack of domain knowledge makes it hard for developers to achieve high performance for specific applications. To handle these problems, this dissertation proposes a general approach to provide better programmability, performance, and data management for accelerators. Accelerator users often prefer to keep their existing verified C, C++, or Fortran code rather than grapple with unfamiliar new code. Since 2013, OpenMP has provided a straightforward way to adapt existing programs to accelerated systems. We propose multiple associated clauses to help developers easily partition and pipeline the accelerated code. Specifically, the proposed extension can efficiently overlap kernel computation and data transfer between host and device. The extension supports memory over-subscription, meaning that the memory size required by the tasks can be larger than the GPU memory size. The internal scheduler guarantees that data is swapped out correctly and efficiently.
Machine learning methods are also leveraged to help with auto-tuning accelerator performance.
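
The partition-and-pipeline idea described above can be sketched as follows (an illustrative Python simulation, not the proposed OpenMP extension: the bounded queue stands in for device buffers, and chunking stands in for directive-driven partitioning of a dataset larger than device memory):

```python
import threading
import queue

# Chunks are streamed in by a transfer thread while the main thread computes
# on earlier chunks, so "copy" and "compute" overlap; the queue's maxsize=2
# models double buffering on the device.
def pipelined_map(data, kernel, chunk_size):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    staged = queue.Queue(maxsize=2)       # at most two chunks in flight

    def transfer():
        for chunk in chunks:
            staged.put(list(chunk))       # simulated host-to-device copy
        staged.put(None)                  # end-of-stream marker

    threading.Thread(target=transfer, daemon=True).start()
    out = []
    while (chunk := staged.get()) is not None:
        out.extend(kernel(x) for x in chunk)   # simulated device compute
    return out

result = pipelined_map(list(range(10)), lambda x: x * x, chunk_size=4)
assert result == [x * x for x in range(10)]
```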
4

Doran, Andra. "Occlusion culling et pipeline hybride CPU/GPU pour le rendu temps réel de scènes complexes pour la réalité virtuelle mobile". Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2131/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nowadays, 3D real-time rendering has become an essential tool for modeling work and maintenance of industrial equipment, for the development of serious or entertainment games, and in general for any visualization application in industry, medical care, architecture, and beyond. Currently, this task is generally assigned to graphics hardware, due to its specific design and its dedicated rasterization and texturing units. However, in the context of industrial applications, a wide range of computers is used, heterogeneous in terms of computing power. These machines are not always equipped with high-end hardware, which may limit their use for this type of application. Current research is strongly oriented towards solutions based on modern, high-performance graphics hardware. On the contrary, we do not assume the existence of such hardware on all architectures. We therefore propose to adapt our pipeline to the computing architecture in order to obtain efficient rendering. Our pipeline adapts to the computer's capabilities, taking into account each computing unit, CPU and GPU. The goal is to balance the load on the two computing units as well as possible, thus ensuring real-time rendering of complex scenes, even on low-end computers. This pipeline can easily be integrated into any conventional rendering engine and does not require any precomputation step.
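
The CPU/GPU load balancing described above can be illustrated with a toy proportional split (the throughput numbers and the policy are illustrative assumptions, not the thesis's actual scheduler): work is divided so that both units are expected to finish at the same time.

```python
# Hybrid CPU/GPU balancing sketch: given measured throughputs (items/ms)
# for each unit, split a frame's workload so both finish together.
def balance(total_items, cpu_rate, gpu_rate):
    gpu_share = round(total_items * gpu_rate / (cpu_rate + gpu_rate))
    return total_items - gpu_share, gpu_share  # (cpu_items, gpu_items)

cpu_items, gpu_items = balance(9000, cpu_rate=10.0, gpu_rate=35.0)
assert cpu_items + gpu_items == 9000
# Expected completion times are now roughly equal:
assert abs(cpu_items / 10.0 - gpu_items / 35.0) < 1.0
```

In a running system the rates would be re-measured every few frames, so the split tracks the actual hardware instead of a static assumption.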
5

Crassin, Cyril. "GigaVoxels : un pipeline de rendu basé Voxel pour l'exploration efficace de scènes larges et détaillées". PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00650161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we present a new, efficient approach for rendering large scenes and detailed objects in real time. Our approach is based on a new pre-filtered, volumetric representation of geometry and on voxel-based cone tracing, which enables accurate, high-performance rendering with high-quality filtering of highly detailed geometry. In order to make this voxel representation a standard real-time rendering primitive, we propose a new GPU-based approach designed entirely to scale, thereby supporting the rendering of very large data volumes. Our system achieves real-time rendering performance for several billion voxels. Our data structure exploits the fact that in CG scenes, detail is often concentrated at the interface between free space and clusters of density, and shows that volumetric models could become an attractive alternative as a rendering primitive for real-time applications. In this spirit, we allow a trade-off between quality and performance and exploit temporal coherence. Our solution is based on a hierarchical data representation adapted to the current view and to occlusion information, coupled with an efficient ray-casting rendering algorithm. We introduce a GPU cache mechanism that provides very efficient paging of data into video memory, implemented as a highly efficient data-parallel process. This cache is coupled with a data-production pipeline able to load data dynamically from main memory or to produce voxels directly on the GPU. A key element of our method is that data production and caching in video memory are guided directly by the data requests and usage information emitted during rendering. We demonstrate our approach with several applications. We also show how our pre-filtered geometric model and our approximate cone tracing can be used to compute various blur effects as well as real-time indirect lighting very efficiently.
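
The request-guided caching idea can be sketched as a fixed-capacity LRU cache of voxel "bricks" produced on demand (an illustrative Python sketch, not the GigaVoxels implementation; `produce_brick` stands in for loading from CPU memory or generating voxels on the GPU):

```python
from collections import OrderedDict

# The renderer requests bricks; recently used ones stay in (simulated)
# video memory, missing ones are produced on demand, and the least
# recently used brick is evicted when the cache is full.
class BrickCache:
    def __init__(self, capacity, produce_brick):
        self.capacity = capacity
        self.produce = produce_brick
        self.bricks = OrderedDict()
        self.misses = 0

    def request(self, brick_id):
        if brick_id in self.bricks:
            self.bricks.move_to_end(brick_id)     # mark as recently used
        else:
            self.misses += 1
            if len(self.bricks) >= self.capacity:
                self.bricks.popitem(last=False)   # evict least recently used
            self.bricks[brick_id] = self.produce(brick_id)
        return self.bricks[brick_id]

cache = BrickCache(2, produce_brick=lambda bid: f"voxels:{bid}")
for bid in ["a", "b", "a", "c", "a"]:   # "a" stays hot, "b" gets evicted
    cache.request(bid)
assert cache.misses == 3 and "b" not in cache.bricks
```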
6

Schertzer, Jérémie. "Exploiting modern GPUs architecture for real-time rendering of massive line sets". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we consider massive line sets generated from brain tractograms. They describe neural connections that are represented with millions of poly-line fibers, summing up to billions of segments. Thanks to the two-staged mesh shader pipeline, we build a tractogram renderer surpassing state-of-the-art performance by two orders of magnitude. Our performance comes from fiblets: a compressed representation of segment blocks. By combining temporal coherence and morphological dilation on the z-buffer, we define a fast occlusion culling test for fiblets. Thanks to our heavily optimized parallel decompression algorithm, surviving fiblets are swiftly synthesized into poly-lines. We also showcase how our fiblet pipeline speeds up advanced tractogram interaction features. For the general case of line rendering, we propose morphological marching: a screen-space technique rendering custom-width tubes from the thin rasterized lines of the G-buffer. By approximating a tube as the union of spheres densely distributed along its axis, each sphere shading each pixel is retrieved by means of a multi-pass neighborhood propagation filter. Accelerated by the compute pipeline, we reach real-time performance for the rendering of depth-dependent wide lines. To conclude our work, we implement a virtual reality prototype combining fiblets and morphological marching. It makes possible for the first time the immersive visualization of huge tractograms with fast shading of thick fibers, thus paving the way for diverse perspectives.
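
The fiblet idea, compressed blocks of segments decompressed back into poly-lines, can be sketched with quantized vertex deltas (the quantization step and the packing are illustrative assumptions, not the thesis's actual bit-level format):

```python
# A fiber's vertices are stored as a start point plus small quantized
# deltas, then decompressed back to a poly-line; reconstruction error is
# bounded by the quantization step.
STEP = 0.01  # quantization step in scene units (illustrative)

def compress(points):
    first = points[0]
    deltas = [
        tuple(round((b - a) / STEP) for a, b in zip(p, q))
        for p, q in zip(points, points[1:])
    ]
    return first, deltas

def decompress(first, deltas):
    points, cur = [first], list(first)
    for d in deltas:
        cur = [c + di * STEP for c, di in zip(cur, d)]
        points.append(tuple(cur))
    return points

fiber = [(0.0, 0.0, 0.0), (0.02, 0.01, 0.0), (0.05, 0.01, -0.02)]
first, deltas = compress(fiber)
restored = decompress(first, deltas)
assert all(abs(a - b) <= STEP
           for p, q in zip(fiber, restored) for a, b in zip(p, q))
```

Because each delta needs only a few bits, a block of segments packs far more densely than raw floating-point vertices, which is what makes streaming billions of segments feasible.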
7

He, Yiyang. "A Physically Based Pipeline for Real-Time Simulation and Rendering of Realistic Fire and Smoke". Thesis, Stockholms universitet, Numerisk analys och datalogi (NADA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-160401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the rapidly growing computational power of modern computers, physically based rendering has found its way into real-world applications. Real-time simulation and rendering of fire and smoke have become a major research interest in the modern video game industry, and will continue to be an important research direction in computer graphics. To visually recreate realistic dynamic fire and smoke is a complicated problem. Furthermore, solving it requires knowledge from various areas, ranging from computer graphics and image processing to computational physics and chemistry. Even though most of these areas are well studied separately, new challenges emerge when they are combined. This thesis focuses on three aspects of the problem, dynamics, real-time performance, and realism, to propose a solution in the form of a GPGPU pipeline, along with its implementation. Three main areas with application to the problem are discussed in detail: fluid simulation, volumetric radiance estimation, and volumetric rendering. The emphasis is on the first two areas. The results are evaluated against the three aspects, with graphical demonstrations and performance measurements. Uniform grids are used with a finite-difference (FD) discretization scheme to simplify the computation. FD schemes are easy to implement in parallel, especially with compute shaders, which are well supported in the Unity engine. The whole implementation can easily be integrated into any real-world application in Unity or other game engines that support DirectX 11 or higher.
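
A single finite-difference step of the kind such a pipeline parallelizes can be sketched as follows (illustrative Python on a tiny grid; in the thesis this kind of update runs as a compute shader, and the grid size and diffusion coefficient here are made up):

```python
# Explicit diffusion of a scalar field (e.g. smoke density) on a uniform
# 2D grid: each interior cell moves toward the average of its neighbours
# via the discrete Laplacian.
def diffuse(grid, k=0.1):
    n, m = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = (grid[i - 1][j] + grid[i + 1][j] + grid[i][j - 1]
                   + grid[i][j + 1] - 4 * grid[i][j])
            out[i][j] = grid[i][j] + k * lap
    return out

field = [[0.0] * 5 for _ in range(5)]
field[2][2] = 1.0                 # a single puff of density
field = diffuse(field)
assert field[2][2] < 1.0          # the peak spreads out...
assert field[1][2] == field[2][1] == 0.1   # ...symmetrically to neighbours
```

Each output cell depends only on the previous time step's values, which is exactly why the update maps so cleanly onto one GPU thread per cell.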
8

Angalev, Mikhail. "Energy saving at gas compressor stations through the use of parametric diagnostics". Thesis, KTH, Energiteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The continuously growing consumption of natural gas around the world requires the development of new transport equipment and the optimization of existing pipelines and gas pumping facilities. As a special case, the Russian gas pumping system has the longest large-diameter pipes, which carry great amounts of natural gas. Since reconstruction and modernization require large investments, a need for a more effective, low-cost tool has appeared. As a result, diagnostics has become the most widespread method for lifecycle assessment and lifecycle extension of gas pumping units and pipelines. One of the most effective methods for the diagnostics of gas pumping units is parametric diagnostics. It is based on the evaluation of measurements of several thermo-gas-dynamic parameters of gas pumping units, such as pressures, temperatures, and the rotational speeds of turbines and compressors. In this work I developed and examined a special case of parametric diagnostics: a methodology for evaluating the technical state and output parameters of the gas pumping unit "Ural-16". The work contains a detailed analysis of various defects, classified by the different systems of a gas pumping unit (GPU). The results of this analysis are later used in the development of the methodology for calculating the output parameters of a gas pumping unit. A GPU is an extremely complex object for diagnostics: around 200 combinations of gas turbine engines with centrifugal superchargers, different operational conditions, and other aspects require the development of a separate methodology for almost every gas pumping unit type. Developing each methodology is complex work that requires gathering all available parametric and statistical data for the examined gas pumping unit; the parameters of the compressed gas are measured as well. Thus, a number of equations are formed that finally allow the calculation of parameters such as efficiency, fuel gas consumption, and the technical state coefficient, which could not be measured directly by the existing measuring equipment installed at the gas compressor station.
9

Sand, Victor. "Dynamic Visualization of Space Weather Simulation Data". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work described in this thesis is part of the Open Space project, a collaboration between Linköping University, NASA and the American Museum of Natural History. The long-term goal of Open Space is a multi-purpose, open-source scientific visualization software. The thesis covers the research and implementation of a pipeline for preparing and rendering volumetric data. The developed pipeline consists of three stages: a data formatting stage which takes data from various sources and prepares it for the rest of the pipeline, a pre-processing stage which builds a tree structure from the raw data, and finally an interactive rendering stage which draws a volume using ray-casting. The pipeline is a fully working proof-of-concept for future development of Open Space, and can be used as-is to render space weather data using a combination of suitable data structures and an efficient data transfer pipeline. Many concepts and ideas from this work can be utilized in the larger-scale software project.
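
The ray-casting stage can be illustrated with the standard front-to-back compositing of samples along one ray (a generic sketch, not the Open Space implementation; the sample values and cutoff are made up):

```python
# Samples along one ray are (color, alpha) pairs composited front to back,
# with early termination once the ray is nearly opaque.
def composite(samples, opacity_cutoff=0.99):
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c    # front-to-back "over" operator
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:       # early ray termination
            break
    return color, alpha

color, alpha = composite([(1.0, 0.5), (0.5, 0.5), (0.2, 1.0)])
assert 0.0 <= alpha <= 1.0               # opacity saturates but never exceeds 1
```

Early termination is what makes dense volumes cheap: once a ray is opaque, the remaining samples along it are never fetched.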
10

Tjia, Andrew Hung Yao. "Adaptive pipelined work processing for GPS trajectories". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Adaptive pipelined work processing is a system paradigm that optimally processes trajectories created by GPS-enabled devices. Systems that execute GPS trajectory processing are often constrained at the client side by limitations of mobile devices such as processing power, energy usage, and network. The server must deal with non-uniform processing workloads and flash crowds generated by surges in popularity. We demonstrate that adaptive processing is a solution to these problems by building a trajectory processing system that uses adaptivity to respond to changing workloads and network conditions, and is fault tolerant. This benefits application designers, who can focus on designing operations on data instead of manual system optimization and resource management. We evaluate our method by processing a dataset of snow sports trajectories and show that our method is extensible to other operators and other kinds of data.
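
The adaptivity described above can be sketched as a worker that tunes its batch size to the current backlog (all thresholds, and the fake GPS data, are illustrative assumptions, not the thesis's policy):

```python
import queue

# A worker drains a queue of GPS fixes, growing its batch size when a
# backlog builds (e.g. a flash crowd) and shrinking it when caught up.
def adaptive_drain(q, process_batch, min_batch=1, max_batch=64):
    batch_size, sizes = min_batch, []
    while not q.empty():
        batch = []
        while len(batch) < batch_size and not q.empty():
            batch.append(q.get())
        process_batch(batch)
        sizes.append(len(batch))
        # adapt: bigger batches under backlog, smaller when caught up
        if q.qsize() > 2 * batch_size:
            batch_size = min(batch_size * 2, max_batch)
        else:
            batch_size = max(batch_size // 2, min_batch)
    return sizes

q = queue.Queue()
for i in range(100):
    q.put((47.0 + i * 1e-4, 8.0))     # fake GPS fixes
seen = []
sizes = adaptive_drain(q, seen.extend)
assert len(seen) == 100               # nothing lost
assert max(sizes) > min(sizes)        # batch size actually adapted
```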
11

Es, S. Alphan. "Accelerated Ray Tracing Using Programmable Graphics Pipelines". PhD thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609307/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Graphics hardware has evolved from simple feed-forward triangle rasterization devices into flexible, programmable, and powerful parallel processors. This evolution allows researchers to use graphics processing units (GPUs) both for general-purpose computation and for advanced graphics rendering. Sophisticated GPUs hold great opportunities for the acceleration of computationally expensive photorealistic rendering methods. Rendering photorealistic images in real time is a challenge. In this work, we investigate efficient ways to utilize GPUs for real-time photorealistic rendering. Specifically, we studied uniform-grid-based ray tracing acceleration methods and GPU-friendly traversal algorithms. We show that our method is faster than, or competitive with, other GPU-based ray tracing acceleration techniques. The proposed approach is also applicable to the fast rendering of volumetric data. Additionally, we devised GPU-based solutions for real-time stereoscopic image generation which can be used in conjunction with GPU-based ray tracers.
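
The uniform-grid acceleration studied in this thesis rests on a grid-walking (DDA) traversal that enumerates the cells a ray passes through; a 2D sketch (illustrative Python, not the thesis's GPU code, and assuming nonzero direction components) looks like this:

```python
# 2D DDA walk over a uniform grid: lists the cells a ray visits in order,
# so only the triangles stored in those cells need intersection tests.
# (The 3D version adds one more axis with the same update rule.)
def grid_traverse(origin, direction, max_cells, grid_size):
    x, y = int(origin[0]), int(origin[1])
    step = [1 if d > 0 else -1 for d in direction]
    # ray parameter t at which the ray crosses the next cell border per axis
    t_max = [((x + (step[0] > 0)) - origin[0]) / direction[0],
             ((y + (step[1] > 0)) - origin[1]) / direction[1]]
    t_delta = [abs(1.0 / d) for d in direction]
    visited = []
    while 0 <= x < grid_size and 0 <= y < grid_size and len(visited) < max_cells:
        visited.append((x, y))
        axis = 0 if t_max[0] < t_max[1] else 1   # cross the nearer border
        t_max[axis] += t_delta[axis]
        if axis == 0:
            x += step[0]
        else:
            y += step[1]
    return visited

# A ray from (0.5, 0.5) going mostly along +x crosses these cells in order:
path = grid_traverse((0.5, 0.5), (1.0, 0.25), max_cells=8, grid_size=4)
assert path[0] == (0, 0) and (3, 1) in path
```

The per-step work is a comparison and an addition per axis, with no branching on scene content, which is what makes this traversal GPU-friendly.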
12

Zhang, Jianwei, Dave Kudrna, Ting Mu, Weiming Li, Dario Copetti, Yeisoo Yu, Jose Luis Goicoechea, Yang Lei, and Rod A. Wing. "Genome puzzle master (GPM): an integrated pipeline for building and editing pseudomolecules from fragmented sequences". OXFORD UNIV PRESS, 2016. http://hdl.handle.net/10150/621468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Motivation: Next generation sequencing technologies have revolutionized our ability to rapidly and affordably generate vast quantities of sequence data. Once generated, raw sequences are assembled into contigs or scaffolds. However, these assemblies are mostly fragmented and inaccurate at the whole genome scale, largely due to the inability to integrate additional informative datasets (e.g. physical, optical and genetic maps). To address this problem, we developed a semi-automated software tool, Genome Puzzle Master (GPM), that enables the integration of additional genomic signposts to edit and build 'new-gen-assemblies' that result in high-quality 'annotation-ready' pseudomolecules. Results: With GPM, loaded datasets can be connected to each other via their logical relationships which accomplishes tasks to 'group,' 'merge,' 'order and orient' sequences in a draft assembly. Manual editing can also be performed with a user-friendly graphical interface. Final pseudomolecules reflect a user's total data package and are available for long-term project management. GPM is a web-based pipeline and an important part of a Laboratory Information Management System (LIMS) which can be easily deployed on local servers for any genome research laboratory.
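
The 'order and orient' task can be illustrated with a toy version that sorts contigs by genetic-map position and flips any contig whose internal marker order disagrees with the map (the data layout is an assumption for illustration, not GPM's actual model):

```python
# Each contig carries the map positions of a marker near its left and
# right end; contigs are ordered along the chromosome by their mean
# position and oriented '+' or '-' by the direction of their markers.
def order_and_orient(contigs):
    """contigs: {name: (map_pos_of_left_end, map_pos_of_right_end)}"""
    ordered = sorted(contigs.items(), key=lambda kv: sum(kv[1]) / 2)
    return [(name, '+' if left <= right else '-')
            for name, (left, right) in ordered]

layout = order_and_orient({
    'ctg1': (12.0, 9.5),    # markers in reverse order: needs flipping
    'ctg2': (1.0, 3.5),
    'ctg3': (5.0, 7.0),
})
assert layout == [('ctg2', '+'), ('ctg3', '+'), ('ctg1', '-')]
```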
13

Malenta, Mateusz. "Exploring the dynamic radio sky with many-core high-performance computing". Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/exploring-the-dynamic-radio-sky-with-manycore-highperformance-computing(fe86c963-e253-48c0-a907-f8b59c44cf53).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As new radio telescopes and processing facilities are being built, the amount of data that has to be processed is growing continuously. This poses significant challenges, especially if real-time processing is required, which is important for surveys looking for poorly understood objects, such as Fast Radio Bursts, where quick detection and localisation can enable rapid follow-up observations at different frequencies. With the data rates increasing all the time, new processing techniques using the newest hardware, such as GPUs, have to be developed. A new pipeline, called PAFINDER, has been developed to process data taken with a phased array feed, which can generate up to 36 beams on the sky, with data rates of 25 GBps per beam. With the majority of the work done on GPUs, the pipeline reaches real-time performance when generating filterbank files used for offline processing. The full real-time processing, including single-pulse searches, has also been implemented and has been shown to perform well under favourable conditions. The pipeline was successfully used to record and process data containing observations of RRAT J1819-1458 and positions on the sky where 3 FRBs had been observed previously, including the repeating FRB121102. Detailed examination of the J1819-1458 single-pulse detections revealed a complex emission environment, with pulses coming from three different rotation phase bands and a number of multi-component emissions. No new FRBs and no repeated bursts from FRB121102 were detected. The GMRT High Resolution Southern Sky survey observes the sky at high galactic latitudes, searching for new pulsars and FRBs. 127 hours of data have been searched for the presence of any new bursts, with the help of a new pipeline developed for this survey. No new FRBs have been found, which may be the result of severe RFI pollution that was not fully removed, despite new mitigation techniques being developed and combined with existing solutions. Using the best estimates of the total amount of data that has been processed correctly, obtained using new single-pulse simulation software, no detections were found to be consistent with the expected rates for standard candle FRBs with a flat or positive spectrum.
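
The single-pulse search such pipelines perform is typically a boxcar matched filter over the dedispersed time series; a generic sketch follows (the widths, threshold, and data are illustrative, not the PAFINDER configuration):

```python
import statistics

# A dedispersed time series is convolved with boxcars of several widths,
# and samples whose boxcar signal-to-noise exceeds a threshold are
# reported as candidate pulses.
def boxcar_search(series, widths=(1, 2, 4), threshold=5.0):
    mean = statistics.fmean(series)
    std = statistics.pstdev(series) or 1.0
    hits = []
    for w in widths:
        for i in range(len(series) - w + 1):
            snr = (sum(series[i:i + w]) - w * mean) / (std * w ** 0.5)
            if snr > threshold:
                hits.append((i, w, snr))
    return hits

noise = [0.1, -0.2, 0.0, 0.15, -0.1, 0.05, -0.05, 0.1] * 8
series = list(noise)
series[30] += 4.0            # inject a narrow pulse into the noise
hits = boxcar_search(series)
assert any(i == 30 for i, w, snr in hits)   # the injected pulse is found
```

Trying several widths matters because a pulse broader than one sample gains signal-to-noise only when the boxcar roughly matches its duration.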
14

Ayala, Cabrera David. "Characterization of components of water supply systems from GPR images and tools of intelligent data analysis". Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/59235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
[EN] Over time, due to multiple operational and maintenance activities, the networks of water supply systems (WSSs) undergo interventions and modifications, or are even closed down. In many cases, these activities are not properly registered. Knowledge of the paths and characteristics (status, age, etc.) of WSS pipes is clearly necessary for efficient and dynamic management of such systems. The problem is greatly compounded by the detection and control of leaks. Access to reliable leakage information is a complex task: in many cases, leaks are detected only when the damage is already considerable, which brings high social and economic costs. In this sense, non-destructive methods (e.g., ground penetrating radar, GPR) may be a constructive response to these problems, since they make it possible, as evidenced in this thesis, to ascertain the paths of pipes, identify component characteristics, and detect water leaks at an early stage. The selection of GPR in this work is justified by its characteristics as a non-destructive technique that allows both metallic and non-metallic objects to be studied. Although the capture of information with GPR is usually successful, aspects such as the capture settings, the large volume of generated information, and the use and interpretation of that information require a high level of skill and experience. This dissertation may be seen as a step towards the development of tools able to tackle the lack of knowledge about buried WSS assets. The main objective of this doctoral work is thus to generate tools and assess the feasibility of applying them to the characterization of WSS components from GPR images. We have carried out laboratory tests specifically designed to propose, develop, and evaluate methods for the characterization of buried WSS components. Additionally, we have conducted field tests, which have enabled us to determine the feasibility of applying these methodologies under uncontrolled conditions.
The methodologies developed are based on intelligent data analysis techniques. The basic principle of this work has been to process the data obtained with GPR in search of useful information about WSS components, with special emphasis on the pipes. After numerous activities, one can conclude that it is feasible to obtain more information from GPR images than the typical hyperbola identification currently performed. In addition, this information can be observed directly and more simply using the methodologies proposed in this doctoral work. These methodologies also show that it is feasible to identify patterns (especially with the preprocessing algorithm termed Agent race) that provide a fairly good approximation of the location of leaks in WSSs. In the case of pipes, other characteristics such as diameter and material can also be obtained. The main outcomes of this thesis are a series of tools developed to locate, identify, and visualize WSS components from GPR images. Most interestingly, the data are synthesized and reduced in such a way that the characteristics of the different components recorded in the GPR images are preserved. The ultimate goal is for the developed tools to facilitate decision-making in the technical management of WSSs, and for such tools to be operable even by personnel with limited experience in handling non-destructive methodologies, specifically GPR.
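The hyperbola identification mentioned above can be illustrated with a short sketch. This is not code from the thesis: all values (wave velocity, pipe depth, antenna positions) are hypothetical, and the fit simply exploits the fact that the squared two-way travel time of a point reflector is quadratic in antenna position.

```python
import numpy as np

# A pipe buried at depth d below surface position x0 produces a hyperbolic
# two-way travel-time curve in a GPR radargram:
#   t(x) = (2 / v) * sqrt(d**2 + (x - x0)**2)
v_true, x0_true, d_true = 0.1, 2.0, 0.5   # m/ns, m, m (hypothetical values)
x = np.linspace(0.0, 4.0, 41)             # antenna positions along the scan
t = (2.0 / v_true) * np.sqrt(d_true**2 + (x - x0_true)**2)

# t**2 is quadratic in x, so the pipe parameters follow from a linear
# least-squares fit: t^2 = a*x^2 + b*x + c.
a, b, c = np.polyfit(x, t**2, 2)
x0 = -b / (2 * a)                          # horizontal position of the pipe
v = 2.0 / np.sqrt(a)                       # wave velocity in the soil
d = np.sqrt(c / a - x0**2)                 # burial depth
```

In real radargrams the apex picking and noise make this harder, which is why the thesis resorts to intelligent data analysis rather than a direct fit.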
Ayala Cabrera, D. (2015). Characterization of components of water supply systems from GPR images and tools of intelligent data analysis [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59235
THESIS
Award-winning
15

Chen, Chi-Chung, e 陳啟中. "Efficient and Robust Pipeline Design for Multi-GPU DNN Training through Model Parallelism". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4s7avh.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 106 (2017/18)
The training process of a Deep Neural Network (DNN) is compute-intensive, often taking days to weeks to train a model. Parallel execution of DNN training on GPUs is therefore a widely adopted approach to speed up the process. Due to its implementation simplicity, data parallelism is currently the most commonly used parallelization method. Nonetheless, data parallelism suffers from excessive inter-GPU communication overhead caused by frequent weight synchronization among GPUs. Another approach is model parallelism, which partitions the model across GPUs. This approach can significantly reduce inter-GPU communication cost compared to data parallelism; however, maintaining load balance is a challenge. Moreover, model parallelism faces the staleness issue: gradients are computed with stale weights. In this thesis, we propose a novel model parallelism method that achieves load balance by concurrently executing the forward and backward passes of two batches, and resolves the staleness issue with weight prediction. The experimental results show that our proposal achieves up to 15.77x speedup over data parallelism and up to 2.18x speedup over the state-of-the-art model parallelism method without incurring accuracy loss.
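As an illustration of the weight-prediction idea described above (a sketch, not the thesis's actual implementation): if the optimiser is momentum SGD, a pipeline stage holding weights that are a few steps stale can extrapolate along the current velocity before running its pass. All names and values below are hypothetical.

```python
import numpy as np

def predict_weights(w, velocity, lr, staleness):
    """Predict the weights `staleness` steps ahead, assuming momentum SGD
    keeps moving along the current velocity direction."""
    return w - lr * staleness * velocity

# Toy example: momentum SGD on f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0])
velocity = np.zeros(2)
lr, mu = 0.1, 0.9
for _ in range(3):                    # a few real optimisation steps
    grad = w
    velocity = mu * velocity + grad
    w = w - lr * velocity

# A stage whose copy of the weights is 2 steps stale can approximate the
# up-to-date weights before computing its forward pass:
w_pred = predict_weights(w, velocity, lr, staleness=2)
```

The prediction continues the descent direction, so gradients computed with `w_pred` are closer to those the up-to-date weights would produce.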
16

Lu, Chia-Hao, e 陸佳壕. "A neural network model based on OpenGL pipeline parameters for GPU rendering power estimation". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/pud486.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
Academic year 107 (2018/19)
As technology progresses, power consumption has become a characteristic that modern computer design cannot ignore. Excessive power consumption causes problems such as overheating, reduced battery endurance, and wasted energy. Therefore, when designing a computer system, a power estimation model is needed to help manage and evaluate power consumption. The GPU is the most power-consuming component, so much research has been devoted to building power estimation models for GPUs. However, in GPU graphics-rendering power estimation, most existing studies are suitable only for embedded systems, not for desktop computers. This thesis therefore proposes a neural network model for GPU power estimation based on OpenGL pipeline parameters. Because it relies only on the OpenGL interface, the model can be used on any embedded or desktop system that supports OpenGL. Moreover, it considers more power-related features than previous work and is trained with a neural network, yielding more accurate power estimates. This thesis also proposes a series of methods for extracting rendering data from the OpenGL specification and converting it into OpenGL pipeline parameters.
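A minimal stand-in for such a power model, with a linear least-squares fit in place of the thesis's neural network and entirely synthetic pipeline counters, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-frame OpenGL pipeline counters (vertices processed,
# fragments shaded, texture fetches) for 200 rendered frames, normalised.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([5.0, 20.0, 8.0])               # synthetic ground truth (W)
y = X @ true_w + 30.0 + rng.normal(0, 0.1, 200)   # power with a 30 W baseline

# Fit power = w . counters + bias by linear least squares.
A = np.hstack([X, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef                                    # estimated power per frame
```

A neural network, as in the thesis, replaces the linear map with a learned nonlinear one, but the workflow (counters in, watts out) is the same.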
17

楊宗寶. "The Study of GPR in the Underground Pipeline Investigation". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/76011984096132465560.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
Minghsin University of Science and Technology
Institute of Construction Engineering and Management
Academic year 97 (2008/09)
In recent years, rapid urban development and dense populations have, for reasons of both safety and aesthetics, made underground pipeline engineering increasingly extensive and complicated. However, new construction or routine pipeline maintenance often requires large-area excavation of the work zone, and the positions of existing underground pipelines are usually taken from old as-built drawings; accidentally digging into a pipeline, with the resulting project delays and property losses, is therefore not uncommon. Ground Penetrating Radar (GPR) is a modern non-destructive testing method, and non-destructive inspection is the trend for future development. Before construction begins, a portable instrument can locate the positions and alignments of underground pipelines without excavating or disturbing the ground surface or structures, which greatly assists the maintenance of water, gas, power, telecommunication, and sewer lines. By using GPR survey techniques to compare the detected pipelines against current records within the construction area, accidental strikes can be reduced, the amount of excavation lowered, and the safety and quality of the construction improved.
In this study, the SIR-20 GPR system, purchased by Minghsin University of Science and Technology in academic year 93 (2004/05) under the Ministry of Education project "Promoting University Competitiveness", was used to survey underground pipelines on campus. Surveys were carried out before and after an underground pipeline was laid, the results were compared, and the data were archived as historical records. The surveys showed that the as-laid positions agreed with the actual construction, and the detected pipeline positions proved quite accurate. It is hoped that these results will help future underground pipeline surveys to effectively reduce the risk of accidental strikes and to lessen traffic disruption and public inconvenience during construction.
18

Cheng, An-Ting, e 鄭安庭. "A GPU Accelerated, Pipelined and Multi-Thread Framework for Long Noisy Genome Sequence". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/6ert9e.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Chiao Tung University
Institute of Electronics
Academic year 107 (2018/19)
Compared with traditional DNA sequencing technologies, third-generation sequencing (TGS) platforms such as Oxford Nanopore can generate relatively long reads within a short amount of time using a portable device. However, the reads generated often have a high error rate, ranging from 15% to 35%. Among the existing popular alignment algorithms, only very few are designed to handle long reads with such a high error rate, and these algorithms also tend to be slow, suggesting their efficiency could be further improved. We modify the COSINE algorithm and effectively accelerate its overall computation by taking advantage of the power of Nvidia GPGPUs, optimizing data sharing, and managing the pipeline between the CPU and the GPGPU. As a result, we provide a fast DNA sequence alignment framework that supports various input formats, including FASTQ, FASTA, and SAM, with both single-end and paired-end reads. Compared with the original COSINE algorithm, we achieve a throughput 1.5 to 5.5 times higher while maintaining its accuracy.
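The CPU/GPU pipelining mentioned above can be sketched as a toy two-stage producer/consumer pipeline. This is purely illustrative: a plain Python thread stands in for the GPU kernel, and the "alignment" is a placeholder that just records the read length.

```python
import queue
import threading

# Stage 1 (CPU): parse/normalise reads and feed them into a bounded queue.
def cpu_parse(reads, q):
    for r in reads:
        q.put(r.upper())              # pretend parsing work
    q.put(None)                       # sentinel: no more work

# Stage 2 ("GPU"): consume reads as they arrive, overlapping with stage 1.
def gpu_align(q, results):
    while (read := q.get()) is not None:
        results.append((read, len(read)))   # placeholder for an alignment score

reads = ["acgt", "ttaga", "ccg"]
q, results = queue.Queue(maxsize=2), []
t1 = threading.Thread(target=cpu_parse, args=(reads, q))
t2 = threading.Thread(target=gpu_align, args=(q, results))
t1.start(); t2.start(); t1.join(); t2.join()
```

The bounded queue is what lets the two stages run concurrently without unbounded buffering, which is the essence of overlapping CPU parsing with GPU computation.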
19

Hsieh, Ginger, e 謝鈞傑. "A Pipelined Sequence Alignment Algorithm on Multiple GPUs". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3bu7zf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 106 (2017/18)
In recent years, DNA sequencing techniques and bioinformatics data management have developed rapidly. Sequence alignment is usually solved by heuristic methods due to the excessive computation time of exact methods. Smith-Waterman (SW) is an exact algorithm for local alignment. GPUs are highly parallel architectures well suited to data-parallel problems, so it is worthwhile to develop parallel algorithms on multiple GPUs to accelerate SW computation. The aim of this thesis is to propose a pipelined sequence alignment algorithm on multiple GPUs for speeding up the alignment of huge sequences. Consecutive rows of the DP matrix are grouped into an execution-bank, which is assigned to a block for execution; the DP matrix can thus be regarded as a sequence of execution-banks. The current block writes the state of the last row of its own execution-bank into a pipeline for the next block to read. The manager process on the host is only in charge of streamlining kernel invocations and storing the special rows. This pipelined design keeps synchronization to a minimum, significantly improving system performance.
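For reference, the recurrence that these GPU schemes parallelise is the plain Smith-Waterman DP. A minimal scoring-only version (not the thesis's multi-GPU code; scoring parameters are illustrative) is:

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Plain O(len(a)*len(b)) Smith-Waterman local alignment score.
    GPU versions parallelise anti-diagonals or row bands (execution-banks,
    as in the thesis above); the recurrence itself is unchanged."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,   # match/mismatch
                          H[i - 1, j] + gap,     # deletion
                          H[i, j - 1] + gap)     # insertion
    return int(H.max())
```

Because cell (i, j) depends only on cells above and to the left, rows can be streamed between banks exactly as the pipelined design describes.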
20

Liu, Thou-Chieh, e 劉守傑. "Study On the Feasibility of Applying GPR to Quality Control for Pipeline Construction". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/es6nvk.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Taipei University of Technology
Institute of Civil and Disaster Prevention Engineering
Academic year 101 (2012/13)
In recent years, to improve quality of life, the Taipei City Government has promoted road-levelling projects and actively managed road construction and maintenance, reviewing past deficiencies in order to improve road service quality. Today's city and county roads carry an intricate network of buried pipelines, and the growing number of excavation permits inconveniences road users; since industry and consumers use many kinds of pipelines in large volume, any laying, maintenance, or relocation of a pipeline requires excavating the road. However, the quality of construction contractors varies: backfill is often inadequately compacted, and manholes and handholes cause elevation differences and uneven pavement, so existing roads develop cracks, rutting, and potholes. This seriously degrades road smoothness below the service standard and inconveniences the travelling public. In this study, non-destructive testing is used to complement the existing post-construction inspection mechanism by detecting the position of underground pipelines in advance. The detection takes only about 8% of the original inspection time, roughly 20 minutes, and the results show with 95% confidence that the detection error lies within the allowable tolerance. This study therefore concludes that applying ground penetrating radar to pipeline inspection on roads is feasible.
21

Chien-Ting, Wu, e 吳建廷. "A Study of The Use of INS/GPS in Positioning Pipelines". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/23985134988059092927.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Chiao Tung University
Department of Civil Engineering
Academic year 88 (1999/2000)
There are many successful examples of INS/GPS integration worldwide, mostly in military and navigation applications. Over short baselines, the precision of kinematic GPS and real-time kinematic (RTK) positioning has already reached the centimetre level. High-precision INS instruments are expensive, so cheaper INS instruments are used for positioning when GPS surveying does not work. Integrating GPS with INS avoids the weaknesses of each system and raises the accuracy of the positioning results. In this study, two simulated INS data sets and data collected from the Center for Mapping at the Ohio State University were used for positioning, processed with a decentralized Kalman filter and a 15-parameter INS model. Moreover, the Kalman filter processing procedure was modified to achieve automatic, real-time positioning. The results show that dividing the data and automatically adjusting the Kalman filter system noise covariance matrix can combine both systems into an optimal state. During the intentional gaps in the GPS observations of the OSU field data, the difference between the INS position and the kinematic GPS solution reaches 35 cm within 20 seconds, although this requires more computation time. On the other hand, using two different system noise covariance matrices, depending on whether a GPS signal is present, gives better results than using a single covariance matrix regardless of GPS availability.
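The core of GPS/INS fusion is the Kalman measurement update. A one-dimensional sketch with made-up numbers (not the thesis's decentralized 15-parameter model) shows how an accurate GPS fix corrects a drifted INS estimate:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse the predicted state x
    (variance P) with a measurement z (variance R)."""
    K = P / (P + R)                       # Kalman gain
    return x + K * (z - x), (1 - K) * P   # corrected state and variance

# INS dead-reckoning drifts between fixes; GPS corrects it when available.
x, P = 9.0, 4.0       # INS prior: position estimate (m) and its variance
z, R = 10.2, 0.01     # GPS fix: accurate to the centimetre level
x, P = kalman_update(x, P, z, R)
```

Because R is much smaller than P, the gain is near 1 and the fused estimate snaps to the GPS fix; during GPS outages no update occurs and the INS variance P grows, which is what the adaptive noise covariance in the thesis manages.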
22

Chun-HaoYang e 楊濬豪. "Detect the Crack of Wood Structure and Identify the Materials of Underground pipeline Using GPR". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/8nn5py.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
23

Tiwari, Manasi. "Communication Overlapping Krylov Subspace Methods for Distributed Memory Systems". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5990.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Many high performance computing applications in computational fluid dynamics, electromagnetics, etc., need to solve a linear system of equations $Ax=b$. For linear systems where $A$ is generally large and sparse, Krylov Subspace Methods (KSMs) are used. In this thesis, we propose communication overlapping KSMs. We start with the Conjugate Gradient (CG) method, which is used when $A$ is sparse symmetric positive definite. Recent variants of CG include a Pipelined CG (PIPECG) method which overlaps the allreduce in CG with independent computations, i.e., one Preconditioner (PC) application and one Sparse Matrix Vector Product (SPMV). As we move towards the exascale era, the time for global synchronization and communication in the allreduce increases with the large number of cores available in exascale systems, and the allreduce time becomes the performance bottleneck, leading to poor scalability of CG. It therefore becomes necessary to reduce the number of allreduces in CG and to overlap the larger allreduce time with more independent computations than PIPECG provides. Towards this goal, we have developed PIPECG-OATI (PIPECG-One Allreduce per Two Iterations), which reduces the number of allreduces from three per iteration to one per two iterations and overlaps it with two PCs and two SPMVs. For better scalability with more overlapping, we also developed the Pipelined s-step CG method, which reduces the number of allreduces to one per s iterations and overlaps it with s PCs and s SPMVs. We compared our methods with state-of-the-art CG variants on a variety of platforms and demonstrated that our methods give 2.15x - 3x speedup over the existing methods. We have also generalized our research beyond CG on multi-node CPU systems in two directions.
First, we have developed communication overlapping variants of KSMs other than CG, including the Conjugate Residual (CR), Minimum Residual (MINRES), and BiConjugate Gradient Stabilised (BiCGStab) methods for matrices with different properties. The pipelined variants give up to 1.9x, 2.5x, and 2x speedup over the state-of-the-art MINRES, CR, and BiCGStab methods respectively. Second, we developed communication overlapping CG variants for GPU-accelerated nodes, proposing and implementing three hybrid CPU-GPU execution strategies for the PIPECG method: the first two achieve task parallelism and the third achieves data parallelism. Our experiments on GPUs showed that our methods give 1.45x - 3x average speedup over existing CPU- and GPU-based implementations, and the third method gives up to 6.8x speedup for problems that cannot fit in GPU memory. We also implemented GPU-related optimizations for the PIPECG-OATI method and showed performance improvements over other GPU implementations of PCG and PIPECG on multiple nodes with multiple GPUs.
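For context, the baseline that the pipelined variants rearrange is textbook CG. A minimal NumPy version, with comments marking where the distributed allreduces and the SpMV would occur, is:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=100):
    """Textbook (unpreconditioned) Conjugate Gradient for SPD A. Each
    iteration needs two dot products (allreduces, when distributed) and one
    SpMV; pipelined variants such as PIPECG rearrange the recurrences so the
    allreduce can overlap with the SpMV and preconditioner work."""
    x = np.zeros_like(b)
    r = b.copy()                  # initial residual (x0 = 0)
    p = r.copy()
    rs = r @ r                    # dot product -> allreduce #1
    for _ in range(maxiter):
        Ap = A @ p                # SpMV
        alpha = rs / (p @ Ap)     # dot product -> allreduce #2
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # small SPD example
b = np.array([1.0, 2.0])
x = cg(A, b)
```

The dependence of each dot product on the preceding SpMV is what serialises communication and computation here, and removing that dependence is exactly what the pipelined and s-step reformulations in the thesis target.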

Vai alla bibliografia