
Dissertations / Theses on the topic 'Performance rendering'


Consult the top 50 dissertations / theses for your research on the topic 'Performance rendering.'


1

Hensley, Justin Lastra Anselmo. "Increasing rendering performance of graphics hardware." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,803.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007.
Title from electronic title page (viewed Dec. 18, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
2

Larsen, Matthew. "Performance Modeling of In Situ Rendering." Thesis, University of Oregon, 2017. http://hdl.handle.net/1794/22297.

Full text
Abstract:
With the push to exascale, in situ visualization and analysis will play an increasingly important role in high performance computing. Tightly coupling in situ visualization with simulations constrains resources for both, and these constraints force a complex balance of trade-offs. A performance model that provides an a priori answer for the cost of using an in situ approach for a given task would assist in managing the trade-offs between simulation and visualization resources. In this work, we present new statistical performance models, based on algorithmic complexity, that accurately predict the run-time cost of a set of representative rendering algorithms, an essential in situ visualization task. To train and validate the models, we create data-parallel rendering algorithms within a light-weight in situ infrastructure, and we conduct a performance study of an MPI+X rendering infrastructure used in situ with three HPC simulation applications. We then explore feasibility issues using the model for selected in situ rendering questions.
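The kind of a priori cost model the abstract describes can be illustrated with a toy model that is linear in algorithmic-complexity terms. The model form, the coefficient values, and the two-point calibration below are illustrative assumptions, not Larsen's fitted models:

```python
# Toy a-priori cost model for an in situ rendering step: predicted
# frame time is linear in the complexity terms (cells processed,
# pixels composited). All coefficients are illustrative placeholders.

def predict_render_ms(n_cells, n_pixels, coeff_cells=2.0e-6,
                      coeff_pixels=1.5e-7, overhead_ms=0.8):
    """Predict frame render time in milliseconds.

    n_cells  : active mesh cells processed by the renderer
    n_pixels : framebuffer pixels composited
    """
    return overhead_ms + coeff_cells * n_cells + coeff_pixels * n_pixels

def calibrate_cell_cost(sample_a, sample_b):
    """Estimate the per-cell coefficient from two measurements taken
    at the same pixel count. Each sample is a (n_cells, time_ms) pair."""
    (c_a, t_a), (c_b, t_b) = sample_a, sample_b
    return (t_b - t_a) / (c_b - c_a)
```

In practice the coefficients would be fitted from many measured runs per algorithm, which is what lets the model answer "what will this in situ render cost?" before committing resources.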
3

Poliakov, Vladislav. "Light Performance Comparison between Forward, Deferred and Tile-based Forward Rendering." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20584.

Full text
Abstract:
Background. In this experiment, forward, deferred, and tile-based forward rendering techniques are implemented to study the light-rendering performance of each. Nowadays most games and programs contain graphical content, produced through different kinds of rendering operations, and these operations are continually developed and optimized by graphics programmers to achieve better performance. Forward rendering is the standard technique, which pushes the geometry data through the whole rendering pipeline to build up the final image. Deferred rendering, on the other hand, is divided into two passes: the first pass rasterizes the geometry data into g-buffers, and the second pass, also called the lighting pass, uses the data from the g-buffers and rasterizes the light sources to build up the final image. The third technique, tile-based forward rendering, is also divided into two passes: the first pass creates a frustum grid and performs light culling, and the second pass rasterizes all the geometry data to the screen, as in standard forward rendering. Objectives. The objective is to implement the three rendering techniques in order to find the optimal technique for light rendering in different environments, then analyze the test results to answer the research questions and reach a conclusion. Methods. The problem was addressed using the method "Implementation and Experimentation". A render engine with the three rendering techniques was implemented using C++ and the OpenGL API. The tests were implemented in the render engine, and each test ran for five minutes. The data from the tests was used to create diagrams for evaluating the results. Results. The results showed that standard forward rendering outperformed tile-based forward rendering and deferred rendering with few lights in the scene. When the number of lights became large, deferred rendering showed the best light performance. Tile-based forward rendering was not as strong as expected; a possible reason is the implementation method, since the culling procedures were performed on the CPU side. During the tests of tile-based forward rendering, 4 tiles were used in the frustum grid, since this configuration showed the highest performance compared to other tile configurations. Conclusions. In environments with a limited number of light sources, the optimal rendering technique was standard forward rendering; in environments with many light sources, deferred rendering should be used. If tile-based forward rendering is used, it should be used with 4 tiles in the frustum grid. The hypothesis of this study was only partially confirmed: the prediction for a limited number of lights held, while the other parts were disproven. Tile-based forward rendering was not strong enough, possibly because the implementation ran on the CPU side.
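The light-culling pass that distinguishes tile-based forward rendering can be sketched on the CPU, which is also where this thesis performed it. The tile grid size, the 2D screen-space light bounds, and the per-tile list layout below are illustrative simplifications, not the thesis's implementation:

```python
# Minimal CPU-side sketch of the light-culling pass in tile-based
# forward rendering: the screen is split into a tile grid, and each
# light is assigned to every tile its screen-space bounds overlap.

def cull_lights(lights, screen_w, screen_h, tiles_x, tiles_y):
    """lights: list of (center_x, center_y, radius) in pixels.
    Returns per-tile lists of light indices, indexed [ty][tx]."""
    tile_w = screen_w / tiles_x
    tile_h = screen_h / tiles_y
    grid = [[[] for _ in range(tiles_x)] for _ in range(tiles_y)]
    for i, (cx, cy, r) in enumerate(lights):
        x0 = max(0, int((cx - r) // tile_w))
        x1 = min(tiles_x - 1, int((cx + r) // tile_w))
        y0 = max(0, int((cy - r) // tile_h))
        y1 = min(tiles_y - 1, int((cy + r) // tile_h))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                grid[ty][tx].append(i)
    return grid
```

The shading pass then only iterates the lights listed for a pixel's tile instead of every light in the scene, which is where the technique's advantage with many lights comes from.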
4

Zellmann, Stefan [Verfasser], and Ulrich [Akademischer Betreuer] Lang. "Interactive High Performance Volume Rendering / Stefan Zellmann. Gutachter: Ulrich Lang." Köln : Universitäts- und Stadtbibliothek Köln, 2014. http://d-nb.info/1056999675/34.

Full text
5

le Clercq, Anton, and Kristoffer Almroth. "Comparison of Rendering Performance Between Multimedia Libraries Allegro, SDL and SFML." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259756.

Full text
Abstract:
In this report the rendering performance of the multimedia libraries Allegro, SDL, and SFML has been compared. The highest performance is achieved by writing code directly against the low-level graphics APIs, though this requires much more work than using a multimedia library's graphical functions built on top of one of those APIs; thus it is common to use a multimedia library or similar for visualization tasks. The total number of frames rendered in one second was counted for static, alpha-blended, rotating, and moving images in each library. Every test was run with between 50 and 10,000 simultaneous images, and the programs were tested on six different computers: three laptops with integrated GPUs and low-power dual-core CPUs, and three desktop computers with discrete GPUs and quad-core CPUs with unlocked clock rates. Allegro performed best of the three on laptops when the image load was very high, but fell behind by up to 50% in all other cases. SDL had the strongest performance on desktop computers, especially when rendering very many images, making it a prime candidate for high-load graphical applications on desktops. SFML performed best overall, making it the best choice when targeting a wide range of different machines.
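The core measurement of this comparison, frames completed in one second, can be sketched in a library-agnostic way; `draw_frame` below stands in for the Allegro/SDL/SFML draw calls, and Python is used purely for illustration:

```python
import time

# Count how many times a draw callback completes within a one-second
# window, mirroring the report's frames-per-second metric. The clock
# is injectable so the counter can be tested deterministically.

def frames_in_one_second(draw_frame, clock=time.perf_counter):
    frames = 0
    start = clock()
    while clock() - start < 1.0:
        draw_frame()
        frames += 1
    return frames
```

Averaging several such one-second windows, as opposed to timing a single frame, smooths out scheduler noise and gives the per-library FPS figures the report compares.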
6

Pajot, Anthony. "Toward robust and efficient physically-based rendering." Toulouse 3, 2012. http://thesesups.ups-tlse.fr/2801/.

Full text
Abstract:
Physically-based rendering is used for design, illustration, and computer animation. It consists of producing photorealistic images by solving the equations that describe how light travels in a scene. Although these equations have been known for a long time and many algorithms for light simulation have been developed, no algorithm exists that solves them efficiently for every scene. Instead of trying to develop a new algorithm devoted to light simulation, we propose to enhance the robustness of most methods used nowadays and/or that may be developed in the years to come. We do this by first identifying the sources of non-robustness in a physically-based rendering engine, and then addressing them with specific algorithms. The result is a set of methods based on different mathematical and algorithmic tools, each aiming at improving a different part of a rendering engine. We also investigate how current hardware architectures can be used to their maximum to produce more efficient algorithms, without adding approximations. Although the contributions presented in this dissertation are meant to be combined, each of them can be used in a standalone way: they have been designed to be internally independent of each other.
7

Yuan, Ping. "A performance evaluation framework of multi-resolution algorithms for real-time rendering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60363.pdf.

Full text
8

Rahm, Marcus. "Forward plus rendering performance using the GPU vs CPU multi-threading: A comparative study of culling process in Forward plus." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14894.

Full text
Abstract:
Context. The rendering techniques used in games aim to shade the scene at as high a quality as possible while being as efficient as possible. More advanced tools, such as compute shaders, have allowed the shading process to be sped up further. One rendering technique that makes use of this is Forward plus rendering, which uses a compute shader to perform a culling pass over all the lights. However, not all computers can make use of compute shaders. Objectives. The aims of this thesis are to investigate the performance of using the CPU to perform the light culling required by the Forward plus rendering technique, comparing it to the performance of a GPU implementation, and to explore whether the CPU can be an alternative solution for the light culling in Forward plus. Methods. The standard Forward plus technique is implemented using a compute shader, after which Forward plus is implemented using a multithreaded CPU to perform the light culling. Both versions are evaluated by sampling the frames per second during tests with specific properties. Results. The results show that there is a difference in performance between the CPU and GPU implementations of Forward plus. The difference is fairly significant: with 256 lights rendered, the GPU implementation achieves 126% more frames per second than the CPU implementation. However, the results also show that the performance of the CPU implementation is viable, as it stays above 30 frames per second with fewer than 2048 lights in the scene. It also outperforms basic Forward rendering: with 64 lights, the CPU implementation achieves 133% more frames per second than basic Forward rendering. Conclusions. This thesis concludes that a multithreaded CPU can be used to cull lights for Forward plus rendering, and that it is a viable choice over basic Forward rendering.
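The CPU alternative the thesis evaluates amounts to partitioning the Forward plus tile grid across worker threads, each culling lights for its own tiles. The sketch below uses 2D circle lights and a simplified circle-vs-rectangle tile test, both illustrative; it shows the work split, not the thesis's C++ implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def cull_tile(tile_rect, lights):
    """Return indices of lights overlapping one tile.
    tile_rect: (x0, y0, x1, y1); lights: list of (cx, cy, radius)."""
    tx0, ty0, tx1, ty1 = tile_rect
    visible = []
    for i, (cx, cy, r) in enumerate(lights):
        # circle vs. rectangle: clamp the center to the tile, then
        # compare the distance to the light radius
        nx = min(max(cx, tx0), tx1)
        ny = min(max(cy, ty0), ty1)
        if (cx - nx) ** 2 + (cy - ny) ** 2 <= r * r:
            visible.append(i)
    return visible

def cull_multithreaded(tiles, lights, workers=4):
    """Cull each tile on a worker thread; results keep tile order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: cull_tile(t, lights), tiles))
```

In C++, as in the thesis, such workers run truly in parallel; Python threads here only illustrate how the culling work is partitioned.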
9

Dahlin, Alexander. "Improving Ray Tracing Performance with Variable Rate Shading." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21945.

Full text
Abstract:
Background. Hardware-accelerated ray tracing has enabled ray traced reflections for real-time applications such as games. However, the number of rays traced each frame must be kept low to achieve expected frame rates. Therefore, techniques such as rendering the reflections at quarter resolution are used to limit the number of rays traced each frame. The new hardware features inline ray tracing and hardware variable rate shading (VRS) could be combined to limit the rays even further. Objectives. The first goal is to use hardware VRS to limit the number of rays even further than rendering the reflections at quarter resolution, while maintaining the visual quality of the final rendered image. The second goal is to determine whether inline ray tracing provides better performance than using ray generation shaders. Methods. Experiments are performed on a ray traced reflections pipeline using different techniques to generate rays: inline ray tracing, inline ray tracing combined with VRS, and ray generation shaders. These are compared and evaluated using performance tests and the image evaluator FLIP. Results. The results show that limiting the number of rays with hardware VRS results in a performance increase, while the difference in visual quality between inline ray tracing with VRS and previous techniques remains comparable. The performance tests show that inline ray tracing performs worse than ray generation shaders as scene complexity increases. Conclusions. The conclusion is that hardware VRS can be used to limit the number of rays and achieve better performance while visual quality remains comparable to previous techniques. Inline ray tracing does not perform better than ray generation shaders for workloads similar to ray traced reflections.
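How a VRS-style shading-rate image limits the ray budget can be shown with simple arithmetic: a block marked coarse traces one ray and broadcasts the result to all its pixels, while a fine block traces one ray per pixel. Restricting the rates to 1x1 and 2x2 is an illustrative simplification of the hardware's rate set:

```python
# Count reflection rays for a frame tiled into coarse blocks.
# rate_image is a 2D list over blocks with entries '1x1' (trace a
# ray per pixel) or '2x2' (trace one ray, reuse it for the block).

def count_rays(rate_image, block=2):
    """Rays traced for a (rows*block) x (cols*block) pixel target."""
    rays = 0
    for row in rate_image:
        for rate in row:
            rays += block * block if rate == '1x1' else 1
    return rays
```

A rate image that marks, say, rough or slowly varying surfaces as 2x2 cuts the ray count toward one quarter in those regions, which is the performance lever the thesis measures.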
10

Abidi, Faiz Abbas. "Remote High Performance Visualization of Big Data for Immersive Science." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78210.

Full text
Abstract:
Remote visualization has emerged as a necessary tool in the analysis of big data. High-performance computing clusters can provide several benefits in scaling to larger data sizes, from parallel file systems to larger RAM profiles to parallel computation among many CPUs and GPUs. For scalable data visualization, remote visualization tools and infrastructure are critical: only pixels and interaction events are sent over the network instead of the data. In this work, we present our pipeline using VirtualGL, TurboVNC, and ParaView to render over 40 million points using remote HPC clusters and project over 26 million pixels in a CAVE-style system. We benchmark the system by varying the video stream compression parameters supported by TurboVNC and establish some best practices for typical usage scenarios. This work will help research scientists and academics in scaling their big data visualizations for real-time interaction.
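The scaling argument behind the abstract, that the streamed payload depends on display resolution and compression rather than dataset size, can be made concrete with back-of-the-envelope arithmetic. The 26-million-pixel figure comes from the abstract; the frame rate and 20:1 compression ratio are assumed example values, not the thesis's measurements:

```python
# Estimate the network rate of a pixel stream. Note that dataset
# size never appears: only resolution, frame rate, and compression.

def stream_mbps(pixels, fps, bytes_per_pixel=3, compression_ratio=20):
    """Streamed rate in megabits per second."""
    raw_bytes_per_s = pixels * bytes_per_pixel * fps
    return raw_bytes_per_s / compression_ratio / 1e6 * 8

rate = stream_mbps(26_000_000, 30)  # 936.0 Mbit/s under these assumptions
```

Tuning TurboVNC's compression parameters moves `compression_ratio` (and image quality), which is exactly the trade-off the thesis benchmarks.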
Master of Science
11

J'lali, Yousra. "DirectX 12: Performance Comparison Between Single- and Multithreaded Rendering when Culling Multiple Lights." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20201.

Full text
Abstract:
Background. As newer computers are constructed, more advanced and powerful hardware comes with them. This leads corporations to enhance various program attributes and features to take hold of that hardware and thereby improve performance. A relatively new API that serves to facilitate this is Microsoft DirectX 12. There are numerous opinions about this specific API, and to get a better understanding of its capabilities for hardware utilization, this research puts it under test. Objectives. This article's aim is to perform tests and comparisons in order to find out which method has better performance when using DirectX 12: single-threading or multithreading. For performance measurements, the average CPU and GPU utilizations are gathered, as well as the average FPS and the time it takes to perform the Render function. When all results have been collected, the comparison between the methods is made. Methods. The main method used in this research is experimentation. To find out the performance differences between the two methods, they undergo different trials while data is gathered. There are four experiments for the single-threaded and multithreaded applications, respectively. Each test varies the number of lights and objects rendered in the simulation environment, gradually escalating from 50 to 100, then 1000, and lastly 5000. Results. A similar pattern was discovered across all four tests: the multithreaded application used considerably more of the CPU than the single-threaded version. And despite the single-threaded program issuing less simultaneous work to the GPU, it showed higher GPU utilization than the multithreaded version. Furthermore, the system with many threads tended to perform the Render function faster than its counterpart, regardless of which test was executed. Nevertheless, the two applications never differed in FPS. Conclusion. Half of the hypotheses stated in this article were contradicted after an unexpected turn of events. It was believed that the multithreaded system would utilize less of the CPU and more of the GPU; instead, the outcome opposed these hypotheses. Another theory held that the system with multiple threads would execute the Render function faster than the other version, a hypothesis strongly supported by the results. In addition, inserting more objects and lights into the scene did increase the applications' utilization of both the CPU and GPU, which supported another hypothesis. In conclusion, the multithreaded program performs faster but still gains no FPS compared to single-threading; the multithreaded version also utilizes more CPU and less GPU.
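The multithreaded setup the abstract describes rests on splitting the frame's draw calls so each worker can record its own command list. The contiguous chunking below is an illustrative scheme; the actual ID3D12GraphicsCommandList recording and queue submission are omitted:

```python
# Partition a frame's draw calls into one contiguous chunk per
# worker thread, with chunk sizes differing by at most one, so that
# each worker can record its chunk into its own command list and the
# chunks can be submitted in order.

def partition_draws(draw_calls, workers):
    n, k = len(draw_calls), workers
    chunks, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # spread the remainder
        chunks.append(draw_calls[start:start + size])
        start += size
    return chunks
```

Keeping the chunks contiguous preserves the original submission order when the recorded command lists are executed back-to-back, which is one reason this simple split is a common starting point.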
12

Hand, Randall Eugene. "A faster technique for rendering meshes in multiple display systems." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-04082002-165856.

Full text
13

Fast, Tobias. "Alpha Tested Geometry in DXR : Performance Analysis of Asset Data Variations." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19730.

Full text
Abstract:
Background. Ray tracing can be used to achieve hyper-realistic 3D rendering, but it is a computationally heavy task. Since hardware support for real-time ray tracing was released, the game industry has been introducing this feature into games. However, even modern hardware still experiences performance issues when implementing common rendering techniques with ray tracing. One of these problematic techniques is alpha testing. Objectives. The thesis investigates the following: 1) how the texture format of the alpha map and the number of alpha maps affect rendering times; 2) how tessellation of the alpha tested geometry affects performance, and whether tessellation has the potential to fully replace the alpha test from a performance perspective. Methods. A DXR 3D renderer is implemented, capable of rendering alpha tested geometry using an any-hit shader. The renderer was used to benchmark rendering times while varying texture and geometry data. Two alpha tested tree models were tessellated to various levels, and their related textures were converted into four formats for use in the test scenes. Results & Conclusions. When the texture formats BC7, R(1xfloat32), and BC4 were used for the alpha map, rendering times decreased in all cases relative to RGBA(4xfloat32). BC4 gave the best performance gain, decreasing rendering times by up to 17% with one alpha map per model and by up to 43% with eight alpha maps. Increasing the number of alpha maps used per model raised rendering times by up to 52% when going from one alpha map to two, and a large increase was observed when going from three to four alpha maps in all cases. Using alpha testing on the tessellated model versions increased rendering times in most cases, by at most 135%; a decrease of up to 8% was, however, observed when the models were tessellated to a certain degree. Turning off alpha testing gave a significant decrease in rendering times, allowing higher-tessellated versions to be rendered for all models. In one case, the number of triangles was increased by a factor of 78 while rendering times still decreased by 30% relative to the original alpha test implementation. This suggests that pre-tessellated models could potentially replace alpha tested geometry when performance is highly required.
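The any-hit alpha test at the heart of the study can be sketched on the CPU: sample the alpha map at the hit's UV coordinates and reject the intersection when the value falls below a cutoff, so the ray continues through transparent texels. Nearest-neighbour sampling and the 0.5 cutoff are illustrative choices, not the thesis's exact shader:

```python
# CPU-side sketch of an any-hit alpha test. In DXR the rejection
# would be IgnoreHit() inside the any-hit shader; here we just
# return whether the hit is accepted.

def alpha_test_hit(alpha_map, u, v, cutoff=0.5):
    """alpha_map: 2D row-major list of floats in [0, 1].
    u, v: hit texture coordinates in [0, 1].
    Returns True when the sampled texel is opaque enough to count."""
    h = len(alpha_map)
    w = len(alpha_map[0])
    x = min(w - 1, int(u * w))  # nearest-neighbour lookup
    y = min(h - 1, int(v * h))
    return alpha_map[y][x] >= cutoff
```

Every such rejected hit forces the traversal to continue, which is why the alpha map's memory format (and how often the test runs) shows up so clearly in the measured rendering times.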
14

Thompson, Maria do Rosário. "Psychophysical evaluations of modulated color rendering for energy performance of LED-based architectural lighting." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/38608.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2007.
Includes bibliographical references (p. 137-146).
This thesis is focused on the visual-perception evaluation of colors within an environment using a highly automated lighting control strategy. Digitally controlled lighting systems equipped with light-emitting diodes (LEDs) can produce a range of different qualities of light, adjustable to users' requirements. In this context of unparalleled controllability, a novel energy-saving lighting control concept inspired this research: strategic control of red, yellow, green, and blue LEDs forming white light can further increase energy efficiency. The resulting (more efficient) white light, however, would have decreased "color rendering" (i.e., the ability to accurately reproduce the colors of illuminated objects). The notable point is that while color rendering is necessarily affected, the appearance and light levels of the white light can stay the same. How objects' distorted colors are perceived within a real-life architectural context is thus a key ensuing question. This research investigated the hypothesis that a significant range of color distortions would be unnoticeable under a dynamically controlled LED system operating outside of users' main field of view. If successful, such a control technique could minimize peak-hour lighting energy waste and potentially enable up to 25% power reduction.
Three incremental series of psychophysical experiments were performed based on subjective assessment of color changes under continuously modulated color rendering from white LEDs. Visual tests were carried out for central and peripheral vision on a full scale mockup of an architectural scenario. Results confirmed the fundamental hypothesis, showing that the majority of subjects did not detect the color changes in their periphery while the same color changes were noticeable with direct observation. The conclusion chapter provides fundamental guidelines for how to extrapolate the experimental results into real life and apply the data to architectural settings. Hypothetical architectural scenarios are presented and the potential for energy savings is discussed.
by Maria do Rosário Thompson.
Ph.D.
15

Olofsson, Mikael. "Direct3D 11 vs 12 : A Performance Comparison Using Basic Geometry." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13598.

Full text
Abstract:
Context. Computer-rendered imagery, such as computer games, is a field in steady development. To render games, an application programming interface (API) is used to communicate with the graphics processing unit (GPU). Both the interfaces and the processing units develop steadily in order to push the limits of graphical rendering. Objectives. This thesis investigates whether the Direct3D 12 API provides higher rendering performance than its predecessor, Direct3D 11. Methods. The method used is an experiment, in which a benchmark rendering basic shaded geometry with both APIs was developed while measuring their performance. The focus was on testing API interaction and comparing Direct3D 11 against Direct3D 12. Results. Statistics gained from the benchmark suggest that in this experiment Direct3D 11 offered the best rendering performance in the majority of the cases tested, although Direct3D 12 performed better in specific scenarios. Conclusions. The benchmark gave contradictory results compared to other studies, which could depend on the implementation, software, or hardware used. In the tests, Direct3D 12 was closer to its Direct3D 11 counterpart when more cores were used; a platform with more processing cores available to execute in parallel could reveal whether Direct3D 12 can offer better performance in that experimental setting. In this study Direct3D 12 was implemented so as to imitate Direct3D 11; if the implementation were further aligned with Direct3D 12 recommendations, other results might be observed. Further study could be conducted to give a better evaluation of rendering performance.
APA, Harvard, Vancouver, ISO, and other styles
16

Karlsson, Axel, and Oscar Nordquist. "BabylonJS and Three.js : Comparing performance when it comes to rendering Voronoi height maps in 3D." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76449.

Full text
Abstract:
WebGL is a technique that allows the browser to run 3D applications with the help of the GPU. A Voronoi diagram is a set of polygons that can be used to illustrate worlds of islands. In a web application that uses Voronoi polygons to create two-dimensional worlds, there is a future vision to enable three-dimensional behavior. There are multiple frameworks and libraries that can be used to simplify the process of creating 3D applications in the browser. Because 3D applications can be demanding on performance, an experiment was conducted with BabylonJS and Three.js. In order to evaluate which of the two performed better, RAM, GPU and CPU usage were measured when translating two-dimensional Voronoi heightmaps into a 3D application. The results from this stress test show that Three.js outperformed BabylonJS.
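The core of a Voronoi heightmap, as used in the experiment above, is assigning each grid cell a value derived from its nearest seed point. A minimal, library-independent sketch of that generation step (the grid size, seed coordinates, and distance-based normalization rule below are illustrative assumptions, not taken from the thesis):

```python
import math

def voronoi_heightmap(width, height, seeds):
    """Height of each cell = normalized distance to its nearest Voronoi seed.

    The nearest-seed rule is what partitions the grid into Voronoi regions;
    mapping distance to height is one simple way to turn that into terrain.
    """
    max_d = math.hypot(width, height)  # upper bound on any distance, for normalization
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            d = min(math.hypot(x - sx, y - sy) for sx, sy in seeds)
            row.append(d / max_d)  # 0.0 at a seed, strictly below 1.0 elsewhere
        grid.append(row)
    return grid

heights = voronoi_heightmap(8, 8, [(1, 1), (6, 5)])
```

Either Three.js or BabylonJS would then only consume such a grid as per-vertex heights; the benchmark above measures that rendering step, not the generation.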
APA, Harvard, Vancouver, ISO, and other styles
17

Dan, Sjödahl. "Cascaded Voxel Cone-Tracing Shadows : A Computational Performance Study." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18250.

Full text
Abstract:
Background. Real-time shadows in 3D applications have for decades been implemented with a solution called Shadow Mapping or some variant of it. This solution is easy to implement and has good computational performance; nevertheless, it does suffer from some problems and limitations. But there are newer alternatives, and one of them is based on a technique called Voxel Cone-Tracing. This can be combined with a technique called Cascading to create Cascaded Voxel Cone-Tracing Shadows (CVCTS). Objectives. To measure the computational performance of CVCTS to get better insight into it, to provide data and findings that help developers make an informed decision on whether this technique is worth exploring, and to identify where the performance problems with the solution lie. Methods. A simple implementation of CVCTS was written in OpenGL, aimed at simulating a solution that could be used for outdoor scenes in 3D applications. It had several different parameters that could be changed, and computational performance measurements were made with these parameters at different settings. Results. The data was collected and analyzed before drawing conclusions. The results showed several parts of the implementation that could potentially be very slow and why this was the case. Conclusions. The slowest parts of the CVCTS implementation were the voxelization and cone-tracing steps. It might be possible to use the CVCTS solution from the thesis in, for example, a game if the settings are not too high, but that is a stretch. Little time could be spent during the thesis on optimizing the solution, and thus it is possible that its performance could be increased.
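The cone-tracing step named above amounts to marching along a cone through a pre-voxelized occlusion field, with the sampling footprint (and, in a real renderer, the mip level) growing with distance. A schematic CPU-side sketch, where the function names, stepping rule, and compositing are assumptions for illustration rather than the thesis implementation:

```python
def cone_trace_occlusion(sample_occlusion, origin, direction, aperture, max_dist, step_scale=1.0):
    """Accumulate occlusion along a cone through a voxelized scene.

    `sample_occlusion(pos, radius)` stands in for reading the voxel texture at
    a footprint of the given radius; the radius (and hence the step size, and
    the mip level in a real implementation) grows linearly with distance.
    """
    occlusion = 0.0
    t = 0.01  # small start offset avoids self-occlusion at the surface
    while t < max_dist and occlusion < 1.0:
        radius = max(t * aperture, 1e-3)
        pos = tuple(o + d * t for o, d in zip(origin, direction))
        sample = sample_occlusion(pos, radius)
        occlusion += (1.0 - occlusion) * sample  # front-to-back compositing
        t += radius * step_scale  # bigger footprint -> bigger step
    return min(occlusion, 1.0)
```

A shadow value is then one minus the accumulated occlusion toward the light; the per-pixel cost of this loop is part of what the thesis measured.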
APA, Harvard, Vancouver, ISO, and other styles
18

Sjöstedt, Max. "Ramverk för rendering av heatmaps, en jämförande studie : Prestandajämförelse för ramverk på webben." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18646.

Full text
Abstract:
Different types of data are used so that users can present data and base decisions on it; to make the data easier to understand, visualization is applied as a way of making it more readable and comprehensible. When data is to be visualized on the web, many different technologies can be used to render it. In this study, frameworks for rendering heatmaps are compared with respect to their performance, measured as rendering time. The two applications developed used D3.js and Three.js. These were then compared in an experiment in which measurements were performed on both technologies. The result was that Three.js was more suitable for visualizing heatmaps on the web.
APA, Harvard, Vancouver, ISO, and other styles
19

Johansson, Håkan. "Volume Raycasting Performance Using DirectCompute." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4977.

Full text
Abstract:
Volume rendering is quite an old concept for representing images, dating back to the 1980s. It is very useful in the medical field for visualizing the results of computer tomography (CT) and magnetic resonance tomography (MRT) in 3D. Apart from these two major applications and tech demos, there are not many other fields of use for volume rendering. Volumetric data does not impose the limitations on the shape of an object that ordinary meshes have. A popular way of rendering volume data is through an algorithm called volume raycasting. There is a big disadvantage with this algorithm, namely that it is computationally heavy for the hardware. However, there have been vast improvements in graphics cards (GPUs) in recent years, so with the first GPU implementation of volume raycasting dating from 2003, how does this algorithm perform on modern hardware? Can the performance of the algorithm be improved with the introduction of GPGPU (DirectCompute) in DirectX 11? The performance results of the basic version and the DirectCompute version were compared in this thesis and revealed a significant improvement in performance. Speedup was indeed possible when using DirectCompute to optimize volume raycasting.
Implementation, optimization and performance measurement of a volume rendering algorithm called volume raycasting. The optimization was performed using DirectCompute in DirectX 11.
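The raycasting loop the abstract refers to marches each ray through the voxel grid, compositing samples front to back and terminating early once the ray is nearly opaque; that early-out is a large part of why GPU implementations are fast. A minimal sketch under simplifying assumptions (nearest-voxel sampling, density doubling as "color" instead of a transfer function):

```python
def raycast_volume(volume, origin, direction, step=0.5, max_steps=256):
    """March one ray through a scalar voxel volume indexed volume[z][y][x],
    with densities in [0, 1], compositing samples front to back."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    intensity, alpha = 0.0, 0.0
    for i in range(max_steps):
        px, py, pz = (origin[k] + direction[k] * step * i for k in range(3))
        ix, iy, iz = int(px), int(py), int(pz)
        if not (0 <= ix < nx and 0 <= iy < ny and 0 <= iz < nz):
            continue  # a full implementation would clip the ray to the volume instead
        density = volume[iz][iy][ix]
        intensity += (1.0 - alpha) * density * density  # nearer samples weigh more
        alpha += (1.0 - alpha) * density
        if alpha > 0.99:
            break  # early ray termination
    return intensity, alpha
```

On the GPU, one such loop runs per pixel; DirectCompute dispatches them as independent threads, which is where the measured speedup comes from.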
APA, Harvard, Vancouver, ISO, and other styles
20

Lane, Tora. "Rendering the Sublime : A Reading of Marina Tsvetaeva's Fairy-Tale Poem The Swain." Doctoral thesis, Stockholm : Visby : Acta Universitatis Stockholmiensis ; Eddy.se [distributör], 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-31163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Sipes, Jordan. "Streamsurface Smoke Effect for Visualizing Dragon Fly CFD Data in Modern OpenGL with an Emphasis on High Performance." Wright State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wright1369083905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Sjöberg, Joakim, and Filip Zachrisson. "A Performance Comparison of Dynamic- and Inline Ray Tracing in DXR : An application in soft shadows." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21833.

Full text
Abstract:
Background. Ray tracing is a tool that can be used to increase the quality of the graphics in games. One application in graphics where ray tracing excels is generating shadows, because ray tracing can simulate how shadows arise in real life more accurately than rasterization techniques can. With the release of GPUs with hardware support for ray tracing, it can now be used in real-time graphics applications to some extent. However, it is still a computationally heavy task requiring performance improvements. Objectives. This thesis will evaluate the difference in performance of three ray-tracing methods in DXR Tier 1.1, namely dynamic ray tracing and two forms of inline ray tracing. To further investigate ray-tracing performance, soft shadows will be implemented to see if the driver can perform optimizations differently (depending on the choice of ray-tracing method) on the subsequent and/or preceding API interactions. With the pipelines implemented, benchmarks will be performed using different GPUs, scenes, and a varying number of shadow-casting lights. Methods. The scientific method is based on an experimental approach, using both implementation and performance tests. The experimental approach begins by extending an in-house DirectX 12 renderer. The extension includes ray-tracing functionality, so that hard shadows can be generated using both dynamic ray tracing and the inline forms. Afterwards, soft shadows are generated by implementing a state-of-the-art denoiser with some modifications, which is added to each ray-tracing method. Finally, the renderer is used to benchmark various scenes with varying numbers of shadow-casting lights and varying object complexity, to cover a broad range of scenarios that could occur in a game and/or in other similar applications. Results and Conclusions. The results gathered in this experiment suggest that under the experimental conditions of the chosen scenes, objects, and number of lights, AMD’s GPUs were faster when using dynamic ray tracing than when using inline ray tracing, whilst Nvidia’s GPUs were faster when using inline ray tracing than when using dynamic ray tracing. Also, with an increasing number of shadow-casting lights, the choice of ray-tracing method had low to no impact beyond linearly increasing the execution time in each test. Finally, adding soft shadows (subsequent and preceding API interactions) also had low to no relative impact on the results depending on the different ray-tracing methods.
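The shadow-ray test underlying both hard and soft shadows can be stated compactly: a surface point is lit by a light sample if no occluder lies between them, and soft shadows average that visibility test over jittered samples on an area light. A CPU-side sketch under simplifying assumptions (spherical occluders, a cubic light volume, normalized directions; all names are invented for illustration):

```python
import random

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """True if the normalized ray hits the sphere before max_t (occluder test)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is assumed normalized, so a == 1
    if disc < 0.0:
        return False
    t = (-b - disc ** 0.5) / 2.0
    return 1e-4 < t < max_t  # small epsilon avoids self-shadowing

def soft_shadow(point, light_pos, light_radius, occluders, samples=64, rng=None):
    """Fraction of the area light visible from `point`: 1 = fully lit, 0 = umbra."""
    rng = rng or random.Random(0)
    visible = 0
    for _ in range(samples):
        target = [lp + rng.uniform(-light_radius, light_radius) for lp in light_pos]
        to_light = [tc - p for tc, p in zip(target, point)]
        dist = sum(v * v for v in to_light) ** 0.5
        direction = [v / dist for v in to_light]
        if not any(ray_hits_sphere(point, direction, c, r, dist) for c, r in occluders):
            visible += 1
    return visible / samples
```

Each light sample is an independent shadow ray, which is why execution time in the benchmarks above grows roughly linearly with the number of shadow-casting lights regardless of the ray-tracing method.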
APA, Harvard, Vancouver, ISO, and other styles
23

Bäck, Oscar, and Niklas Andersson. "Google AMP and what it can do for mobile applications in terms of rendering speed and user-experience." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17952.

Full text
Abstract:
On today’s web, a web page needs to load fast and have a great user experience in order to be successful. The faster the better. A server-side rendered web page can have a prominent initial load speed, while a client-side rendered web page will have a great interactive user experience. When combining the two, some users with a bad internet connection or a slow device could receive a poor user experience. A new technology called Accelerated Mobile Pages (AMP) was created by Google to help combat this issue. The authors of this report answer whether Google AMP can maintain the user experience while still contributing a fast initial load speed for applications. To do this, we conducted an experiment by creating a Google AMP application and comparing it to another application using a different rendering engine called Pug. We also measured the metrics page load time, speed index and application size for the two applications. To fully understand the AMP format, the authors conducted a literature study to further strengthen their findings. Google AMP is a great technology, but it can still grow to become better. The format could increase the speed of a website; however, the same result could be achieved without AMP if focus were put on writing a fast application. From the experiment, the authors concluded that Google AMP takes considerable time to learn because of its own version of JavaScript through modules. The format also has a different structure than standard HTML. From the tests, a smaller application does not favor the implementation of AMP. We did, however, derive from the experiment and the literature study that bigger applications could benefit from the perks of AMP, and it could therefore be a potential choice for old and new applications.
APA, Harvard, Vancouver, ISO, and other styles
24

Petersson, Tommy, and Marcus Lindeberg. "Performance aspects of layered displacement blending in real time applications." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3542.

Full text
Abstract:
The purpose of this thesis is to investigate performance aspects of layered displacement blending, a technique used to render realistic and transformable objects in real-time rendering systems using the GPU. Layered displacement blending is done by blending layers of color maps and displacement maps together based on values stored in an influence map. In this thesis we construct a theoretical and practical model for layered displacement blending. The model is implemented in a test-bed application to enable the measurement of performance aspects. The implementation is fed input with variations in triangle count, number of subdivisions, texture size and number of layers, and the execution times for these different combinations are recorded and analyzed. The recorded execution times reveal that the number of layers associated with an object has no impact on performance. Further analysis reveals that layered displacement blending is heavily dependent on the triangle count of the input mesh. The results show that layered displacement blending is, with respect to performance, a viable option for representing transformable objects in real-time applications. This thesis provides: a theoretical model for layered displacement blending, an implementation of the model using the GPU, and measurements of that implementation.
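The blending operation described above reduces to a per-vertex weighted sum: each layer contributes its displacement value scaled by the corresponding influence weight, and the vertex moves along its normal by the total. A minimal CPU-side sketch (scalar per-vertex maps instead of GPU textures is a simplifying assumption):

```python
def blend_displacement(positions, normals, layers, influences):
    """Offset each vertex along its normal by the influence-weighted sum of the
    per-layer displacement values (one scalar per vertex per layer)."""
    blended = []
    for i, (pos, nrm) in enumerate(zip(positions, normals)):
        total = sum(layer[i] * weight[i] for layer, weight in zip(layers, influences))
        blended.append(tuple(p + n * total for p, n in zip(pos, nrm)))
    return blended
```

Note that this CPU loop is linear in the number of layers, whereas the thesis found layer count had no measurable impact on the GPU, presumably because the per-layer fetches are cheap relative to the per-triangle work that dominates.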
APA, Harvard, Vancouver, ISO, and other styles
25

Martínez, Ana Laura, and Natali Arvidsson. "Balance Between Performance and Visual Quality in 3D Game Assets : Appropriateness of Assets for Games and Real-Time Rendering." Thesis, Uppsala universitet, Institutionen för speldesign, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413871.

Full text
Abstract:
This thesis explores the balance between visual quality and the performance of a 3D object for computer games. Additionally, it aims to help new 3D artists create assets that are both visually adequate and optimized for real-time rendering. It further investigates the differences in the judgement of visual quality between those who know computer graphics and those who are not familiar with it. Many explanations of 3D art optimization are highly technical and challenging for graphic artists to grasp, and they regularly neglect the effects of optimization on the visual quality of the assets. By testing several 3D assets to measure their render time while using a survey to gather visual assessments, it was discovered that 3D game art is very contextual. No definite or straightforward way was identified to universally balance art quality against render cost, neither when it comes to performance nor to visuals. However, some interesting findings regarding the judgement of visual quality were observed and presented.
APA, Harvard, Vancouver, ISO, and other styles
26

Regmi, Saroj Sharan, and Suyog Man Singh Adhikari. "Network Performance of HTML5 Web Application in Smartphone." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3369.

Full text
Abstract:
Hypertext Markup Language 5 (HTML5), a new standard for HTML enriched with additional features, is expected to eliminate the basic underlying overhead needed by other applications. With the advent of the new extension, HTML5, the web’s basic language is transformed from a simple page-layout language into a rich web-application development language. Furthermore, with the release of HTML5, traditional browsing is expected to change accordingly, and potential users will have an alternative to platform- and OS-dependent native applications. This thesis deals with a readiness assessment of HTML5 with regard to different smartphones: Android and Windows. In order to visualize this, we analyzed different constraints, such as DNS lookup time, page loading time, and memory and CPU consumption, associated with two applications, Flash and HTML5, running on the smartphones. Furthermore, a comparative analysis is performed in different network scenarios, Wi-Fi and 3G, and user experience is estimated based on network parameters. From the experiments and observations taken, we found that Android phones provide better support for HTML5 web applications than Windows mobile devices. Also, the loading time of HTML5 applications is limited by the browser rendering time rather than the content loading time from the network, and is also dependent on the hardware configuration of the device used.
APA, Harvard, Vancouver, ISO, and other styles
27

Najman, Pavel. "Bezsnímkové renderování." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236594.

Full text
Abstract:
The aim of this work is to create a simple raytracer with the IPP library that uses the frameless rendering technique. The first part of this work focuses on the ray-tracing method. The next part analyzes the frameless rendering technique and its adaptive version, with a focus on adaptive sampling. The third part describes the IPP library and the implementation of a simple raytracer using this library. The last part evaluates the speed and rendering quality of the implemented system.
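Frameless rendering, unlike frame-based rendering, re-shades a randomly chosen subset of pixels per time step rather than the whole buffer, so stale and fresh pixels coexist and the image is refined continuously. A minimal sketch of one update tick (the pixel budget and uniform selection are illustrative; the adaptive variant discussed above biases selection toward regions of change):

```python
import random

def frameless_update(framebuffer, shade, budget, rng):
    """One frameless-rendering tick: re-shade only `budget` randomly chosen
    pixels instead of the whole frame. `shade(x, y)` stands in for tracing
    the primary ray through that pixel."""
    height, width = len(framebuffer), len(framebuffer[0])
    for _ in range(budget):
        x, y = rng.randrange(width), rng.randrange(height)
        framebuffer[y][x] = shade(x, y)
    return framebuffer
```

Called in a loop, this trades spatial coherence for temporal responsiveness, which is the behavior the thesis evaluates for both speed and rendering quality.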
APA, Harvard, Vancouver, ISO, and other styles
28

Luttu, Johan, and Oscar Rosquist. "Analysis of the performance difference between server-side and client-side rendering for data visualization in real-time using D3.js." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208655.

Full text
Abstract:
Real-time data visualization has the ability to turn huge amounts of data into understandable graphics and allows immediate action to be taken on emerging trends. This report aims to compare the performance of real-time server-side and client-side rendering when using the data visualization framework D3.js. To perform this comparison, two applications were constructed to measure the performance of both sides. The results point towards client-side data visualization being faster than server-side visualization with D3.js when JSDOM is used on the server. We therefore conclude that, in the same or a very similar environment as the applications in this study, client-side data visualization offers better performance. Server-side rendering still offered possibilities for real-time data visualization. However, due to the number of different influencing factors, we cannot confirm that the results hold up in situations with environments other than the one used in this study.
APA, Harvard, Vancouver, ISO, and other styles
29

Turpeinen, Max. "A Performance Comparison for 3D Crowd Rendering using an Object-Oriented system and Unity DOTS with GPU Instancing on Mobile Devices." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280845.

Full text
Abstract:
This paper aims to address what is a suitable programming paradigm for real-time crowd rendering from a performance standpoint, with smartphones as the target platform. Among the most prominent and intuitive programming paradigms is the object-oriented (OO) one, with data-oriented designs becoming more common in recent years. In this paper, Unity’s GameObject approach, built on an object-oriented foundation, is compared with Unity’s DOTS system using GPU instancing, arranging different test scenarios later built using Xcode on an iPhone 6S and an iPhone XR. The results from the different scenarios and builds are represented through multiple graphs focusing on the obtained frame rate, CPU usage and GPU usage. The DOTS system proved to outperform the object-oriented system in six out of eight scenarios, with the iPhone XR yielding better performance. With DOTS currently under development, several acceleration and enhancement techniques, such as culling or LoD, are yet to be integrated, although they can currently be used by its counterpart. The OO system is more robust with variation, whereas the DOTS system is better suited when the number of characters increases.
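The paradigm difference under test largely comes down to data layout: the OO approach scatters state across objects, while a data-oriented approach keeps each component in a flat array traversed by one tight loop. A toy sketch of the two styles, using a one-component "crowd" purely to illustrate the layouts, not Unity's actual APIs:

```python
class Agent:
    """Object-oriented / array-of-structs style: each crowd member is an
    object carrying its own state and update method."""
    def __init__(self, x, vx):
        self.x, self.vx = x, vx

    def update(self, dt):
        self.x += self.vx * dt

def update_positions(xs, vxs, dt):
    """Data-oriented / struct-of-arrays style: components live in flat arrays
    and one tight loop updates them all -- the layout a DOTS-like system
    exploits for batching, cache locality, and instanced draws."""
    for i in range(len(xs)):
        xs[i] += vxs[i] * dt
    return xs
```

Both produce identical results; the performance gap measured above comes from how the layouts interact with caches, job scheduling, and GPU instancing as the crowd grows.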
APA, Harvard, Vancouver, ISO, and other styles
30

Pei, Mo Mo. "Modeling the performance of many-core programs on GPUs with advanced features." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2592954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kukla, Michal. "Ray-tracing s knihovnou IPP." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237265.

Full text
Abstract:
This master's thesis deals with the design and implementation of ray tracing and path tracing using the IPP library. The theoretical part discusses current trends in accelerating the selected algorithms and the possibilities for parallelization. The design of the ray-tracing and path-tracing algorithms and the form of parallelization are described in the proposal. This part also discusses the implementation of adaptive sampling and importance sampling with the Monte Carlo method to accelerate the path-tracing algorithm. The next part deals with the particular steps in implementing the selected rendering methods with the IPP library. The implementation of a network interface using the Boost library is also discussed. At the end, the implemented methods are subjected to performance and quality tests. The final product of this thesis is a server application capable of handling multiple connections, which provides visualization, and a client application, which implements ray tracing and path tracing.
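The importance-sampling acceleration mentioned above rests on the basic Monte Carlo identity: averaging f(x)/p(x) over samples x drawn from density p estimates the integral of f, and choosing p to resemble f lowers the variance. A self-contained numeric sketch (the integrand, densities, and sample counts are arbitrary illustrations, not the thesis's renderer):

```python
import random

def mc_estimate(f, sample, pdf, n, rng):
    """Monte Carlo estimator of an integral: average f(x)/p(x) over x ~ p.
    Concentrating samples where f is large (importance sampling) reduces
    the variance of the estimate for the same sample budget."""
    total = 0.0
    for _ in range(n):
        x = sample(rng)
        total += f(x) / pdf(x)
    return total / n

f = lambda x: 3.0 * x * x  # integral of f over [0, 1] is exactly 1

# Uniform sampling: p(x) = 1 (1 - random() keeps x in (0, 1], away from pdf zeros).
uniform = mc_estimate(f, lambda r: 1.0 - r.random(), lambda x: 1.0, 20000, random.Random(1))
# Importance sampling with p(x) = 2x, i.e. x = sqrt(u) for uniform u.
importance = mc_estimate(f, lambda r: (1.0 - r.random()) ** 0.5, lambda x: 2.0 * x, 20000, random.Random(2))
```

In a path tracer the same identity is applied per bounce, with p chosen to follow the BRDF or the lights; adaptive sampling then spends more such estimates on noisy pixels.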
APA, Harvard, Vancouver, ISO, and other styles
32

Persson, Gustav, and Jonathan Udd. "Ray Tracing on GPU : Performance comparison between the CPU and the Compute Shader with DirectX 11." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2514.

Full text
Abstract:
The game industry has always looked for rendering techniques that make games as good-looking and realistic as possible. The common approach is to use triangles built up by vertices and apply many different techniques to make the result look as good as possible. When triangles are used to draw objects, there are always edges, and those edges often make the objects look less realistic than desired. To reduce these visible edges the number of triangles for an object has to be increased, but with more triangles more processing power is needed from the graphics card. Another approach to rendering is ray tracing, which can render an extremely photorealistic image, but at the cost of unbearably low performance if it were used in a real-time application. The reason ray tracing is so slow is the massive amount of calculations that need to be made. In DirectX 11 a few new shaders were introduced, one of them being the compute shader, which allows computations on the graphics card that are not bound to the rendering pipeline. The compute shader can use the hundreds of cores that the graphics card has and is therefore well suited for a ray-tracing algorithm. One application is used to test the hypothesis; a flag defines whether the application runs on the CPU or the GPU, and the same algorithm is used in both versions. Three tests were run on each processing unit to confirm the hypothesis. Three more tests were run on the GPU to see how its performance scaled depending on the number of rendered objects. The tests showed throughout that the compute shader performs considerably better than the CPU when running our ray-tracing algorithm.
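What makes ray tracing map so well onto the compute shader's many cores is that every pixel's ray can be traced independently of every other pixel. A toy CPU sketch of that structure, where the single hard-coded sphere and the implicit pinhole camera are illustrative assumptions:

```python
def trace_pixel(x, y, width, height, center=(0.0, 0.0, 3.0), radius=0.5):
    """Shade one pixel independently of all others; this independence is
    exactly what lets a compute-shader dispatch run thousands of rays at once."""
    # view ray from the origin through the pixel's position on a z = 1 plane
    dx = (x + 0.5) / width * 2.0 - 1.0
    dy = (y + 0.5) / height * 2.0 - 1.0
    d = (dx, dy, 1.0)
    # ray/sphere test: solve |t*d - center|^2 = radius^2 for t
    a = sum(v * v for v in d)
    b = -2.0 * sum(v * c for v, c in zip(d, center))
    c = sum(v * v for v in center) - radius * radius
    return 1.0 if b * b - 4.0 * a * c >= 0.0 else 0.0  # white on hit

def render(width, height):
    # each pixel is an independent task; a GPU dispatch replaces these loops
    return [[trace_pixel(x, y, width, height) for x in range(width)]
            for y in range(height)]
```

The two nested loops are the part a compute shader replaces with one thread per pixel, which is where the GPU's advantage in the benchmarks above comes from.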
APA, Harvard, Vancouver, ISO, and other styles
33

Waxstein, Christine Michele. "Digital Illustration: The Costume Designer’s Process For East Tennessee State University’s Spring Dance Concert 2012." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etd/1504.

Full text
Abstract:
This paper's objective is to document the research and developmental processes of creating East Tennessee State University's Spring Dance Concert 2012 costume designs and renderings. This thesis describes design creation from research stage to idea formulation to the conception of costumes using inspirational images, illustrations, and performance photos and videos. The show was a challenging undertaking because it involved the collaboration of many in a compressed timeframe: 1 artistic director, 9 choreographers, 20 dances, 46 performers, 10 lighting designers, 1 costume designer, and 3 weeks to put it all together. Incorporating digital technology into the rendering process saved time, expenses, and helped clarify the designer's choices. This paper reflects the 2-year study of incorporating digital technology into the rendering process, culminating in the costume design for the Spring Dance Concert 2012.
APA, Harvard, Vancouver, ISO, and other styles
34

Sarton, Jonathan. "Visualisations interactives haute-performance de données volumiques massives : une approche out-of-core multi-résolution basée GPUs." Thesis, Reims, 2018. http://www.theses.fr/2018REIMS022/document.

Full text
Abstract:
This thesis is part of the PIA2 project 3DNeuroSecure, which aims to provide a collaborative system for interactive multi-scale navigation within visual big data, with ultra-high-resolution (tera-voxel), potentially multimodal, 3D biomedical imaging as the application framework. In addition, the system must be able to integrate a variety of processing steps and/or annotations (tags) through remote HPC resources. All of these operations must be possible in an out-of-core context. Because of the size of the visual big data, the location of acquisition has to be decoupled from that of storage and high-performance computation, and from that of data manipulation (various connected devices, mobile or not: smartphone, PC, large display wall, virtual reality room, etc.). The streamed visualization will be adapted to the user's device in terms of both resolution (Full HD to gigapixel) and 3D rendering (classic rendering on 2D screens, stereoscopic with glasses, or autostereoscopic without glasses). All these developments, carried out by the CReSTIC with the support of MaSCA (Maison de la Simulation de Champagne-Ardenne), can therefore be summarised as: the definition and implementation of data structures adapted to the out-of-core visualization of the targeted visual big data; the adaptation of the partners' specific processing, such as interactive 3D rendering, to these new data structures; and the technical architecture choices for HPC and for virtualization of the navigation software, to take advantage of "ROMEO", the local datacenter. The auto-/stereoscopic rendering, with or without glasses, will be operated within the MINT software of the Université de Reims Champagne-Ardenne.
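The out-of-core, multi-resolution tiled (bricked) structures this thesis targets can be illustrated with a small sketch: map a voxel coordinate to a brick at a given pyramid level, and keep only recently used bricks resident. The brick size, names, and LRU policy below are illustrative assumptions, not details taken from the thesis:

```python
BRICK = 32  # bricks of 32^3 voxels (assumed size)

def brick_address(x, y, z, level):
    """Map a full-resolution voxel coordinate to (brick index, offset)
    at a given pyramid level (level 0 = full resolution)."""
    # Downsample the coordinate to the requested level.
    lx, ly, lz = x >> level, y >> level, z >> level
    brick = (lx // BRICK, ly // BRICK, lz // BRICK)
    offset = (lx % BRICK, ly % BRICK, lz % BRICK)
    return brick, offset

class BrickCache:
    """Tiny LRU cache standing in for the GPU-side brick cache:
    only the bricks touched by the current view stay resident."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = []  # most recently used last

    def request(self, brick):
        if brick in self.resident:
            self.resident.remove(brick)   # refresh LRU position
        elif len(self.resident) >= self.capacity:
            self.resident.pop(0)          # evict least recently used
        self.resident.append(brick)
        return brick
```

For example, at level 1 the voxel (100, 40, 7) falls in brick (1, 0, 0) at offset (18, 20, 3); a view that revisits a brick keeps it resident while untouched bricks are evicted.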
APA, Harvard, Vancouver, ISO, and other styles
35

Costa, Fernanda Nepomuceno. "Processo de produção de revestimento de fachada de argamassa : problemas e oportunidades de melhoria." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/10183.

Full text
Abstract:
A argamassa é um material largamente utilizado na construção civil, desempenhando diversas funções. Entretanto, muitas falhas vêm sendo observadas nos revestimentos de argamassa, principalmente nas fachadas. O aparecimento de manifestações patológicas em um edifício compromete a sua estética e o conforto ambiental, ocasionando uma desvalorização do mesmo perante o mercado, aumento insatisfação dos usuários e também elevados gastos com reparos e manutenção. Outro problema é a grande incidência de perdas de materiais, que resultam em prejuízos financeiros às empresas, além de acarretar a geração de entulho, que muitas vezes não recebe o devido tratamento e disposição final, tendo um impacto negativo também no meio ambiente. O objetivo dessa dissertação é identificar as causas do baixo desempenho de revestimentos de fachada de argamassa em edifícios e propor diretrizes para melhoria deste processo. Foram desenvolvidos oito estudos de caso em empresas construtoras de Porto Alegre - RS. Para a coleta de dados foram utilizadas várias ferramentas de descrição de processos e de avaliação do seu desempenho.Os principais resultados indicaram uma série de problemas no processo: alta variabilidade nas espessuras de revestimento, nos traços utilizados durante a confecção das argamassas e nos métodos de produção, inclusive dentro de uma mesma obra, elevadas perdas de materiais e de mão-de-obra, baixa produtividade, decorrente do inadequado dimensionamento das equipes de produção e de problemas no sistema de movimentação e armazenamento de materiais, e manifestações patológicas em revestimentos recentemente concluídos. Ao final do trabalho, é proposto um método de avaliação do processo de revestimento de argamassa em fachadas, são identificadas e descritas boas práticas introduzidas pelas empresas e são sugeridas melhorias no processo estudado.
Mortar is a widely used material with various functions in the construction industry. Nevertheless, many failures have been observed in rendering, mainly in façades. The incidence of building pathologies negatively affects aesthetics and environmental comfort, and may result in a reduction of the building's market value, a low degree of user satisfaction, and also high repair and maintenance costs. Another problem is the high incidence of material waste, which causes financial losses, as well as generating site waste that often does not receive adequate treatment and final disposal, thus also having a negative impact on the environment. The main objective of this dissertation is to identify the main causes of the low performance of external mortar rendering in buildings and to propose guidelines for improving this process. Case studies were carried out in eight different construction companies from Porto Alegre, RS. Several data collection tools were used for describing the main processes involved and assessing their performance. The investigation pointed out several problems in this process: high variability in rendering thickness, in mortar proportions, and in production methods, even within the same construction site; a high percentage of material waste and low labour productivity, due to the inadequate definition of production teams and drawbacks in the material transportation and storage system; and building pathologies in recently concluded rendering. At the end, this study proposes a method for evaluating the process of mortar rendering on façades, identifies and describes best practices introduced by the companies, and suggests improvements to that process.
APA, Harvard, Vancouver, ISO, and other styles
36

Lind, Fredrik, and Escalante Andrés Diaz. "Maximizing performance gain of Variable Rate Shading tier 2 while maintaining image quality : Using post processing effects to mask image degradation." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21868.

Full text
Abstract:
Background. Performance optimization is of great importance for games, as performance constrains the possible content and complexity of their systems. Modern games support high-resolution rendering, but higher resolutions require more pixels to be computed, and solutions are needed to reduce this workload. Currently used methods include uniformly lowering the shading rate across the whole screen to reduce the number of pixels needing computation. Variable Rate Shading is a new hardware-supported technique with several functionality tiers. Tier 1 is similar to previous methods in that it lowers the shading rate for the whole screen. Tier 2 supports screen-space image shading, with which different shading rates can be set across the screen, giving developers the choice of where and when to apply specific shading rates. Objectives. The aim of this thesis is to examine how close Variable Rate Shading tier 2 screen-space shading can come to the performance gains of tier 1 while trying to maintain acceptable image quality with the help of commonly used post-processing effects. Methods. A lightweight scene is set up, and the Variable Rate Shading tier 2 methods are tuned to an acceptable image quality as a baseline. Performance is evaluated by measuring the times of the specific passes required by and affected by Variable Rate Shading. Image quality is measured by capturing sequences of images with Variable Rate Shading off as a reference, then with Variable Rate Shading tier 1 and several tier 2 methods, and comparing them with the Structural Similarity Index. Results. The highest measured performance gain from tier 2 was 28.0%. This result came from using edge detection to create the shading rate image at 3840x2160 resolution. It corresponds to 36.7% of the performance gain of tier 1, but with better image quality: an SSIM value of 0.960 against tier 1's 0.802, corresponding to good and poor image quality respectively.
Conclusions. Variable Rate Shading tier 2 shows great potential for increasing performance while maintaining image quality, especially with edge detection. Post-processing effects are effective at maintaining good image quality. Performance gains also scale well, as they increase with higher resolutions.
Bakgrund. Prestandaoptimering är väldigt viktigt för spel eftersom det kan begränsa möjligheterna av innehåll eller komplexitet av system. Moderna spel stödjer rendering för höga upplösningar men höga upplösningar kräver beräkningar för mera pixlar och lösningar behövs för att minska arbetsbördan. Metoder som för närvarande används omfattar bland annat enhetlig sänkning av skuggningsförhållande över hela skärmen för att minska antalet pixlar som behöver beräkningar. Variable Rate Shading är en ny hårdvarustödd teknik med flera funktionalitetsnivåer. Nivå 1 är likt tidigare metoder eftersom skuggningsförhållandet enhetligt sänks över hela skärmen. Nivå 2 stödjer skärmrymdsbildskuggning. Med skärmrymdsbildskuggning kan skuggningsförhållanden varieras utspritt över skärmen vilket ger utvecklare valmöjligheter att bestämma var och när specifika skuggningsförhållanden ska sättas. Syfte. Syftet med examensarbetet är att undersöka hur nära Variable Rate Shading nivå 2 skärmrymdsbildskuggning kan komma prestandavinsterna av Variable Rate Shading nivå 1 samtidigt som bildkvaliteten behålls acceptabel med hjälp av vanligt använda efterbearbetningseffekter. Metod. En simpel scen skapades och metoder för Variable Rate Shading nivå 2 sattes till en acceptabel bildkvalitet som utgångspunkt. Utvärdering av prestanda gjordes genom att mäta tiderna för specifika pass som behövdes för och påverkades av Variable Rate Shading. Bildkvalitet mättes genom att spara bildsekvenser utan Variable Rate Shading på som referensbilder, sedan med Variable Rate Shading nivå 1 och flera metoder med nivå 2 för att jämföras med Structural Similarity Index. Resultat. Högsta uppmätta prestandavinsten från nivå 2 var 28.0%. Resultatet kom ifrån kantdetektering för skapandet av skuggningsförhållandebilden, med upplösningen 3840x2160. 
Det motsvarar 36.7% av prestandavinsten för nivå 1 men med mycket bättre bildkvalitet med SSIM-värde på 0.960 gentemot 0.802 för nivå 1, vilka motsvarar bra och dålig bildkvalitet. Slutsatser. Variable Rate Shading nivå 2 visar stor potential i prestandavinster med bibehållen bildkvalitet, speciellt med kantdetektering. Efterbearbetningseffekter är effektiva på att upprätthålla en bra bildkvalitet. Prestandavinster skalar även bra då de ökar vid högre upplösningar.
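The image-quality metric used in this study, the Structural Similarity Index (SSIM), can be sketched in its simplest global-statistics form. Real SSIM implementations compute the statistics over local windows and average the result; the global variant and the flat-list input below are illustrative simplifications, with the commonly used default constants k1, k2 and dynamic range L:

```python
def ssim_global(x, y, L=255, k1=0.01, k2=0.03):
    """Global-statistics SSIM between two equally sized grayscale
    images given as flat lists of pixel values in [0, L].
    1.0 means identical; lower values mean more degradation."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2   # stabilizing constants
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

A reference frame compared with itself scores 1.0, while a degraded frame scores lower, mirroring the thesis's comparison of 0.960 (tier 2) against 0.802 (tier 1).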
APA, Harvard, Vancouver, ISO, and other styles
37

Göransson, Jonas Alexander. "A TEMPORAL STABLE DISTANCE TO EDGE ANTI-ALIASING TECHNIQUE FOR GCN ARCHITECTURE." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10456.

Full text
Abstract:
Context. Aliasing artifacts are a persistent problem in both the game industry and the movie industry. With the GCN (Graphics Core Next) architecture used on both new-generation consoles, the Xbox One and PlayStation 4, a unified anti-aliasing solution can be constructed with high performance, temporally stable edges and satisfying visual fidelity. Objective. This thesis aims to implement several prototypes which utilize the GCN architecture to solve aliasing artifacts such as temporal instability. Method. Through performance measurements, a survey and an experiment on the constructed prototypes and current state-of-the-art solutions, this thesis both establishes a benchmark between the given state-of-the-art solutions for the industry and at the same time evaluates the new solutions presented here. Result. With the potential of being the fastest anti-aliasing solution in the field, the prototype brings not only high performance, but also very temporally stable edges and satisfying visual quality. Conclusion. If not used as a standalone solution, the prototype can be decoupled from GCN-specific features and serve as a very suitable complement to Multi-Sample Anti-Aliasing, which cannot handle alpha-clipped edges.
APA, Harvard, Vancouver, ISO, and other styles
38

Asano, Naira Ery. "Tecnologia construtiva de revestimento externo de argamassa com projeção contínua." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3153/tde-24062016-151136/.

Full text
Abstract:
Os métodos construtivos de revestimento externo produzidos com argamassa vêm evoluindo ao longo do tempo, apresentando como maior mudança a substituição da aplicação manual pela projeção mecânica. Atualmente destaca-se no mercado o sistema de execução de revestimento externo com argamassa industrializada e projeção mecânica contínua com bombas helicoidais. Por se tratar de uma tecnologia ainda não muito utilizada pelas construtoras brasileiras, faltam dados confiáveis acerca de seu potencial de ganho de produtividade, redução de perdas, diminuição de contingente de mão de obra, exigências de infraestrutura para aplicação e custos envolvidos. Sem parâmetros confiáveis, adotar a tecnologia significa assumir um nível de risco elevado e isto dificulta a tomada de decisão por parte das construtoras e, por consequência, dificulta-se a evolução tecnológica. Buscando contribuir para o necessário avanço nas tecnologias de produção de revestimentos de edifícios, o objetivo desta pesquisa é estabelecer parâmetros em relação à tecnologia de produção de revestimentos de fachada que empregam argamassa com projeção contínua. Para tanto, buscou-se informações em referências como teses, dissertações, textos técnicos, normas nacionais, dentre outras, bem como, acompanhou-se e avaliou-se os resultados da implantação de um método construtivo de revestimento de argamassa com projeção mecânica contínua em uma construtora de São Paulo. Foram realizados um protótipo e um piloto que contribuíram para o desenvolvimento da tecnologia por meio de apresentação de soluções para os problemas encontrados, do levantamento de melhores práticas e de dados para o cálculo de índices de produtividade e perda. Buscou-se, portanto, a consolidação da tecnologia de projeção contínua na construtora anteriormente mencionada e no mercado em geral.
Constructive methods for external rendering produced with mortar have evolved over time, the greatest change being the replacement of manual application by mechanical projection. The currently most prominent system in the market is the external rendering execution system with industrialized mortar and continuous mechanical projection with helical pumps. Since this technology is not yet widely used by Brazilian construction companies, there is a lack of reliable data regarding its productivity gain potential, loss reduction, reduction of labour requirements, infrastructure required for application, and costs involved. Without reliable parameters, the decision to implement this technology is highly risky, which discourages construction companies from adopting it and, therefore, makes its technological evolution harder to achieve. Aiming to contribute to the necessary advancement in rendering production technology, this research has the objective of establishing parameters for the rendering technology that uses industrialized mortar with continuous projection. With this purpose, specialized literature such as theses, dissertations and scientific papers, as well as national standards, was used as a source of reference. In addition, the research included the monitoring and assessment of the results from the implementation of a rendering execution method with continuous mechanical projection in a construction company in São Paulo. A prototype and a pilot were built, and they contributed to the development of this technology by presenting solutions to the problems identified, mapping best practices, and gathering data that allowed the calculation of productivity and loss rates. The goal was thus to consolidate the continuous projection technology in the above-mentioned construction company as well as in the market more broadly.
APA, Harvard, Vancouver, ISO, and other styles
39

Renault, Lenny. "Neural audio synthesis of realistic piano performances." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS196.

Full text
Abstract:
Musicien et instrument forment un duo central de l'expérience musicale.Indissociables, ils sont les acteurs de la performance musicale, transformant une composition en une expérience auditive émotionnelle. Pour cela, l'instrument est un objet sonore que le musicien contrôle pour retranscrire et partager sa compréhension d'une œuvre musicale. Accéder aux sonorités d'un tel instrument, souvent issus de facture poussée, et à sa maîtrise de jeu, requiert des ressources limitant l'exploration créative des compositeurs. Cette thèse explore l'utilisation des réseaux de neurones profonds pour reproduire les subtilités introduites par le jeu du musicien et par le son de l'instrument, rendant la musique réaliste et vivante. En se focalisant sur la musique pour piano, le travail réalisé a donné lieu à un modèle de synthèse sonore pour piano ainsi qu'à un modèle de rendu de performances expressives. DDSP-Piano, le modèle de synthèse de piano, est construit sur l'approche hybride de Traitement du Signal Différentiable (DDSP) permettant d'inclure des outils de traitement du signal traditionnel dans un modèle d'apprentissage profond. Le modèle prend des performances symboliques en entrée, et inclut explicitement des connaissance spécifiques à l'instrument, telles que l'inharmonicité, l'accordage et la polyphonie. Cette approche modulaire, légère et interprétable synthétise des sons d'une qualité réaliste tout en séparant les différents éléments constituant le son du piano. Quant au modèle de rendu de performance, l'approche proposée permet de transformer des compositions MIDI en interprétations expressives symboliques. En particulier, grâce à un entraînement adverse non-supervisé, elle dénote des travaux précédents en ne s'appuyant pas sur des paires de partitions et d'interprétations alignées pour reproduire des qualités expressives. 
La combinaison des deux modèles de synthèse sonore et de rendu de performance permettrait de synthétiser des interprétations expressives audio de partitions, tout en donnant la possibilité de modifier, dans le domaine symbolique, l'interprétation générée
Musician and instrument make up a central duo in the musical experience. Inseparable, they are the key actors of the musical performance, transforming a composition into an emotional auditory experience. To this end, the instrument is a sound device that the musician controls to transcribe and share their understanding of a musical work. Access to the sound of such instruments, often the result of advanced craftsmanship, and to the mastery of playing them, can require extensive resources that limit the creative exploration of composers. This thesis explores the use of deep neural networks to reproduce the subtleties introduced by the musician's playing and the sound of the instrument, making the music realistic and alive. Focusing on piano music, the conducted work has led to a sound synthesis model for the piano, as well as an expressive performance rendering model. DDSP-Piano, the piano synthesis model, is built upon the hybrid approach of Differentiable Digital Signal Processing (DDSP), which enables the inclusion of traditional signal processing tools into a deep learning model. The model takes symbolic performances as input and explicitly includes instrument-specific knowledge, such as inharmonicity, tuning, and polyphony. This modular, lightweight, and interpretable approach synthesizes sounds of realistic quality while separating the various components that make up the piano sound. As for the performance rendering model, the proposed approach enables the transformation of MIDI compositions into symbolic expressive interpretations. In particular, thanks to unsupervised adversarial training, it stands out from previous works by not relying on aligned score-performance training pairs to reproduce expressive qualities. The combination of the sound synthesis and performance rendering models would enable the synthesis of expressive audio interpretations of scores, while enabling modification of the generated interpretations in the symbolic domain.
APA, Harvard, Vancouver, ISO, and other styles
40

Hao, Priscilla Ruth. "An Interpretation of Modern: Costume Designs for An Adaptation of Shakespeare's The Two Gentlemen of Verona." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1486.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Vanek, Juraj. "Měření výkonnosti grafického akcelerátoru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237240.

Full text
Abstract:
This thesis deals with the capabilities and functions of modern graphics accelerators and with measuring their performance under the OpenGL interface. Widely used algorithms for real-time scene rendering are employed. It focuses on how to test every part of the accelerator's graphics pipeline, as well as on measuring performance when rendering advanced effects and the theoretical speed of general-purpose computation on the graphics processor. The testing is realized by implementing multiple test series and subsequently evaluating them. The final application allows test parameters to be set and outputs a score by which it is possible to compare the performance of different accelerators.
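The scoring idea this abstract describes, timing each test series and reducing the results to a single comparable number, can be sketched as follows. The function names and the weighted-average aggregation are illustrative assumptions, not the thesis's actual scoring formula:

```python
import time

def run_series(render_frame, frames=100):
    """Run one benchmark series: call the rendering routine a fixed
    number of times and return the achieved average frames per second."""
    start = time.perf_counter()
    for _ in range(frames):
        render_frame()
    elapsed = max(time.perf_counter() - start, 1e-9)  # guard against zero
    return frames / elapsed

def total_score(fps_results, weights=None):
    """Aggregate per-series FPS values into one score (weighted mean),
    so different test series can contribute with different importance."""
    weights = weights or [1.0] * len(fps_results)
    return sum(f * w for f, w in zip(fps_results, weights)) / sum(weights)
```

In a real benchmark, `render_frame` would issue OpenGL draw calls and swap buffers; here any callable stands in for the rendered workload.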
APA, Harvard, Vancouver, ISO, and other styles
42

Rabello, Guilherme Picanço. "Processamento remoto em solução para interação com ambientes arquitetônicos 3D através de tablets." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/7840.

Full text
Abstract:
Made available in DSpace on 2016-10-13T20:19:14Z (GMT). No. of bitstreams: 1 DissGPR.pdf: 4260232 bytes, checksum: 502932c8b3d277cbd1c02a8b1b48b913 (MD5) Previous issue date: 2016-03-07
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The evolution of computer graphics, supported by advances in computing, enables increasingly sophisticated visual creations and benefits various areas such as art, architecture, product design, visual design, games, film, engineering, GIS and medicine. However, the processing power needed by applications that require a high degree of realism restricts their use to a small portion of the computing devices in use by the population. Recently, the evolution of mobile network technologies has brought a possible solution to the inability of most mobile devices to handle interactive applications with a high degree of realism. The solution is based on the remote processing of applications on servers with high processing capacity, with the resulting images delivered to mobile devices by converting them into video and transmitting them via streaming. This master's dissertation explores a remote processing solution for interacting with architectural environments through tablets. Emphasis was placed on: maintaining a high degree of realism, which is difficult to achieve due to the real-time processing requirements of interactive applications; adapting and combining tools available in the market; and mitigating latency through different forms of user interaction. The evaluation of the results was carried out in training sessions, interaction sessions and interviews with volunteers.
A evolução da Computação Gráfica apoiada no desenvolvimento computacional possibilita criações visuais cada vez mais avançadas promovendo benefícios a diversas áreas como: Artes, Arquitetura, Design de produto, Design visual, Jogos, Cinema, Engenharia, Geoprocessamento e Medicina. Porém, a capacidade de processamento exigida por aplicações que requerem elevado grau de realismo restringe o uso dessas aplicações a uma parcela pequena de toda a gama de dispositivos computacionais instalados e de uso da população. Recentemente, a evolução das tecnologias de rede móvel trouxe uma possível solução à incapacidade da maioria dos dispositivos móveis em lidar com aplicações interativas e de elevado grau de realismo. A solução baseia-se no processamento remoto das aplicações, em servidores com elevada capacidade de processamento, e na entrega das imagens produzidas para os dispositivos móveis feita através de sua conversão em vídeo e transmissão por streaming. Este trabalho explora uma solução de processamento remoto para interação com ambientes arquitetônicos através de tablets. Deu-se ênfase às questões de manutenção de elevado grau de realismo, dificultada pelas exigências de processamento em tempo real de aplicações interativas, à adaptação e combinação de ferramental disponível no mercado e à mitigação da latência através de recursos de interface com usuário. A avaliação dos resultados foi realizada em sessões de treinamento, interação e entrevistas com usuários voluntários.
APA, Harvard, Vancouver, ISO, and other styles
43

Kimer, Tomáš. "Benchmark pro zařízení s podporou OpenGL ES 3.0." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236024.

Full text
Abstract:
This thesis deals with the development of a benchmark application for OpenGL ES 3.0 devices using realistic real-time rendering of 3D scenes. The first part covers the history and new features of the OpenGL ES 3.0 graphics library. The next part briefly describes selected algorithms for realistic real-time rendering of 3D scenes which can be implemented using the new features of the discussed library. The design of the benchmark application is covered next, including the design of an online result database containing detailed device specifications. The last part covers the implementation on the Android and Windows platforms and the testing on mobile devices after publishing the application on Google Play. Finally, the results and possibilities of further development are discussed.
APA, Harvard, Vancouver, ISO, and other styles
44

Godoy, Arthur Pedro de. "Uma arquitetura para promover o uso de dispositivos com limitações computacionais na interação com mídias sintetizadas remotamente." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/569.

Full text
Abstract:
Made available in DSpace on 2016-06-02T19:06:14Z (GMT). No. of bitstreams: 1 6118.pdf: 3001954 bytes, checksum: 5643531722be67da01c9dfa9c79f4ecb (MD5) Previous issue date: 2014-05-22
Financiadora de Estudos e Projetos
Computer graphics and virtual reality technologies allow audiovisual experiences with a high level of realism, such as those achieved in video games and movies. The synthesis of media with a high degree of audiovisual realism demands specialized systems with high-performance computing capacity. The profusion over the last decade of personal consumer computing electronics, such as mobile and portable devices, encourages the interest in making applications with a high level of visual realism possible on these devices as well. However, due to the intrinsic limitations of the computing capability of mobile and portable devices, stemming from their physical characteristics, such as dimensions, electrical consumption and heat dissipation, it is not possible to directly process media with the same level of visual realism found in specialized systems. This research explores a traditional solution: remote interaction with these applications. Challenges resulting from this approach are identified and studied. Solutions are proposed, analyzed and formalized in a reference architecture. As a proof of concept, the architecture is used in the development of applications in two scenarios: one on a local network with a medium characterized by low tolerance to delay in interaction response, and another with a high-tolerance medium over the Internet. The flexibility of the proposed architecture is also shown by integrating one of the developed applications into a multimedia context.
As tecnologias de computação gráfica e realidade virtual permitem experiências visuais e auditivas com alto nível de realismo sensorial, como alcançado em jogos eletrônicos e no cinema. A sintetização de mídia com alto grau de realismo visual e auditivo demanda sistemas especializados com capacidade computacional de alto desempenho. A profusão na última década de eletrônicos de consumo computacionais pessoais, como dispositivos móveis e portáteis, desperta o interesse em tornar possível, também nesses dispositivos, aplicações visuais com alto nível de realismo. Devido às limitações da capacidade computacional intrínsecas dos dispositivos móveis, decorrentes de suas características físicas, como dimensões, restrições de consumo e dissipação térmica, não é possível a execução direta de mídia interativa com o mesmo grau de realismo encontrado em sistemas especializados. Este trabalho sugere a exploração de uma solução tradicional: o processamento e a interação remota com esse tipo de mídia. Desafios decorrentes dessa solução são identificados e estudados. Soluções são propostas, analisadas e formalizadas em uma arquitetura de referência. Como prova de conceito da arquitetura explorou-se o desenvolvimento de aplicações em dois cenários: um em rede local com mídia de baixa tolerância a atrasos no tempo de resposta às interações e outra através da Internet com mídia de maior tolerância a atrasos. Mostrouse também a flexibilidade da arquitetura desenvolvida com a integração do componente cliente a um sistema com múltiplas mídias, em que a mídia remota relaciona-se integralmente com o contexto multimídia.
APA, Harvard, Vancouver, ISO, and other styles
45

Kus, Hülya. "Long-term performance of water repellants on rendered autoclaved aerated concrete." Doctoral thesis, KTH, Civil and Architectural Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3407.

Full text
Abstract:
Many failures of external walls made of porous building materials are caused by excessive moisture content, particularly after driving rain and under long durations of moist conditions. Lack of sufficient protection against exposure conditions is one of the reasons why external walls prematurely demonstrate failures, i.e. properties and performance above/below critical levels. Silicon-based water repellants are increasingly used in order to improve the performance of both old and new buildings. Water repellants are expected to prolong the service life and improve the durability of wall components by preventing or minimising water ingress into the structure and thus delaying the deteriorating effects of the atmosphere. To date, various kinds of water repellants have been developed. However, only limited research has been carried out, particularly on long-term field exposure testing. Existing research is mainly focused on the performance of surface treatments of concrete structures and the protection of historical buildings built of stone, brick and wood, and is primarily based on short-term laboratory testing. The aim of this research work is to study the long-term performance, degradation processes and ageing characteristics of rendered autoclaved aerated concrete (AAC) with and without water repellants. Investigations are carried out by physical and chemical analysis of fresh samples, samples naturally weathered by long-term field exposure and samples artificially aged by short-term accelerated laboratory tests. Two different applications of water repellants are employed: impregnation of the rendering surface with an aqueous product, and as an additive in powder form mixed into the fresh rendering mortar. Continuous moisture and temperature monitoring of naturally exposed test samples is also included in the study. Wetcorr sensors and resistance-type nail electrodes are used to measure the surface moisture and the moisture content in the material, respectively.
This thesis describes the experimental set-up and presents the results from site monitoring and laboratory tests of unexposed, naturally and artificially exposed samples (freeze-thaw and UV+water). The results from the continuous moisture measurements are compared with the results obtained from the full-scale test cabin built within the EUREKA project E 2116 DurAAC. The test cabin has the same basic measurement instruments for continuous monitoring of moisture and temperature. An attempt has been made to develop methods for the long-term performance assessment of water repellants to be used in service life prediction. The combination of data obtained from the field measurements with data obtained from the laboratory tests and analysis may also meet the practical needs of the end-users.

APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Peng-Chih, and 王鵬誌. "High-Performance Volume Rendering Using MPI / OpenMP." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/ybre93.

Full text
Abstract:
Master's
Soochow University
Department of Computer Science
93
Volume rendering demands a great deal of computation time and memory, and overcoming these two obstacles to accelerate rendering is a major current challenge. A common approach to handling such huge data sets is to combine the computing power of a cluster of workstations. This first requires a parallelizable algorithm, which can then be implemented using industry-standard parallelization techniques. The hybrid MPI / OpenMP model provides a highly efficient parallelization strategy for SMP cluster environments, while also allowing the characteristics of the two different standards to be exploited within a single SMP node to reach maximum performance; it is an emerging programming trend for SMP clusters. Shear-warp factorization is currently one of the most widely applied and efficient volume rendering algorithms, and can already achieve a rather high image update rate. A key property of the algorithm is that voxel scanlines in the sheared volume are aligned with pixel scanlines in the intermediate image, which means the volume and the image data structures can both be traversed in scanline order. This property allows a scanline-based coherent data structure to be used directly, for instance a run-length encoded (RLE) representation. Such a data structure contains two kinds of runs, transparent and opaque, which allows transparent voxels to be skipped quickly during rendering so that only the opaque portions of the volume need to be processed.
For the RLE component, this thesis proposes an improved RLE algorithm that encodes fewer runs using less memory, and also achieves a modest reduction in rendering time.
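The transparent/opaque run structure described above can be illustrated with a short sketch (function names and the opacity test are hypothetical; this is a simplification of the shear-warp RLE idea, not the thesis's actual implementation):

```python
# Run-length encode a voxel scanline into (is_opaque, length) runs,
# so transparent spans can be skipped wholesale during compositing.
def rle_encode(scanline, threshold=0.0):
    runs = []
    for voxel in scanline:
        opaque = voxel > threshold
        if runs and runs[-1][0] == opaque:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([opaque, 1])  # start a new run
    return runs

def composite(scanline, runs):
    # Walk the runs; only opaque runs touch the voxel data.
    acc, pos = 0.0, 0
    for opaque, length in runs:
        if opaque:
            for v in scanline[pos:pos + length]:
                acc += v              # stand-in for real alpha compositing
        pos += length                 # transparent runs are skipped in O(1)
    return acc

line = [0.0, 0.0, 0.3, 0.5, 0.0, 0.0, 0.0, 0.9]
print(rle_encode(line))   # [[False, 2], [True, 2], [False, 3], [True, 1]]
```

During rendering, only two of the four runs are ever visited voxel-by-voxel; the two transparent runs cost a single pointer advance each, which is where the speedup of an RLE volume representation comes from.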
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Chun-Sheng, and 李昀陞. "An Analysis of Web Image Rendering Performance." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/euzhhp.

Full text
Abstract:
Master's
National Taiwan Ocean University
Department of Computer Science and Engineering
104
Most current Internet services are delivered via mobile web sites, so the performance of web site loading and rendering forms the user's first impression of quality. While existing research focuses on the performance of network transmission and script running time, we believe that improper use and embedding of images can be a key factor in web rendering performance. It can be even more critical for web sites adopting responsive designs. In this thesis, we analyze the performance of image rendering in browsers and discuss possible approaches to improve it. We first design and implement experiments to collect and measure how images are used and embedded in web sites. We then measure the performance overhead of image processing on mainstream operating systems and devices. Our measurement results show that more than 40% of embedded web images have to be scaled before they can be rendered. With improper implementation combinations, scaling a single image may take up to 0.25 s. Last, based on the pixel-ratio property available on most mobile devices, we propose an effective approach to improve image processing performance without degrading the overall user-perceived image quality.
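The pixel-ratio idea mentioned in the abstract can be sketched as follows (variant widths and function names are hypothetical; a generic illustration, not the thesis's actual method): given the CSS display width and the device pixel ratio, serve the pre-scaled image variant closest to the required physical pixel size, so the browser never has to rescale it.

```python
# Pick the pre-scaled image variant whose width best matches the
# physical pixels actually needed, avoiding browser-side rescaling.
def pick_variant(css_width, device_pixel_ratio, variant_widths):
    needed = css_width * device_pixel_ratio   # physical pixels required
    # Prefer the smallest variant at least as wide as needed;
    # fall back to the largest available if none is big enough.
    candidates = [w for w in variant_widths if w >= needed]
    return min(candidates) if candidates else max(variant_widths)

variants = [320, 640, 960, 1280]          # hypothetical pre-scaled widths
print(pick_variant(320, 1.0, variants))   # 320: 1x screen, exact match
print(pick_variant(320, 2.0, variants))   # 640: 2x "retina" screen
print(pick_variant(800, 2.0, variants))   # 1280: nothing >= 1600, use largest
```

This mirrors what `srcset`/`sizes` hints let a browser do natively: the scaling work moves from page load time to an offline preprocessing step on the server.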
APA, Harvard, Vancouver, ISO, and other styles
48

Che-WeiChang and 張哲惟. "Enhancing rendering performance for Next-Generation Sequencing data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/6wq4df.

Full text
Abstract:
Master's
National Cheng Kung University
Department of Electrical Engineering
104
Next-generation sequencing (NGS) technologies have brought an unprecedented scale of genomic data to biologists. Unlike array-based profiling technologies, NGS can provide read count variation at the base level of a transcript rather than a single expression value. Such base-level read coverage has been used in many research fields. Calculating base-level read coverage requires aligning numerous reads to a reference genome, but the result is difficult for humans to read and analyze. Many viewers, such as Savant, Tablet and the Integrative Genomics Viewer, have been developed to solve this problem by converting read alignments into a friendly graphical profile. However, to the best of our knowledge, none of them can visualize genome-wide base-level read coverage in a timely fashion in an interactive environment. This study proposes an efficient visualization pipeline for NGS data and implements a lightweight read coverage viewer based on the proposed pipeline. The proposed viewer provides a friendly user interface and can respond rapidly to user input.
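The base-level read coverage computation the abstract refers to can be sketched with the standard difference-array technique (the read intervals below are hypothetical; this is a generic illustration, not the thesis's pipeline): each aligned read increments a counter at its start position and decrements one past its end, and a single prefix-sum pass yields per-base coverage.

```python
from itertools import accumulate

# Compute per-base read coverage over a reference of given length
# from aligned reads given as half-open [start, end) intervals.
def base_coverage(reads, ref_len):
    diff = [0] * (ref_len + 1)
    for start, end in reads:
        diff[start] += 1   # a read begins covering here
        diff[end] -= 1     # ...and stops covering here
    return list(accumulate(diff))[:ref_len]   # prefix sum = coverage

reads = [(0, 4), (2, 6), (3, 5)]      # three hypothetical aligned reads
print(base_coverage(reads, 8))        # [1, 1, 2, 3, 2, 1, 0, 0]
```

The pass is O(reads + genome length) regardless of coverage depth, which is what makes genome-wide base-level profiles tractable to compute on the fly.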
APA, Harvard, Vancouver, ISO, and other styles
49

(8803037), Xiao Lei. "Real-time Rendering with Heterogeneous GPUs." Thesis, 2020.

Find full text
Abstract:
Over the years, the performance demand of graphics applications has steadily increased. While upgrading the hardware is one direct solution, the emergence of new low-level, low-overhead graphics APIs like Vulkan has also exposed the possibility of improving rendering performance at the software-implementation level.

Most of the recent years’ middle- to high-end personal computers are equipped with both integrated and discrete GPUs. However, with previous graphics APIs, it is hard to put these two heterogeneous GPUs to work concurrently in the same application without tailored driver support.

This thesis provides an exploration into the utilization of such heterogeneous GPUs in real-time rendering with the help of the Vulkan API. It first presents the design and implementation details of the proposed heterogeneous-GPU working model, and then reports tests of two workload-offloading strategies: offloading screen-space output work to the integrated GPU, and offloading asynchronous computation work to the integrated GPU.

While this study failed to obtain a performance improvement by offloading screen-space output work, it successfully validates that offloading asynchronous computation work from the discrete GPU to the integrated GPU can improve overall system performance. This study proves that it is possible to use the integrated and discrete GPUs concurrently in the same application with the help of Vulkan, and that offloading asynchronous computation work from the discrete GPU to the integrated GPU can provide up to a 3-4% performance improvement with combinations such as UHD Graphics 630 + RTX 2070 Max-Q and HD Graphics 630 + GTX 1050.
APA, Harvard, Vancouver, ISO, and other styles
50

Van, Rooyen Robert Martinez. "Human-informed robotic percussion renderings: acquisition, analysis, and rendering of percussion performances using stochastic models and robotics." Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/10433.

Full text
Abstract:
A percussion performance by a skilled musician will often extend beyond a written score in terms of expressiveness. This assertion is clearly evident when comparing a human performance with one that has been rendered by some form of automaton that expressly follows a transcription. Although music notation enforces a significant set of constraints, it is the responsibility of the performer to interpret the piece and “bring it to life” in the context of the composition, style, and perhaps with a historical perspective. In this sense, the sheet music serves as a general guideline upon which to build a credible performance that can carry with it a myriad of subtle nuances. Variations in such attributes as timing, dynamics, and timbre all contribute to the quality of the performance that will make it unique within a population of musicians. The ultimate goal of this research is to gain a greater understanding of these subtle nuances, while simultaneously developing a set of stochastic motion models that can similarly approximate minute variations in multiple dimensions on a purpose-built robot. Live or recorded motion data, and algorithmic models will drive an articulated robust multi-axis mechatronic system that can render a unique and audibly pleasing performance that is comparable to its human counterpart using the same percussion instruments. By utilizing a non-invasive and flexible design, the robot can use any type of drum along with different types of striking implements to achieve an acoustic richness that would be hard if not impossible to capture by sampling or sound synthesis. The flow of this thesis will follow the course of this research by introducing high-level topics and providing an overview of related work. 
Next, a systematic method for gesture acquisition of a set of well-defined percussion scores will be introduced, followed by an analysis that will be used to derive a set of requirements for motion control and its associated electromechanical subsystems. A detailed multidisciplinary engineering effort will be described that culminates in a robotic platform design within which the stochastic motion models can be utilized. An analysis will be performed to evaluate the characteristics of the robotic renderings when compared to human reference performances. Finally, this thesis will conclude by highlighting a set of contributions as well as topics that can be pursued in the future to advance percussion robotics.
Graduate
2019-12-10
APA, Harvard, Vancouver, ISO, and other styles