To view the other types of publications on this topic, follow this link: Real-time ray tracing.

Dissertations on the topic "Real-time ray tracing"


Explore the top 22 dissertations for research on the topic "Real-time ray tracing".

Next to each work in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant fields are present in the metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1. Huss, Niklas. „Real Time Ray Tracing“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-9207.

Abstract:
Ray tracing has long been used to create photorealistic images, but because of the complex per-pixel calculations and slow hardware, rendering a single frame has traditionally taken hours or even days, which is a drawback when a change to a scene cannot be seen instantly. When a ray-traced frame takes less than a second to render we speak of "real-time ray tracing" or "interactive ray tracing". Many solutions have been developed, some of which distribute the computation across several computers connected by a very fast network (100 Mbit/s or higher). This approach has drawbacks: most people do not own more than one computer, and if they do, the machines are usually not networked together. Since today's hardware is fast enough to render a fairly complex image within minutes, it should be possible to reach real-time ray tracing by combining many of the methods that have been developed to reduce render time. This work examines what has to be sacrificed in image quality and in the complexity of static scenes in order to achieve real-time frame rates with ray tracing on a single computer. The methods covered include frame optimizations, secondary-ray optimization, hierarchies, culling, shadow caching, and subsampling.
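
One of the optimizations named above, shadow caching, is simple to illustrate: each light remembers the primitive that blocked its previous shadow ray and tests it first, falling back to a full scene query only on a miss. The C++ sketch below (spheres as the only primitive type, all names hypothetical) shows the general idea; it is not code from the thesis.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; };

// Does the ray from 'orig' along unit direction 'dir', up to tMax, hit the sphere?
static bool hitsSphere(const Sphere& s, Vec3 orig, Vec3 dir, float tMax) {
    Vec3 oc = sub(orig, s.center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    float t = -b - std::sqrt(disc);
    return t > 1e-4f && t < tMax;
}

// Per-light shadow cache: neighbouring shadow rays tend to hit the same occluder,
// so test the cached one before scanning (or traversing) the whole scene.
struct ShadowCache {
    int lastOccluder = -1;

    bool occluded(const std::vector<Sphere>& scene, Vec3 orig, Vec3 dir, float tMax) {
        if (lastOccluder >= 0 && hitsSphere(scene[lastOccluder], orig, dir, tMax))
            return true;                               // cache hit: no full traversal needed
        for (int i = 0; i < (int)scene.size(); ++i) {  // cache miss: full (linear) search
            if (hitsSphere(scene[i], orig, dir, tMax)) { lastOccluder = i; return true; }
        }
        lastOccluder = -1;
        return false;
    }
};
```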

2. Perales, Remigio. „Parallel ray tracing for real time animation“. Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36961.

3. Enfeldt, Viktor. „Real-Time Ray Tracing With Polarization Parameters“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19667.

Abstract:
Background. The real-time renderers used in video games and similar graphics applications do not model the polarization aspect of light. Polarization parameters have previously been incorporated in some offline ray-traced renderers to simulate polarizing filters and various optical effects. As ray tracing is becoming more and more prevalent in real-time renderers, these polarization techniques could potentially be used to simulate polarization and its optical effects in real-time applications as well. Objectives. This thesis aims to determine if an existing polarization technique from offline renderers is, from a performance standpoint, viable to use in real-time ray-traced applications to simulate polarizing filters, or if further optimizations and simplifications would be needed. Methods. Three ray-traced renderers were implemented using the DirectX RayTracing API: one polarization-less Baseline version; one Polarization version using an existing polarization technique; and one optimized Hybrid version, which is a combination of the other two. Their performance was measured and compared in terms of frametimes and VRAM usage in three different scenes and with five different ray counts. Results. The Polarization renderer is ca. 30% slower than the Baseline in the two more complex scenes, and the Hybrid version is around 5–15% slower than the Baseline in all tested scenes. The VRAM usage of the Polarization version was higher than the Baseline one in the tests with higher ray counts, but only by negligible amounts. Conclusions.  The Hybrid version has the potential to be used in real-time applications where high frame rates are important, but not paramount (such as the commonly featured photo modes in video games). The performance impact of the Polarization renderer's implementation is greater, but it could potentially be used as well. Due to limitations in the measurement process and the scale of the test application, no conclusions could be made about the implementations' impact on VRAM usage.
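
The abstract does not spell out the polarization technique, but a common representation in offline renderers that model polarization is the Stokes vector / Mueller matrix formalism: each ray carries a 4-component Stokes vector instead of a scalar intensity, and every optical interaction multiplies it by a 4x4 Mueller matrix. The hypothetical C++ sketch below shows that bookkeeping for an ideal linear polarizing filter; it is a generic illustration under that assumption, not the renderer described in the thesis.

```cpp
#include <array>
#include <cmath>

using Stokes  = std::array<float, 4>;                 // (I, Q, U, V): intensity plus polarization state
using Mueller = std::array<std::array<float, 4>, 4>;  // 4x4 interaction matrix

// Mueller matrix of an ideal linear polarizer whose transmission axis is at angle 'theta'
// (measured in the reference frame attached to the ray).
Mueller linearPolarizer(float theta) {
    float c = std::cos(2.0f * theta), s = std::sin(2.0f * theta);
    Mueller m = {};
    m[0] = {0.5f,     0.5f * c,   0.5f * s,   0.0f};
    m[1] = {0.5f * c, 0.5f * c*c, 0.5f * c*s, 0.0f};
    m[2] = {0.5f * s, 0.5f * c*s, 0.5f * s*s, 0.0f};
    m[3] = {0.0f,     0.0f,       0.0f,       0.0f};
    return m;
}

// One interaction: the per-ray Stokes vector is transformed by the Mueller matrix
// of whatever the ray hit (filter, reflection, scattering event).
Stokes apply(const Mueller& m, const Stokes& s) {
    Stokes out{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            out[i] += m[i][j] * s[j];
    return out;
}

// Example: unpolarized light of intensity 1 through a horizontal polarizer keeps half its intensity.
// Stokes in = {1, 0, 0, 0};  Stokes out = apply(linearPolarizer(0.0f), in);  // out[0] == 0.5
```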

4. Andersson, Filip Lars Roland. „Real-Time Ray Tracing on the Cell Processor“. Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-95303.

Abstract:
The first ray casting algorithm was introduced as early as 1966 and was followed by the first ray tracing algorithm in 1979. Since then, many revisions of both algorithms have been presented alongside the rapid development of computer processors. For a very long time, both ray casting and ray tracing were associated with rendering single images; one image could take several hours to compute, and for very complex scenes it still can. Only during the last few years have real-time algorithms been attempted. This thesis focuses on how a real-time ray caster can be mapped onto the Cell Broadband Engine Architecture. It covers the development of a ray caster on a single-core processor and then walks through the steps needed to rewrite the application to exploit the full potential of the Cell Broadband Engine, including identifying the compute-intensive parts of the application and parallelizing them across all available processing elements of the Cell architecture.

5. Wrigley, Adrian Martin Thomas. „Real-time ray tracing on a novel HDTV framestore“. Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318420.

6. Liljeqvist, Erik. „Evaluating a CPU/GPU Implementation for Real-Time Ray Tracing“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35768.

7. Norgren, David. „Implementing and Evaluating CPU/GPU Real-Time Ray Tracing Solutions“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-32076.

Abstract:
Ray tracing is a popular algorithm used to simulate the behavior of light and is commonly used to render images with high levels of visual realism. Modern multicore CPUs and many-core GPUs can take advantage of the parallel nature of ray tracing to accelerate the rendering process and produce new images in real-time. For non-specialized hardware however, such implementations are often limited to low screen resolutions, simple scene geometry and basic graphical effects. In this work, a C++ framework was created to investigate how the ray tracing algorithm can be implemented and accelerated on the CPU and GPU, respectively. The framework is capable of utilizing two third-party ray tracing libraries, Intel’s Embree and NVIDIA’s OptiX, to ray trace various 3D scenes. The framework also supports several effects for added realism, a user controlled camera and triangle meshes with different materials and textures. In addition, a hybrid ray tracing solution is explored, running both libraries simultaneously to render subsections of the screen. Benchmarks performed on a high-end CPU and GPU are finally presented for various scenes and effects. Throughout these results, OptiX on a Titan X performed better by a factor of 2-4 compared to Embree running on an 8-core hyperthreaded CPU within the same price range. Due to this imbalance of the CPU and GPU along with possible interferences between the libraries, the hybrid solution did not give a significant speedup, but created possibilities for future research.

8. Poulsen, Henrik. „Potential of GPU Based Hybrid Ray Tracing For Real-Time Games“. Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3488.

Abstract:
Graphics hardware is developing at a blazing pace, with new models out-speccing previous generations by leaps and bounds before the potential of the previous generation's computing power has even been digested. The games industry has always been quick to adopt this new power and the features that emerge as graphics card vendors learn what their customers need. The current generation of games uses extraordinary visual effects to heighten immersion, all thanks to the constant progress of graphics hardware; such effects would have been impossible just a couple of years ago. Ray tracing has been used for years in the movie industry to create stunning special effects and entire films made completely in 3D. This technique for producing realistic imagery has been reserved for non-interactive entertainment, because rendering an image this way is extremely expensive computationally: a single ray-traced image may require several hundred million calculations, which so far has not been shown to work in real-time situations such as games. However, with the continuous increase in the processing power of graphics processing units (GPUs), the limit of what can and cannot be done in real time keeps shifting further into the realm of possibility. This thesis therefore focuses on finding out how close we are to bringing ray tracing into real-time games. Two tests were performed to determine the potential of a current (2009) high-end computer system for a raster/ray-tracing hybrid implementation. The first test measures how well a modern GPU renders a very simple scene with Phong shading and ray-traced shadows without any optimizations. The second test uses the same scenario with a basic optimization, to illustrate the impact optimizations have on ray tracers. These tests were then compared with Intel's results from ray tracing Enemy Territory: Quake Wars.

9. Urra, Rodrigo A. „Scalable ray tracing with multiple GPGPUs“. Online version of thesis, 2009. http://hdl.handle.net/1850/8705.

10. Säll, Martin, and Fredrik Cronqvist. „Real-time generation of kd-trees for ray tracing using DirectX 11“. Thesis, Blekinge Tekniska Högskola, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15321.

Abstract:
Context. Ray tracing has always been a simple but effective way to create a photorealistic scene, though at a rapidly growing cost as the scene expands. Recent improvements in GPU and CPU hardware have made ray tracing faster, making more complex scenes possible in the same processing time. Despite these hardware improvements, ray tracing is still rarely run at interactive speeds. Objectives. The aim of this experiment was to implement a new kd-tree generation algorithm using DirectX 11 compute shaders. Methods. The implementation was tested on two platforms and in five scenarios where the generation time for the kd-tree was measured in milliseconds. The results were compared to a sequential implementation running on the CPU. Results. In the end, the kd-tree generation algorithm did not run within our definition of real time. Comparing the generation times of the implementations shows a speedup for the GPU implementation over our CPU implementation, as well as linear scaling of the generation time as the number of triangles in the scene increases. Conclusions. A noticeable limitation encountered during the experiment was that the handling of dynamic structures and the sorting of arrays in compute shaders are limited, which forced us to use less memory-efficient solutions.
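
For readers unfamiliar with the data structure being built: a kd-tree over triangles is constructed by recursively splitting the primitive set with axis-aligned planes. The sketch below is a minimal sequential CPU build using a median split on the longest axis of the centroid bounds; it only illustrates what the thesis parallelizes in compute shaders, and the names and the deliberately simple split rule are not taken from the thesis.

```cpp
#include <algorithm>
#include <memory>
#include <vector>

struct Vec3 { float v[3]; };
struct Triangle { Vec3 centroid; /* vertices omitted for brevity */ };

struct KdNode {
    int axis = -1;                       // -1 marks a leaf
    float split = 0.0f;                  // splitting plane position on 'axis'
    std::unique_ptr<KdNode> left, right; // children (interior nodes only)
    std::vector<int> tris;               // triangle indices (leaves only)
};

// Recursive median-split build over triangle indices.
std::unique_ptr<KdNode> build(const std::vector<Triangle>& tris, std::vector<int> idx, int depth) {
    auto node = std::make_unique<KdNode>();
    if (idx.size() <= 4 || depth > 20) {      // small leaf or depth limit: stop splitting
        node->tris = std::move(idx);
        return node;
    }
    // Pick the axis along which the centroids are spread the most.
    Vec3 lo = tris[idx[0]].centroid, hi = lo;
    for (int i : idx)
        for (int a = 0; a < 3; ++a) {
            lo.v[a] = std::min(lo.v[a], tris[i].centroid.v[a]);
            hi.v[a] = std::max(hi.v[a], tris[i].centroid.v[a]);
        }
    int axis = 0;
    for (int a = 1; a < 3; ++a)
        if (hi.v[a] - lo.v[a] > hi.v[axis] - lo.v[axis]) axis = a;

    // Median split: half the triangles go to each child.
    auto mid = idx.begin() + idx.size() / 2;
    std::nth_element(idx.begin(), mid, idx.end(), [&](int a, int b) {
        return tris[a].centroid.v[axis] < tris[b].centroid.v[axis];
    });
    node->axis  = axis;
    node->split = tris[*mid].centroid.v[axis];
    node->left  = build(tris, std::vector<int>(idx.begin(), mid), depth + 1);
    node->right = build(tris, std::vector<int>(mid, idx.end()), depth + 1);
    return node;
}
```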

11. Frid Kastrati, Mattias. „Hybrid Ray-Traced Reflections in Real-Time : in OpenGL 4.3“. Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10427.

Abstract:
Context. Reaching photo-realistic results when rendering 3D graphics in real time is a hard computational task. Ray tracing gives results close to this but is too expensive to run at real-time frame rates. On the other hand, rasterized methods such as deferred rendering are able to meet the tight time constraints with the support of modern hardware. Objectives. The basic objective is to merge deferred rendering and ray tracing into one rasterized pipeline for dynamic scenes. The thesis explains the proposed method and compares it to the methods it merges, investigating image quality, execution time and VRAM usage. Methods. The proposed method uses deferred rendering to render the result of the primary rays. Some pixels are marked, based on material properties, for further rendering with ray tracing. Only reflections are presented in the thesis, but it has been shown that other global illumination effects can be implemented in the ray-tracing framework used. Results and Conclusions. Experiments show the hybrid method to be between 2.49 and 4.19 times faster than pure ray tracing in the proposed pipeline. For smaller scenes it can run at frame rates close to real time, but for larger scenes such as the Crytek Sponza scene the real-time feeling is lost; interactivity, however, is never lost. It is also shown that a simple adjustment to the original framework can save almost two thirds of the memory spent on A-buffers. Image comparisons show that the technique can compete with offline ray tracers in terms of image quality.
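
The key idea, marking only some pixels for ray tracing based on G-buffer material properties, can be sketched in a few lines. The C++ fragment below (hypothetical structures and threshold, not the thesis's code) keeps the deferred result for every pixel and queues a reflection ray only where the stored material is reflective enough.

```cpp
#include <cstdint>
#include <vector>

// A few per-pixel values that a deferred G-buffer/lighting pass would have written.
struct GBufferTexel {
    float reflectivity;   // 0 = fully diffuse, 1 = mirror
    float shadedColor[3]; // result of the deferred lighting pass (primary visibility via rasterization)
};

struct ReflectionRayRequest { uint32_t pixelIndex; };

// After deferred shading, collect the pixels whose material warrants a ray-traced reflection.
// Everything else keeps the pure rasterized result, which is what keeps the hybrid cheap.
std::vector<ReflectionRayRequest> markReflectivePixels(const std::vector<GBufferTexel>& gbuffer,
                                                       float threshold /* e.g. 0.2f */) {
    std::vector<ReflectionRayRequest> requests;
    for (uint32_t i = 0; i < gbuffer.size(); ++i)
        if (gbuffer[i].reflectivity > threshold)
            requests.push_back({i});
    return requests;  // only these pixels get secondary (reflection) rays traced for them
}
```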

12. Souza, Elleis C. „An Analysis of Real-Time Ray Tracing Techniques Using the Vulkan® Explicit API“. DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2320.

Abstract:
In computer graphics applications, the choice and implementation of a rendering technique is crucial when targeting real-time performance. Traditionally, rasterization-based approaches have dominated the real-time sector. Other algorithms were simply too slow to compete on consumer graphics hardware. With the addition of hardware support for ray-intersection calculations on modern GPUs, hybrid ray tracing/rasterization and purely ray tracing approaches have become possible in real-time as well. Industry real-time graphics applications, namely games, have been exploring these different rendering techniques with great levels of success. The addition of ray tracing into the graphics developer’s toolkit has without a doubt increased what level of graphical fidelity is achievable in real-time. In this thesis, three rendering techniques are implemented in a custom rendering engine built on the Vulkan® Explicit API. Each technique represents a different family of modern real-time rendering algorithms. A largely rasterization-based method, a hybrid ray tracing/rasterization method, and a method solely using ray tracing. Both the hybrid and ray tracing exclusive approach rely on the ReSTIR algorithm for lighting calculations. Analysis of the performance and render quality of these approaches reveals the trade-offs incurred by each approach, alongside the performance viability of each in a real-time setting.

13. Dahlin, Alexander. „Improving Ray Tracing Performance with Variable Rate Shading“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21945.

Abstract:
Background. Hardware-accelerated ray tracing has enabled ray-traced reflections in real-time applications such as games. However, the number of rays traced each frame must be kept low to achieve the expected frame rates. Techniques such as rendering the reflections at quarter resolution are therefore used to limit the number of rays traced each frame. Two newer hardware features, inline ray tracing and hardware variable rate shading (VRS), could be combined to limit the rays even further. Objectives. The first goal is to use hardware VRS to limit the number of rays further than rendering the reflections at quarter resolution does, while maintaining the visual quality of the final rendered image. The second goal is to determine whether inline ray tracing provides better performance than ray generation shaders. Methods. Experiments are performed on a ray-traced reflections pipeline using different techniques to generate rays: inline ray tracing, inline ray tracing combined with VRS, and ray generation shaders. These are compared and evaluated using performance tests and the image evaluator FLIP. Results. The results show that limiting the number of rays with hardware VRS yields a performance increase, while the difference in visual quality between inline ray tracing with VRS and previous techniques remains comparable. The performance tests show that inline ray tracing performs worse than ray generation shaders as scene complexity increases. Conclusions. Hardware VRS can be used to limit the number of rays and achieve better performance while visual quality remains comparable to previous techniques. Inline ray tracing does not perform better than ray generation shaders for workloads similar to ray-traced reflections.

14. Waldner, Fabian. „Real-time Ray Traced Ambient Occlusion and Animation : Image quality and performance of hardware-accelerated ray traced ambient occlusion“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298085.

Abstract:
Recently, new hardware capabilities in GPUs have opened the possibility of ray tracing in real time at interactive frame rates. These capabilities can be used for a range of ray tracing techniques; the focus of this thesis is ray traced ambient occlusion (RTAO). The thesis evaluates real-time RTAO by comparing it with ground-truth ambient occlusion (GTAO), a state-of-the-art screen space ambient occlusion (SSAO) method. A contribution of this thesis is that the evaluation is made in scenarios that include animated objects, both rigid-body animations and skinning animations. This approach has some advantages: it can emphasise visual artefacts that arise because objects move and animate, and it makes the performance tests better approximate real-world applications such as video games and interactive visualisations. This is particularly true for RTAO, which gets more expensive as the number of objects in a scene increases and has additional costs for managing the ray tracing acceleration structures. The ambient occlusion methods are evaluated in terms of image quality and performance. Image quality is assessed using the structural similarity index (SSIM) and through visual inspection; performance is assessed by measuring computation time in milliseconds. The thesis shows that the image quality of RTAO is a substantial improvement over GTAO, coming close to offline rendering quality. The primary visual issue with RTAO is visible noise, especially noticeable around the contours of moving objects. Nevertheless, GTAO is very competitive due to its performance: the computation time in all GTAO tests was below one millisecond per frame. At full 1080p resolution, GTAO was computed in 0.3883 ms on an RTX 3070 GPU, whereas RTAO at 1080p with two samples per pixel took 2.253 ms. The cost of updating and rebuilding ray tracing acceleration structures was also noteworthy. Overall, the results indicate that hardware-accelerated ray tracing can deliver significant improvements in image quality, but adopting the technique is not trivial due to performance concerns.
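
For reference, GTAO and RTAO approximate the same quantity, the ambient occlusion integral over the hemisphere around the surface normal; RTAO estimates it by tracing a few visibility rays per pixel. A standard formulation (background material, not copied from the thesis) is:

```latex
% Ambient occlusion at point p with normal n: cosine-weighted hemispherical visibility.
AO(p) = \frac{1}{\pi} \int_{\Omega} V(p, \omega)\, (n \cdot \omega)\, \mathrm{d}\omega

% RTAO Monte Carlo estimate with N cosine-distributed rays (V = 1 if the ray is unoccluded
% within some maximum distance, 0 otherwise); the tests above use N = 2 samples per pixel.
AO(p) \approx \frac{1}{N} \sum_{k=1}^{N} V(p, \omega_k)
```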

15. Hansson Söderlund, Herman. „Hardware-Accelerated Ray Tracing of Implicit Surfaces : A study of real-time editing and rendering of implicit surfaces“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21764.

Abstract:
Background. Rasterization of triangle geometry has been the dominating rendering technique in the real-time rendering industry for many years. However, triangles are not always easy to work with for content creators. With the introduction of hardware-accelerated ray tracing, rasterization-based lighting techniques have been steadily replaced by ray tracing techniques. This shift may signify the opportunity of exploring other, more easily manipulated, geometry-type alternatives compared to triangle geometry. One such geometry type is implicit surfaces. Objectives. This thesis investigates the rendering speed, editing speed, and image quality of different implicit surface rendering techniques using a state-of-the-art, hardware-accelerated, path tracing implementation. Furthermore, it investigates how implicit surfaces may be edited in real time and how editing affects rendering. Methods. A baseline direct sphere tracing algorithm is implemented to render implicit surfaces. Additionally, dense and narrow band discretization algorithms that sphere trace a discretization of the implicit surface are implemented. For each technique, two variations that provide potential benefits in rendering speed are also tested. Additionally, a real-time implicit surface editor that can utilize all the mentioned rendering techniques is created. Rendering speed, editing speed, and image quality metrics are captured for all techniques using different scenes created with the editor and an existing hardware-accelerated path tracing solution. Image quality differences are measured using mean squared error and the image difference evaluator FLIP. Results. Direct sphere tracing achieves the best image quality results but has the slowest rendering speed. Dense discretization achieves the best rendering speed in most tests and achieves better image quality results compared to narrow band discretization. Narrow band discretization achieves significantly better editing speed than both direct sphere tracing and dense discretization. All variations of each algorithm achieve better or equal rendering and editing speed compared to their standard implementation. All algorithms achieve real-time rendering and editing performance. However, only discretized methods display real-time rendering performance for all scenes, and only narrow band discretization displays real-time editing performance for a larger number of primitives. Conclusions. Implicit surfaces can be rendered and edited in real time while using a state-of-the-art, hardware-accelerated, path tracing algorithm. Direct sphere tracing degrades in performance when the implicit surface has an increased number of primitives, whereas discretization techniques perform independently of this. Furthermore, narrow band discretization is fast enough so that editing can be performed in real time even for implicit surfaces with a large number of primitives, which is not the case for direct sphere tracing or dense discretization.
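
Direct sphere tracing, the baseline technique above, marches each ray by the distance returned by the signed distance function (SDF) until it gets close enough to the surface or gives up. A minimal, self-contained C++ sketch of that loop (with a single unit-sphere SDF standing in for the edited implicit surface) looks like this; it illustrates the general algorithm rather than the thesis implementation.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float len(Vec3 a)          { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Signed distance to a sphere of radius 1 at the origin; in practice this would be the
// (possibly discretized) distance field of the edited implicit surface.
static float sdf(Vec3 p) { return len(p) - 1.0f; }

// Sphere tracing: advance by the SDF value, which by construction cannot overshoot the surface.
// Returns true and the hit distance in 't' if the ray hits within tMax.
bool sphereTrace(Vec3 origin, Vec3 dir /* unit length */, float tMax, float& t) {
    const float hitEps = 1e-3f;
    t = 0.0f;
    for (int i = 0; i < 256 && t < tMax; ++i) {
        float d = sdf(add(origin, mul(dir, t)));
        if (d < hitEps) return true;  // close enough to the zero level set: report a hit
        t += d;                       // safe step: no surface point can be closer than d
    }
    return false;                     // ran out of steps or distance: treat as a miss
}
```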

16. Persson, Gustav, and Jonathan Udd. „Ray Tracing on GPU : Performance comparison between the CPU and the Compute Shader with DirectX 11“. Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2514.

Abstract:
The game industry has always looked for rendering techniques that make games as good-looking and realistic as possible. The common approach is to use triangles built from vertices and to apply many different techniques to make the result look as good as possible. When triangles are used to draw objects there are always edges, and those edges often make the objects look less realistic than desired. To reduce these visible edges the number of triangles per object has to be increased, but more triangles demand more processing power from the graphics card. Another approach to rendering is ray tracing, which can produce extremely photorealistic images, but at the cost of unbearably low performance if used in a real-time application. The reason ray tracing is so slow is the massive number of calculations that need to be made. DirectX 11 introduced a few new shaders, one of which is the compute shader. The compute shader allows calculations on the graphics card that are not bound to the rendering pipeline and can use the hundreds of cores a graphics card has, which makes it well suited to a ray tracing algorithm. A single application is used to test the hypothesis, with a flag that selects whether it runs on the CPU or on the GPU; the same algorithm is used in both versions. Three tests were run on each processing unit to confirm the hypothesis, and three more tests were run on the GPU to see how its performance scales with the number of rendered objects. The tests consistently showed that the compute shader performs considerably better than the CPU when running our ray tracing algorithm.

17. Menšík, Jakub. „Zobrazování voxelových scén pomocí ray tracingu v reálném čase“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445579.

Abstract:
The aim of this work was to create a program to visualize voxel scenes in real time using ray tracing. It included the study of various methods of such rendering, with a focus on shadows. The solution was created using the Unity engine and the experimental Unity Jobs and Burst packages. The thesis presents multiple ray tracing passes and the SVGF technique, which is used to turn a noisy input into a full edge-preserving image. The final program is able to render hard shadows, soft shadows, and ambient occlusion at a speed of fifty frames per second.
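
Ray tracing a voxel scene typically relies on a 3D digital differential analyzer (DDA) that steps from cell to cell along the ray, in the style of Amanatides and Woo. The C++ sketch below shows that traversal for a dense grid; it is a generic illustration of the approach and does not reproduce the thesis's Unity/Burst implementation.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Dense voxel grid: 'solid[x + N*(y + N*z)]' marks occupied cells of an N x N x N volume
// with unit-sized voxels starting at the origin.
struct VoxelGrid {
    int N;
    std::vector<uint8_t> solid;
    bool at(int x, int y, int z) const { return solid[x + N * (y + N * z)] != 0; }
};

// 3D DDA traversal: visit the voxels pierced by the ray and return the first solid one.
// Simplifications: the origin lies inside the grid and no direction component is exactly zero.
bool traceVoxels(const VoxelGrid& g, float ox, float oy, float oz,
                 float dx, float dy, float dz, int hit[3]) {
    int   ix = (int)ox, iy = (int)oy, iz = (int)oz;          // current voxel
    int   sx = dx > 0 ? 1 : -1, sy = dy > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;
    float tDeltaX = std::fabs(1.0f / dx), tDeltaY = std::fabs(1.0f / dy), tDeltaZ = std::fabs(1.0f / dz);
    // Distance along the ray to the first grid-plane crossing on each axis.
    float tMaxX = (sx > 0 ? ix + 1 - ox : ox - ix) * tDeltaX;
    float tMaxY = (sy > 0 ? iy + 1 - oy : oy - iy) * tDeltaY;
    float tMaxZ = (sz > 0 ? iz + 1 - oz : oz - iz) * tDeltaZ;

    while (ix >= 0 && iy >= 0 && iz >= 0 && ix < g.N && iy < g.N && iz < g.N) {
        if (g.at(ix, iy, iz)) { hit[0] = ix; hit[1] = iy; hit[2] = iz; return true; }
        // Step into the neighbouring voxel whose boundary the ray crosses first.
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { ix += sx; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { iy += sy; tMaxY += tDeltaY; }
        else                                { iz += sz; tMaxZ += tDeltaZ; }
    }
    return false;  // left the grid without hitting a solid voxel
}
```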

18. Graglia, Florian. „Amélioration du photon mapping pour un scénario walkthrough dans un objectif de rendu physiquement réaliste en temps réel“. Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4072.

Abstract:
One of the goals when developing an industrial product is to obtain a valid and realistic digital prototype. This thesis aims to improve the quality of lighting simulations in the upstream stages of a production pipeline. Such processes usually require walkthrough rendering, with fixed geometry but an observer who moves continuously, so we focus on physically based rendering methods for complex scenes in a walkthrough scenario. During rendering, the user must be able to measure precisely the radiance of a given point or area, and to modify the power of the light sources in real time in order to test different lighting ambiances. Building on the original photon mapping method, our work shows which modifications to the algorithms improve both the image quality and the computation time of the rendering process.
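
The central operation photon mapping builds on, the radiance estimate, gathers the k photons nearest to the shading point x and treats them as a density estimate of incoming flux. In its standard form (Jensen's estimator, stated here as background rather than taken from the thesis):

```latex
% Reflected radiance at x in direction \omega from the k nearest photons, each carrying
% flux \Delta\Phi_p from direction \omega_p, gathered inside a disc of radius r:
L_r(x, \omega) \approx \frac{1}{\pi r^2} \sum_{p=1}^{k}
    f_r(x, \omega_p, \omega)\, \Delta\Phi_p
```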

19. Muvvala, Priyanka. „Feasibility of Troposphere Propagation Delay Modeling of GPS Signals using Three-Dimensional Weather Radar Reflectivity Returns“. Ohio University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1307041652.

20. Loyet, Raphaël. „Dynamic sound rendering of complex environments“. PhD thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995328.

Abstract:
Many studies have been carried out over the last twenty years in the field of auralisation, which consists of making the results of an acoustic simulation audible. These studies have mostly focused on propagation algorithms and on reproducing the acoustic field in complex environments, and much current work addresses real-time sound rendering. This thesis tackles dynamic sound rendering of complex environments along four axes: sound wave propagation, signal processing, spatial sound perception and software optimization. For propagation, a method for analysing the variety of algorithms found in the literature is proposed, and from this analysis two algorithms dedicated to real-time reproduction of the specular and diffuse fields are derived. For signal processing, reproduction is performed with an optimized binaural spatialization algorithm for the most significant specular paths and a GPU convolution algorithm for the diffuse field. The most significant paths are extracted using a perceptual model based on temporal and spatial masking of the specular contributions. Finally, the implementation of these algorithms on recent parallel architectures, taking into account new multi-core processors and graphics cards, is presented.

21. Zikmund, Tomáš. „Matematické metody pro zpracování obrazu v biologických pozorováních“. Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-234208.

Abstract:
The dissertation deals with image processing in digital holographic microscopy and X-ray computed tomography. The focus of the work lies in the design of data processing techniques that meet the needs of biological experiments. Transmitted-light holographic microscopy is used in particular for quantitative phase imaging of transparent microscopic objects such as living cells. The phase images are affected by phase aberrations that make their analysis particularly difficult. Here, we present a novel algorithm for dynamic processing of phase images of living cells in a time-lapse sequence. The algorithm compensates for the deformation of a phase image using weighted least-squares surface fitting, and it identifies and segments the individual cells in the phase image. This property of the algorithm is important for real-time quantitative phase imaging of cells and for instantaneous control of the course of an experiment. The efficiency of the proposed algorithm is demonstrated on images of rat fibrosarcoma cells acquired with an off-axis holographic microscope. High-resolution X-ray computed tomography is an increasingly used technique for studying the micro-structure of small rodent bones. In this part of the work, trabecular and cortical bone morphology is assessed in the distal half of the rat femur. We developed a new method for mapping the cortical position and dimensions from a central longitudinal axis with one-degree angular resolution, and used it to examine differences between experimental groups. Before the mapping, the bone position in the tomographic slices is aligned using the proposed standardization procedure. The activity of the remodelling process of the long bone is studied on the system of cortical canals.

22. Hunt, Warren Andrew 1983. „Data structures and algorithms for real-time ray tracing at the University of Texas at Austin“. 2008. http://hdl.handle.net/2152/18055.

Abstract:
Modern rendering systems require fast and efficient acceleration structures in order to compute visibility in real time. I present several novel data structures and algorithms for computing visibility with high performance. In particular, I present two algorithms for improving heuristic based acceleration structure build. These algorithms, when used in a demand driven way, have been shown to improve build performance by up to two orders of magnitude. Additionally, I introduce ray tracing in perspective transformed space. I demonstrate that ray tracing in this space can significantly improve visibility performance for near-common origin rays such as eye and shadow rays. I use these data structures and algorithms to support a key hypothesis of this dissertation: “There is no silver bullet for solving the visibility problem; many different acceleration structures will be required to achieve the highest performance.” Specialized acceleration structures provide significantly better performance than generic ones and building many specialized structures requires high performance build techniques. Additionally, I present an optimization-based taxonomy for classifying acceleration structures and algorithms in order to identify which optimizations provide the largest improvement in performance. This taxonomy also provides context for the algorithms I present. Finally, I present several novel cost metrics (and a correction to an existing cost metric) to improve visibility performance when using metric based acceleration structures.
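
The "heuristic based acceleration structure build" referred to above is conventionally driven by the surface area heuristic (SAH); the formula below is standard background rather than one of the thesis's new cost metrics, and is included only for context. The expected cost of splitting a node with surface area A into children L and R is:

```latex
% Traversal cost plus the intersection cost of each child, weighted by the probability
% (surface-area ratio) that a ray passing through the parent also enters that child.
C_{split} = C_{trav}
          + \frac{A_L}{A}\, N_L\, C_{isect}
          + \frac{A_R}{A}\, N_R\, C_{isect}
```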