Dissertations / Theses on the topic 'Rendering Algorithm'

Consult the top 50 dissertations / theses for your research on the topic 'Rendering Algorithm.'

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ewins, Jon Peter. "Algorithm design and 3D computer graphics rendering." Thesis, University of Sussex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310664.

Full text
Abstract:
3D computer graphics is becoming an almost ubiquitous part of the world in which we live, being present in art, entertainment, advertising, CAD, training and education, scientific visualisation and, with the growth of the internet, in e-commerce and communication. This thesis encompasses two areas of study: the design of algorithms for high quality, real-time 3D computer graphics rendering hardware, and the methodology and means for achieving this. When investigating new algorithms and their implementation in hardware, it is important to have a thorough understanding of their operation, both individually and in the context of an entire architecture. It is helpful to be able to model different algorithmic variations rapidly and experiment with them interchangeably. This thesis begins with a description of software based modelling techniques for the rapid investigation of algorithms for 3D computer graphics within the context of a C++ prototyping environment. Recent tremendous increases in the rendering performance of graphics hardware have been shadowed by corresponding advancements in the accuracy of the algorithms accelerated. Significantly, these improvements have led to a decline in tolerance towards rendering artefacts. Algorithms for the effective and efficient implementation of high quality texture filtering and edge antialiasing form the focus of the algorithm research described in this thesis. Alternative algorithms for real-time texture filtering are presented in terms of their computational cost and performance, culminating in the design of a low cost implementation for higher quality anisotropic texture filtering. Algorithms for edge antialiasing are reviewed, with the emphasis placed upon area sampling solutions. A modified A-buffer algorithm is presented that uses novel techniques to provide: efficient fragment storage; support for multiple intersecting transparent surfaces; and improved filtering quality through an extendable and weighted filter support from a single highly optimised lookup table.
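Since the thesis's own low-cost anisotropic filter design is not reproduced here, the sketch below only illustrates the general idea behind anisotropic texture filtering: averaging several trilinear probes along the major axis of the pixel footprint. The helper, parameter names and probe placement are assumptions.
```cpp
#include <algorithm>
#include <cmath>

struct Color { float r = 0, g = 0, b = 0; };
struct Vec2  { float x = 0, y = 0; };

// Placeholder for an ordinary trilinear (mipmapped) texture fetch.
Color sampleTrilinear(Vec2 /*uv*/, float /*lod*/) { return {0.5f, 0.5f, 0.5f}; }

// Hedged sketch: approximate anisotropic filtering by averaging several
// trilinear probes spread along the major axis of the pixel footprint.
Color sampleAnisotropic(Vec2 uv, Vec2 dUVdx, Vec2 dUVdy,
                        float texSize, int maxProbes = 8)
{
    float lenX = std::hypot(dUVdx.x * texSize, dUVdx.y * texSize);
    float lenY = std::hypot(dUVdy.x * texSize, dUVdy.y * texSize);
    Vec2  major    = (lenX >= lenY) ? dUVdx : dUVdy;          // axis of anisotropy
    float majorLen = std::max(std::max(lenX, lenY), 1e-6f);
    float minorLen = std::max(std::min(lenX, lenY), 1e-6f);

    int   probes = std::clamp((int)std::ceil(majorLen / minorLen), 1, maxProbes);
    float lod    = std::max(0.0f, std::log2(minorLen));       // LOD from the minor axis

    Color sum;
    for (int i = 0; i < probes; ++i) {
        // Probes spread symmetrically along the major axis of the footprint.
        float t = (probes > 1) ? i / float(probes - 1) - 0.5f : 0.0f;
        Color c = sampleTrilinear({uv.x + major.x * t, uv.y + major.y * t}, lod);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= probes; sum.g /= probes; sum.b /= probes;
    return sum;
}
```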
2

Sun, Weifeng. "WAVELETS IN REAL-TIME RENDERING." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4026.

Full text
Abstract:
Interactively simulating visual appearance of natural objects under natural illumination is a fundamental problem in computer graphics. 3D computer games, geometry modeling, training and simulation, electronic commerce, visualization, lighting design, digital libraries, geographical information systems, economic and medical image processing are typical candidate applications. Recent advances in graphics hardware have enabled real-time rasterization of complex scenes under artificial lighting environments. Meanwhile, pre-computation based soft shadow algorithms have proven effective under low-frequency lighting environments. Under the most practical yet popular all-frequency natural lighting environment, however, real-time rendering of dynamic scenes still remains a challenging problem. In this dissertation, we propose a systematic approach to render dynamic glossy objects under the general all-frequency lighting environment. In our framework, lighting integration is reduced to two rather basic mathematical operations: efficiently computing multi-function products and product integrals. The main contribution of our work is a novel mathematical representation and analysis of multi-function product and product integral in the wavelet domain. We show that the multi-function product integral in the primal domain is equivalent to a summation of products of basis coefficients and integral coefficients. In the dissertation, we give a novel Generalized Haar Integral Coefficient Theorem. We also present a set of efficient algorithms to compute multi-function products and product integrals. In the dissertation, we demonstrate practical applications of these algorithms in the interactive rendering of dynamic glossy objects under distant time-variant all-frequency environment lighting with arbitrary view conditions. At each vertex, the shading integral is formulated as the product integral of multiple operand functions. By approximating operand functions in the wavelet domain, we demonstrate rendering dynamic glossy scenes interactively, which is orders of magnitude faster than previous work. As an important enhancement to the popular Pre-computation Based Radiance Transfer (PRT) approach, we present a novel Just-in-time Radiance Transfer (JRT) technique, and demonstrate its application in real-time realistic rendering of dynamic all-frequency shadows under general lighting environments. Our work is a significant step towards real-time rendering of arbitrary scenes under general lighting environments. It is also of great importance to general numerical analysis and signal processing.
Ph.D.
Other
Engineering and Computer Science
Computer Science
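As a brief illustration of the identity the abstract above builds on, stated here in generic orthonormal-basis terms rather than in the thesis's own Generalized Haar Integral Coefficient notation: expanding each operand function in an orthonormal basis such as Haar wavelets reduces product integrals to sums over coefficients.
```latex
% Two-function case: with f = \sum_i f_i \Psi_i and g = \sum_j g_j \Psi_j
% in an orthonormal basis \{\Psi_i\},
\int f(x)\,g(x)\,dx = \sum_i \sum_j f_i\, g_j \int \Psi_i(x)\Psi_j(x)\,dx
                    = \sum_i f_i\, g_i .
% For three factors (e.g. lighting, visibility, BRDF) the coefficients couple
% through tripling coefficients C_{ijk}:
\int f(x)\,g(x)\,h(x)\,dx = \sum_i \sum_j \sum_k f_i\, g_j\, h_k\, C_{ijk},
\qquad C_{ijk} = \int \Psi_i(x)\Psi_j(x)\Psi_k(x)\,dx .
```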
3

Xiong, Rebecca Wen Fei. "A stratified rendering algorithm for virtual walkthroughs of large environments." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40233.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 73-74).
by Rebecca Wen Fei Xiong.
M.S.
4

Srivastava, Mayank. "Implementation and Evaluation of a Multiple-Points Haptic Rendering Algorithm." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1181049930.

Full text
5

Ramswamy, Lakshmy. "PARZSweep: a novel parallel algorithm for volume rendering of regular datasets." Master's thesis, Mississippi State: Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-04012003-140443.

Full text
6

Stewart, Nigel Timothy. "An Image-Space Algorithm for Hardware-Based Rendering of Constructive Solid Geometry." RMIT University. Aerospace, Mechanical and Manufacturing Engineering, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080721.144757.

Full text
Abstract:
A new approach to image-space hardware-based rendering of Constructive Solid Geometry (CSG) models is presented. The work is motivated by the evolving functionality and performance of computer graphics hardware. This work is also motivated by a specific industrial application --- interactive verification of five axis grinding machine tool programs. The goal is to minimise the amount of time required to render each frame in an animation or interactive application involving boolean combinations of three dimensional shapes. The Sequenced Convex Subtraction (SCS) algorithm utilises sequenced subtraction of convex objects for the purpose of interactive CSG rendering. Concave shapes must be decomposed into convex shapes for the purpose of rendering. The length of Permutation Embedding Sequences (PESs) used as subtraction sequences is shown to have a quadratic lower bound. In many situations shorter sequences can be used, in the best case linear. Approaches to subtraction sequence encoding are presented including the use of object-space overlap information. The implementation of the algorithm is experimentally shown to perform better on modern commodity graphics hardware than previously reported methods. This work also examines performance aspects of the SCS algorithm itself. Overall performance depends on hardware characteristics, the number and spatial arrangement of primitives, and the structure and boolean operators of the CSG tree.
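One simple way to obtain a subtraction sequence of the quadratic length mentioned above is to repeat the list of the n convex subtrahends n times; such a sequence contains every permutation of the objects as a subsequence. The sketch below shows only this naive construction, not the shorter encodings based on object-space overlap that the thesis develops.
```cpp
#include <vector>

// Build a naive Permutation Embedding Sequence (PES) over object ids 0..n-1:
// n repetitions of (0, 1, ..., n-1) contain every permutation of the n ids
// as a subsequence, giving the O(n^2) length discussed in the abstract.
std::vector<int> naivePES(int n)
{
    std::vector<int> seq;
    seq.reserve(static_cast<std::size_t>(n) * n);
    for (int repeat = 0; repeat < n; ++repeat)
        for (int id = 0; id < n; ++id)
            seq.push_back(id);
    return seq;
}

// The sequence would then drive one subtraction pass per entry, e.g.:
//   for (int id : naivePES(numConvexSubtrahends)) subtractConvex(id);
```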
7

Andersson, Michael. "Real-time rendering of large terrains using algorithms for continuous level of detail." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-638.

Full text
Abstract:

Three-dimensional computer graphics enjoys a wide range of applications, of which games and movies are only a few examples. By incorporating three-dimensional computer graphics into a simulator, the simulator is able to provide the operator with visual feedback during a simulation. Simulators come in many different flavors, where flight and radar simulators are two types in which three-dimensional rendering of large terrains constitutes a central component.

Ericsson Microwave Systems (EMW) in Skövde is searching for an algorithm that (a) can handle terrain data that is larger than physical memory and (b) has an adjustable error metric that can be used to reduce terrain detail level if an increase in load on other critical parts of the system is observed. The aim of this paper is to identify and evaluate existing algorithms for terrain rendering in order to find those that meet EMW's requirements. The objectives are to (i) perform a literature survey over existing algorithms, (ii) implement these algorithms and (iii) develop a test environment in which these algorithms can be evaluated from a performance perspective.

The literature survey revealed that the algorithm developed by Lindstrom and Pascucci (2001) is the only algorithm of those examined that succeeded in fulfilling the requirements without modifications or extra software. This algorithm uses memory-mapped files to be able to handle terrain data larger than physical memory and focuses on how terrain data should be laid out on disk in order to minimize the number of page faults. Testing of this algorithm on the specified test architecture shows that the error metric used could be adjusted to effectively control the terrain's level of detail, leading to a substantial increase in performance. The results also reveal the need for both view frustum culling as well as a level of detail algorithm to achieve fast display rates of large terrains. Further, the results also show the importance of how terrain data is laid out on disk, especially when physical memory is limited.
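The adjustable error metric discussed above is commonly realised by projecting a block's object-space geometric error to screen space and comparing it with a pixel tolerance; the sketch below shows that test in a generic form, with illustrative field and parameter names rather than those of the algorithms evaluated.
```cpp
#include <cmath>

struct TerrainBlock {
    float objectSpaceError;   // max vertical deviation introduced by simplifying
    float distanceToCamera;   // metres from the eye to the block
};

// Return true if the simplified block may be used: its error, projected to
// screen space, stays below the adjustable pixel tolerance. Raising the
// tolerance trades visual detail for frame rate, as described above.
bool lodAcceptable(const TerrainBlock& b,
                   float screenHeightPx, float verticalFovRadians,
                   float pixelTolerance)
{
    // Perspective projection factor: pixels per metre at the block's distance.
    float pixelsPerMetre =
        screenHeightPx / (2.0f * b.distanceToCamera * std::tan(verticalFovRadians * 0.5f));
    float screenSpaceError = b.objectSpaceError * pixelsPerMetre;
    return screenSpaceError <= pixelTolerance;
}
```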

8

Johnson, Jared. "Algorithms for Rendering Optimization." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5329.

Full text
Abstract:
This dissertation explores algorithms for rendering optimization realizable within a modern, complex rendering engine. The first part contains optimized rendering algorithms for ray tracing. Ray tracing algorithms typically provide properties of simplicity and robustness that are highly desirable in computer graphics. We offer several novel contributions to the problem of interactive ray tracing of complex lighting environments. We focus on the problem of maintaining interactivity as both geometric and lighting complexity grows without affecting the simplicity or robustness of ray tracing. First, we present a new algorithm called occlusion caching for accelerating the calculation of direct lighting from many light sources. We cache light visibility information sparsely across a scene. When rendering direct lighting for all pixels in a frame, we combine cached lighting information to determine whether or not shadow rays are needed. Since light visibility and scene location are highly correlated, our approach precludes the need for most shadow rays. Second, we present improvements to the irradiance caching algorithm. Here we demonstrate a new elliptical cache point spacing heuristic that reduces the number of cache points required by taking into account the direction of irradiance gradients. We also accelerate irradiance caching by efficiently and intuitively coupling it with occlusion caching. In the second part of this dissertation, we present optimizations to rendering algorithms for participating media. Specifically, we explore the implementation and use of photon beams as an efficient, intuitive artistic primitive. We detail our implementation of the photon beams algorithm in PhotoRealistic RenderMan (PRMan). We show how our implementation maintains the benefits of the industry standard Reyes rendering pipeline, with proper motion blur and depth of field. We detail an automatic photon beam generation algorithm, utilizing PRMan shadow maps. We accelerate the rendering of camera-facing photon beams by utilizing Gaussian quadrature for path integrals in place of ray marching. Our optimized implementation allows for incredible versatility and intuitiveness in artistic control of volumetric lighting effects. Finally, we demonstrate the usefulness of photon beams as artistic primitives by detailing their use in a feature-length animated film.
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
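The occlusion caching described in the abstract above can be pictured as a sparse cache of per-light visibility records that nearby shading points consult before deciding whether a shadow ray is needed; the following single-light sketch is an assumed simplification with illustrative names, not the dissertation's implementation.
```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct OcclusionRecord {
    Vec3 position;      // scene location where visibility was sampled
    bool lightVisible;  // result of the shadow ray traced at that location
};

// Sparse cache of light-visibility samples for one light source.
class OcclusionCache {
public:
    explicit OcclusionCache(float radius) : radius_(radius) {}

    void insert(const Vec3& p, bool visible) { records_.push_back({p, visible}); }

    // If all cached records near p agree, reuse their answer and skip the
    // shadow ray; otherwise report that a shadow ray is still needed.
    bool lookup(const Vec3& p, bool& visibleOut) const {
        bool found = false, agreed = true, value = false;
        for (const auto& r : records_) {
            float dx = r.position.x - p.x, dy = r.position.y - p.y, dz = r.position.z - p.z;
            if (dx * dx + dy * dy + dz * dz > radius_ * radius_) continue;
            if (!found) { found = true; value = r.lightVisible; }
            else if (r.lightVisible != value) { agreed = false; break; }
        }
        if (found && agreed) { visibleOut = value; return true; }
        return false;   // disagreement or no nearby record: trace a shadow ray
    }

private:
    float radius_;                           // validity radius of a cached sample
    std::vector<OcclusionRecord> records_;   // a spatial index would be used in practice
};
```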
9

Chen, Haixin. "Fast volume rendering and deformation algorithms." [S.l. : s.n.], 2001. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB9590689.

Full text
10

Nimscheck, Uwe Michael. "Rendering for free form deformations." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364275.

Full text
11

Tsingos, Nicolas. "MODELS AND ALGORITHMS FOR INTERACTIVE AUDIO RENDERING." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00629574.

Full text
Abstract:
Interactive virtual reality systems combine visual, audio and haptic representations in order to immersively simulate the exploration of a three-dimensional world seen from the point of view of an observer controlled in real time by the user. Most work in this field has historically focused on the visual aspects (for example, methods for interactive display of complex 3D models or for realistic and efficient lighting simulation), and relatively little work has been devoted to the simulation of virtual sound sources, also called auralization. Yet sound simulation is certainly a key factor in the production of synthetic environments, as auditory perception adds to visual perception to produce a more natural interaction. In particular, spatialized sound effects, whose direction of arrival is faithfully reproduced at the listener's ears, are especially important for locating objects, separating multiple simultaneous sound signals and providing cues about the spatial characteristics of the environment (size, materials, etc.). Most immersive virtual reality systems, from the most complex simulators to video games aimed at the general public, now implement sound synthesis and spatialization algorithms that improve navigation and increase realism and the user's sense of presence in the synthetic environment. Like image synthesis, of which it is the auditory equivalent, auralization, also called sound rendering, is a vast subject at the crossroads of multiple disciplines: computer science, acoustics and electroacoustics, signal processing, music, geometric computing, but also psychoacoustics and audio-visual perception. It covers three main problems: synthesis and interactive control of sounds, simulation of sound propagation effects in the environment and, finally, perception and spatial reproduction at the listener's ears. Historically, these three problems emerged from work in architectural acoustics, musical acoustics and psychoacoustics. However, a fundamental difference between sound rendering for virtual reality and acoustics lies in the multimodal interaction and in the efficiency of the algorithms that must be implemented for interactive applications. These important aspects make it a field in its own right, one that is becoming increasingly important both in the acoustics community and in the image synthesis and virtual reality community.
12

Jean, Yves Darly. "Accelerated volume rendering via spatial/spectral analysis." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/12891.

Full text
13

Liu, Yu. "Haptic rendering algorithms for drilling into volume data." Thesis, University of East Anglia, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522282.

Full text
14

Law, Asish. "Exploiting coherency in parallel algorithms for volume rendering." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487942182323427.

Full text
15

Lian, Lili. "Haptic rendering of three-dimensional heterogeneous features." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38758209.

Full text
16

Lian, Lili, and 廉莉莉. "Haptic rendering of three-dimensional heterogeneous features." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38758209.

Full text
17

Prete, Francesca. "Analisi di Algoritmi di Rendering basati sull'illuminazione globale." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18168/.

Full text
Abstract:
Computer graphics is the discipline concerned with creating and manipulating images using a computer. It is a very topical subject, since it is employed in many fields, from the most intuitive ones such as video games to the design of building floor plans and medical examinations. The thesis focuses on global illumination models which, unlike local models that consider the objects of the scene individually, compute all the inter-reflections between objects. The two global illumination models, ray tracing and radiosity, differ both in the kind of approach used and in the field of application. Ray tracing is view-dependent; this means that the computation of the inter-reflections is carried out only for the chosen viewing directions. Its construction involves shooting a ray for each pixel of the chosen view, which hits a point in the scene. From the hit point, secondary reflection and transmission rays are spawned, together with rays directed towards the light source, creating a tree structure that gathers all the lighting contributions of the primary point. Radiosity is view-independent; this means that the radiosity computation is carried out only once for the whole scene, independently of the view.
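The ray tree described above maps directly onto a recursive routine; the following Whitted-style sketch illustrates it with stubbed scene and shading helpers (all names are illustrative and the stubs stand in for a real scene).
```cpp
#include <optional>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ray  { Vec3 origin, direction; };
struct Hit  { Vec3 point, normal, baseColor; float reflectivity = 0; };

Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Hypothetical scene queries, stubbed so the sketch compiles on its own.
std::optional<Hit> intersectScene(const Ray&) { return std::nullopt; }  // empty-scene stub
bool lightVisibleFrom(const Vec3&)            { return true; }          // shadow-ray stub
Vec3 shadeDirect(const Hit& h)                { return h.baseColor; }   // local shading stub

// Mirror reflection of the incoming direction about the surface normal.
Ray reflectRay(const Ray& in, const Hit& h) {
    float d = dot(in.direction, h.normal);
    return {h.point, add(in.direction, scale(h.normal, -2.0f * d))};
}

// One primary ray per pixel enters here; each hit spawns a shadow ray and,
// while depth and reflectivity allow, a secondary reflection ray, forming
// the ray tree described in the abstract.
Vec3 trace(const Ray& ray, int depth)
{
    if (depth <= 0) return {};
    auto hit = intersectScene(ray);
    if (!hit) return {};                                  // background colour

    Vec3 colour = lightVisibleFrom(hit->point) ? shadeDirect(*hit) : Vec3{};
    if (hit->reflectivity > 0.0f)
        colour = add(colour, scale(trace(reflectRay(ray, *hit), depth - 1),
                                   hit->reflectivity));
    return colour;
}
```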
18

Rautenbach, Pierre. "An empirically derived system for high-speed shadow rendering." Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-06262009-141417/.

Full text
19

Brandstetter, William E. "Multi-resolution deformation in out-of-core terrain rendering." abstract and full text PDF (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447609.

Full text
20

Whittle, Joss. "Quality assessment and variance reduction in Monte Carlo rendering algorithms." Thesis, Swansea University, 2018. https://cronfa.swan.ac.uk/Record/cronfa40271.

Full text
Abstract:
Over the past few decades much work has been focused on the area of physically based rendering which attempts to produce images that are indistinguishable from natural images such as photographs. Physically based rendering algorithms simulate the complex interactions of light with physically based material, light source, and camera models by structuring the problem as complex high dimensional integrals [Kaj86] which do not have a closed form solution. Stochastic processes such as Monte Carlo methods can be structured to approximate the expectation of these integrals, producing algorithms which converge to the true rendering solution as the amount of computation is increased in the limit. When a finite amount of computation is used to approximate the rendering solution, images will contain undesirable distortions in the form of noise from under-sampling in image regions with complex light interactions. An important aspect of developing algorithms in this domain is to have a means of accurately comparing and contrasting the relative performance gains between different approaches. Image Quality Assessment (IQA) measures provide a way of condensing the high dimensionality of image data to a single scalar value which can be used as a representative measure of image quality and fidelity. These measures are largely developed in the context of image datasets containing natural images (photographs) coupled with their synthetically distorted versions, and quality assessment scores given by human observers under controlled viewing conditions. Inference using these measures therefore relies on whether the synthetic distortions used to develop the IQA measures are representative of the natural distortions that will be seen in images from the domain being assessed. When we consider images generated through stochastic rendering processes, the structure of visible distortions that are present in un-converged images is highly complex and spatially varying based on lighting and scene composition. In this domain the simple synthetic distortions used commonly to train and evaluate IQA measures are not representative of the complex natural distortions from the rendering process. This raises a question of how robust IQA measures are when applied to physically based rendered images. In this thesis we summarize the classical and recent works in the area of physically based rendering using stochastic approaches such as Monte Carlo methods. We develop a modern C++ framework wrapping MPI for managing and running code on large scale distributed computing environments. With this framework we use high performance computing to generate a dataset of Monte Carlo images. From this we provide a study on the effectiveness of modern and classical IQA measures and their robustness when evaluating images generated through stochastic rendering processes. Finally, we build on the strengths of these IQA measures and apply modern deep-learning methods to the No Reference IQA problem, where we wish to assess the quality of a rendered image without knowing its true value.
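As a reminder of the estimator these algorithms build on, a pixel value is an integral approximated by averaging random samples, with noise that shrinks as more samples are drawn; the sketch below shows that estimator in isolation, with a placeholder integrand standing in for a real path-tracing routine.
```cpp
#include <random>

// Assumed integrand: radiance carried by one randomly sampled light path
// through the pixel; in a real renderer this would be the path-tracing routine.
double sampleRadiance(double u1, double u2) { return 0.5 * (u1 + u2); }  // placeholder

// Monte Carlo estimate of a pixel value: the average of n random samples
// converges to the true integral as n grows, and its noise (variance)
// shrinks at the familiar O(1/sqrt(n)) rate.
double estimatePixel(int n, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += sampleRadiance(uni(rng), uni(rng));
    return sum / n;
}
```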
21

Söderholm, Anders, and Justus Sörman. "GPU-accelleration of image rendering and sorting algorithms with the OpenCL framework." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-127479.

Full text
Abstract:
Today's computer systems often contain several different processing units aside from the CPU. Among these, the GPU is a very common processing unit with immense compute power that is available in almost all computer systems. How do we make use of this processing power that lies within our machines? One answer is the OpenCL framework, which is designed for just this: to open up the possibilities of using all the different types of processing units in a computer system. This thesis will discuss the advantages and disadvantages of using the integrated GPU available in a basic workstation computer for computation of image processing and sorting algorithms. These tasks are computationally intensive, and the authors will analyze whether an integrated GPU is up to the task of accelerating the processing of these algorithms. The OpenCL framework makes it possible to run one implementation on different processing units; to provide perspective, we benchmark our implementations on both the GPU and the CPU and compare the results. A heterogeneous approach that combines the two above-mentioned processing units will also be tested and discussed. The OpenCL framework is analyzed from a development perspective, and the advantages and disadvantages it brings to the development process are presented.
22

Jozefov, David. "Implementace algoritmu Seamless Patches for GPU-Based Terrain Rendering." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-412848.

Full text
Abstract:
This master's thesis deals with terrain rendering using a modern algorithm for adaptive level of detail. It describes the two currently most used graphics application interfaces and the high-level libraries that use them, and summarizes the principles and features of several level-of-detail algorithms for terrain rendering. In more detail, it then describes the implementation of the Seamless Patches for GPU-Based Terrain Rendering algorithm.
23

Jankauskas, Darius. "Spartus kintamo detalumo kraštovaizdžių atvaizdavimas: geomipmap algoritmo optimizacija." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2009~D_20101125_190806-77447.

Full text
Abstract:
Often the bottleneck in real-time 3D rendering applications is the rendering of the terrain. The geomipmap algorithm is analyzed here, as it is one of the best terrain rendering algorithms known. It is much more versatile and flexible compared to geoclipmap and still often outperforms its main rival. The idea is to eliminate every other vertex in order to get a lower level of detail, thus having all the geometry in several immutable vertex and index buffers. An optimization is proposed that utilizes the lower bound of terrains generated from a heightmap. The idea is not to render the triangles that are exactly on the lower bound of a terrain and to cover all the eliminated triangles with a single plane. Even though the effectiveness of the optimization is highly dependent on the exact form of the terrain, the high performance gain observed in the analyzed scenes is valuable even if it would only occur in very rare cases. It is shown that the performance gain can actually be observed in quite a variety of scenes.
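The "eliminate every other vertex" step above corresponds to indexing one fixed vertex grid with a power-of-two stride per level of detail; a minimal sketch of building such an index buffer follows (it omits the patch-border stitching and the lower-bound optimization proposed in the thesis).
```cpp
#include <cstdint>
#include <vector>

// Build triangle indices for one geomipmap level over a (gridSize x gridSize)
// vertex grid stored row-major. Level 0 uses every vertex; each further level
// doubles the stride, i.e. drops every other vertex in both directions.
std::vector<std::uint32_t> buildLevelIndices(std::uint32_t gridSize, std::uint32_t level)
{
    const std::uint32_t step = 1u << level;
    std::vector<std::uint32_t> indices;

    for (std::uint32_t z = 0; z + step < gridSize; z += step) {
        for (std::uint32_t x = 0; x + step < gridSize; x += step) {
            std::uint32_t i00 = z * gridSize + x;
            std::uint32_t i10 = z * gridSize + (x + step);
            std::uint32_t i01 = (z + step) * gridSize + x;
            std::uint32_t i11 = (z + step) * gridSize + (x + step);
            // Two triangles per quad; the vertex buffer itself never changes.
            indices.insert(indices.end(), {i00, i01, i10});
            indices.insert(indices.end(), {i10, i01, i11});
        }
    }
    return indices;
}
```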
24

Golipour-Koujali, M. "General rendering and antialiasing algorithms for conic sections : a GCE analysis." Thesis, London South Bank University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434558.

Full text
25

Chernoglazov, Alexander Igorevich. "Improving Visualisation of Large Multi-Variate Datasets: New Hardware-Based Compression Algorithms and Rendering Techniques." Thesis, University of Canterbury. Computer Science and Software Engineering, 2012. http://hdl.handle.net/10092/7004.

Full text
Abstract:
Spectral computed tomography (CT) is a novel medical imaging technique that involves simultaneously counting photons at several energy levels of the x-ray spectrum to obtain a single multi-variate dataset. Visualisation of such data poses significant challenges due to its extremely large size and the need for interactive performance for scientific and medical end-users. This thesis explores the properties of spectral CT datasets and presents two algorithms for GPU-accelerated real-time rendering from compressed spectral CT data formats. In addition, we describe an optimised implementation of a volume raycasting algorithm on modern GPU hardware, tailored to the visualisation of spectral CT data.
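The volume raycasting mentioned above typically accumulates colour and opacity front-to-back along each ray and stops early once the pixel is effectively opaque; the sketch below shows that inner loop on the CPU, with the sampling and transfer-function helpers stubbed as assumptions.
```cpp
struct Rgba { float r = 0, g = 0, b = 0, a = 0; };

// Assumed helpers, stubbed so the sketch stands alone: sample the volume at a
// parametric position t along the ray, and map the sample through a transfer
// function to colour and opacity.
float sampleVolume(float /*t*/)           { return 0.0f; }
Rgba  transferFunction(float /*density*/) { return {}; }

// Front-to-back compositing with early ray termination: once accumulated
// opacity is nearly 1, further samples cannot change the pixel.
Rgba castRay(float tNear, float tFar, float stepSize)
{
    Rgba acc;
    for (float t = tNear; t < tFar && acc.a < 0.99f; t += stepSize) {
        Rgba s = transferFunction(sampleVolume(t));
        float w = (1.0f - acc.a) * s.a;   // remaining transparency times sample opacity
        acc.r += w * s.r;
        acc.g += w * s.g;
        acc.b += w * s.b;
        acc.a += w;
    }
    return acc;
}
```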
26

Yuan, Ping. "A performance evaluation framework of multi-resolution algorithms for real-time rendering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60363.pdf.

Full text
27

Chaudhary, Gautam. "RZSweep: a new volume-rendering technique for uniform rectilinear datasets." Master's thesis, Mississippi State: Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-04012003-141739.

Full text
28

Gurecká, Hana. "Vizualizace skalárních polí metodou back-to-front." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417159.

Full text
Abstract:
This master's thesis focuses on methods for visualizing scalar data on a fixed data grid, specifically data acquired with a fluorescence confocal microscope. The theoretical part of the text begins by introducing how confocal microscopes work and by placing the graphics methods under study into a mathematical context. The following chapter is devoted to deriving the volume rendering integral and the back-to-front method that follows from it. The theoretical part concludes with a presentation of methods suitable for visualizing three-dimensional scalar data using the back-to-front algorithm. The practical part then describes the implemented algorithm.
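The back-to-front method referred to above composites samples from the far end of the viewing ray towards the viewer with the over operator, C = c_i * a_i + (1 - a_i) * C; a minimal sketch over a precomputed, farthest-first list of classified samples (the sample layout is an assumption):
```cpp
#include <vector>

struct Sample { float r, g, b, a; };   // classified colour and opacity of one slice/voxel

// Back-to-front compositing: samples must be ordered farthest-first, and each
// step applies the "over" operator  C = c_i * a_i + (1 - a_i) * C.
Sample compositeBackToFront(const std::vector<Sample>& farthestFirst)
{
    Sample acc{0, 0, 0, 0};
    for (const Sample& s : farthestFirst) {
        acc.r = s.r * s.a + (1.0f - s.a) * acc.r;
        acc.g = s.g * s.a + (1.0f - s.a) * acc.g;
        acc.b = s.b * s.a + (1.0f - s.a) * acc.b;
        acc.a = s.a       + (1.0f - s.a) * acc.a;
    }
    return acc;
}
```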
29

Bordoloi, Udeepta. "Importance-driven algorithms for scientific visualization." The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1118952958.

Full text
30

Fuster, Criado Laura. "Linear and nonlinear room compensation of audio rendering systems." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/59459.

Full text
Abstract:
Common audio systems are designed with the intent of creating realistic and immersive scenarios that allow the user to experience a particular acoustic sensation that does not depend on the room in which the sound is perceived. However, acoustic devices and multichannel rendering systems working inside a room can impair the global audio effect and thus the 3D spatial sound. In order to preserve the spatial sound characteristics of multichannel rendering techniques, adaptive filtering schemes are presented in this dissertation to compensate for these electroacoustic effects and to achieve the immersive sensation of the desired acoustic system. Adaptive filtering offers a solution to the room equalization problem that is doubly interesting. First of all, it iteratively solves the room inversion problem, which can be computationally expensive to obtain when direct methods are used. Secondly, the use of adaptive filters allows the time-varying room conditions to be tracked. In this regard, adaptive equalization (AE) filters try to cancel the echoes due to the room effects. In this work, we consider this problem and propose effective and robust linear schemes to solve this equalization problem by using adaptive filters. To do this, different adaptive filtering schemes are introduced in the AE context. These filtering schemes are based on three strategies previously introduced in the literature: the convex combination of filters, the biasing of the filter weights and block-based filtering. More specifically, and motivated by the sparse nature of the acoustic impulse response and its corresponding optimal inverse filter, we introduce different adaptive equalization algorithms. In addition, since immersive audio systems usually require the use of multiple transducers, the multichannel adaptive equalization problem should also be taken into account when new single-channel approaches are presented, in the sense that they can be straightforwardly extended to the multichannel case. On the other hand, when dealing with audio devices, consideration must be given to the nonlinearities of the system in order to properly equalize the electroacoustic system. For that purpose, we propose a novel nonlinear filtered-x approach to compensate for both room reverberation and nonlinear distortion with memory caused by the amplifier and loudspeaker devices. Finally, it is important to validate the proposed algorithms in a real-time implementation. Thus, some initial research results demonstrate that an adaptive equalizer can be used to compensate for room distortions.
Fuster Criado, L. (2015). Linear and nonlinear room compensation of audio rendering systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59459
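As a rough illustration of the adaptive-filter building block underlying the equalization schemes described above, here is a plain normalized-LMS coefficient update in generic textbook form; it is not the convex-combination, biased, block-based or filtered-x variant proposed in the thesis, and all names are illustrative.
```cpp
#include <cstddef>
#include <vector>

// One normalized-LMS (NLMS) update step for an FIR equalizer.
//   w   : filter coefficients (updated in place)
//   x   : most recent input samples, x[0] newest, same length as w
//   d   : desired sample (e.g. a delayed version of the source signal)
//   mu  : step size, 0 < mu < 2 for stability
// Returns the a-priori error e = d - y.
double nlmsStep(std::vector<double>& w, const std::vector<double>& x,
                double d, double mu, double eps = 1e-8)
{
    double y = 0.0, energy = 0.0;
    for (std::size_t k = 0; k < w.size(); ++k) {
        y      += w[k] * x[k];      // filter output
        energy += x[k] * x[k];      // input energy for normalization
    }
    double e = d - y;
    double g = mu * e / (energy + eps);
    for (std::size_t k = 0; k < w.size(); ++k)
        w[k] += g * x[k];           // gradient-descent style coefficient update
    return e;
}
```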
31

Sievert, Andreas. "Tools and Algorithms for Classification in Volume Rendering of Dual Energy Data Sets." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-95286.

Full text
Abstract:
In the last few years, technology within the medical imaging sector has made many advances, which in turn have opened many new possibilities. One such recent advance is the development of imaging with data from dual energy computed tomography, CT, scanners. However, with new possibilities come new challenges. One challenge, discussed in this thesis, is rendering images created from two volumes in a way that is efficient with respect to the classification of the data. Focus lies on investigating how dual energy data sets can be classified in order to fully use the potential of having volumes from two different energy levels. A critical asset in this investigation is the ability to utilize a transfer function description that extends into two dimensions. One such transfer function description is presented in detail. With this two-dimensional description comes the need for a new way to interact with the transfer function. How the interaction between a user and the transfer function description was implemented for Siemens Corporate Research in Princeton, NJ is also discussed in this thesis, as are the classification results of rendering dual energy data. These results show that it is possible to classify blood vessels correctly when rendering dual energy computed tomography angiography, CTA, data.
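The two-dimensional transfer function description discussed above can be thought of as a lookup table indexed by the pair of values a voxel has at the two energy levels; the sketch below shows that classification step with an assumed table layout and value range, not the actual implementation described in the thesis.
```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Rgba { float r, g, b, a; };

// A 2D transfer function: one RGBA entry per (low-energy, high-energy) value pair.
class TransferFunction2D {
public:
    TransferFunction2D(std::size_t bins, float minHU, float maxHU)
        : bins_(bins), minHU_(minHU), maxHU_(maxHU),
          table_(bins * bins, Rgba{0, 0, 0, 0}) {}

    Rgba& at(std::size_t i, std::size_t j) { return table_[j * bins_ + i]; }

    // Classify one dual-energy voxel by looking up its (E1, E2) value pair.
    Rgba classify(float lowEnergyHU, float highEnergyHU) const {
        std::size_t i = toBin(lowEnergyHU);
        std::size_t j = toBin(highEnergyHU);
        return table_[j * bins_ + i];
    }

private:
    std::size_t toBin(float v) const {
        float t = (v - minHU_) / (maxHU_ - minHU_);
        t = std::clamp(t, 0.0f, 1.0f);
        return static_cast<std::size_t>(t * (bins_ - 1));
    }

    std::size_t bins_;
    float minHU_, maxHU_;
    std::vector<Rgba> table_;
};
```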
32

Mendel, Thomas [Verfasser], and Stefan [Akademischer Betreuer] Funke. "Improved algorithms for map rendering and route planning / Thomas Mendel ; Betreuer: Stefan Funke." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2020. http://d-nb.info/1218078774/34.

Full text
33

Fütterling, Valentin [Verfasser], Achim [Akademischer Betreuer] Ebert, and Bernd [Akademischer Betreuer] Hamann. "Scalable Algorithms for Realistic Real-time Rendering / Valentin Fütterling ; Achim Ebert, Bernd Hamann." Kaiserslautern : Technische Universität Kaiserslautern, 2019. http://d-nb.info/1196091048/34.

Full text
34

Li, Bo. "Real-time Simulation and Rendering of Large-scale Crowd Motion." Thesis, University of Canterbury. Computer Science and Software Engineering, 2013. http://hdl.handle.net/10092/7870.

Full text
Abstract:
Crowd simulations are attracting increasing attention from both academia and industry and are implemented across a vast range of applications, from scientific demonstrations to video games and films. As such, the demand for greater realism in their aesthetics and the number of agents involved is always growing. A successful crowd simulation must simulate large numbers of pedestrians' behaviours as realistically as possible in real-time. The thesis looks at two important aspects of crowd simulation and real-time animation. First, this thesis introduces a new data structure called the Extended Oriented Bounding Box (EOBB) and related methods for fast collision detection and obstacle avoidance in the simulation of crowd motion in virtual environments. The EOBB is extended to contain a region whose size is defined based on the instantaneous velocity vector, thus allowing a bounding volume representation of both geometry and motion. Such a representation is also found to be highly effective in motion planning using the location of vertices of bounding boxes in the immediate neighbourhood of the current crowd member. Second, we present a detailed analysis of the effectiveness of spatial subdivision data structures, specifically for large-scale crowd simulation. For large-scale crowd simulation, the computational time for collision detection is huge, and many studies use spatial partitioning data structures to reduce it; such studies describe the strengths and weaknesses of individual structures, but few compare multiple methods in an effort to present the best solution. This thesis attempts to address this by implementing and comparing four popular spatial partitioning data structures with the EOBB.
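The EOBB idea of growing a bounding volume along the instantaneous velocity can be illustrated, in axis-aligned form for brevity, as an overlap test between boxes extended by each agent's displacement over a look-ahead interval; this is an assumed simplification of the thesis's oriented boxes, with illustrative names.
```cpp
#include <algorithm>

struct Vec2 { float x, y; };

struct Box2D {           // axis-aligned box: min/max corners
    Vec2 lo, hi;
};

// Grow an agent's box along its velocity so the volume covers both the current
// geometry and the region it will sweep during the next lookAhead seconds.
Box2D extendByVelocity(Box2D b, Vec2 velocity, float lookAhead)
{
    Vec2 d{velocity.x * lookAhead, velocity.y * lookAhead};
    b.lo.x = std::min(b.lo.x, b.lo.x + d.x);
    b.lo.y = std::min(b.lo.y, b.lo.y + d.y);
    b.hi.x = std::max(b.hi.x, b.hi.x + d.x);
    b.hi.y = std::max(b.hi.y, b.hi.y + d.y);
    return b;
}

// Two agents only need detailed collision handling if their extended boxes overlap.
bool overlaps(const Box2D& a, const Box2D& b)
{
    return a.lo.x <= b.hi.x && b.lo.x <= a.hi.x &&
           a.lo.y <= b.hi.y && b.lo.y <= a.hi.y;
}
```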
35

Bonneel, Nicolas. "Audio and Visual Rendering with Perceptual Foundations." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00432117.

Full text
Abstract:
Realistic visual and audio rendering still remains a technical challenge. Indeed, typical computers do not cope with the increasing complexity of today's virtual environments, both for audio and visuals, and the graphic design of such scenes requires talented artists. In the first part of this thesis, we focus on audiovisual rendering algorithms for complex virtual environments which we improve using human perception of combined audio and visual cues. In particular, we developed a full perceptual audiovisual rendering engine integrating efficient impact sound rendering improved by using our perception of audiovisual simultaneity, a way to cluster sound sources using humans' spatial tolerance between a sound and its visual representation, and a combined level of detail mechanism for both audio and visuals, varying the impact sound quality and the visually rendered material quality of the objects. All our crossmodal effects were supported by prior work in neuroscience and demonstrated using our own experiments in virtual environments. In a second part, we use information present in photographs in order to guide a visual rendering. We thus provide two different tools to assist “casual artists” such as gamers or engineers. The first extracts the visual hair appearance from a photograph, thus allowing the rapid customization of avatars in virtual environments. The second allows for fast previewing of 3D scenes reproducing the appearance of an input photograph following a user's 3D sketch. We thus propose a first step toward crossmodal audiovisual rendering algorithms and develop practical tools for non-expert users to create virtual worlds using a photograph's appearance.
36

Petring, Ralf [Verfasser]. "Multi-Algorithmen-Rendering : Darstellung heterogener 3-D-Szenen in Echtzeit / Ralf Petring." Paderborn : Universitätsbibliothek, 2014. http://d-nb.info/1048384845/34.

Full text
37

Chen, Jiajian. "Non-photorealistic rendering with coherence for augmented reality." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45749.

Full text
Abstract:
A seamless blending of the real and virtual worlds is key to increased immersion and improved user experiences for augmented reality (AR). Photorealistic and non-photorealistic rendering (NPR) are two ways to achieve this goal. Non-photorealistic rendering creates an abstract and stylized version of both the real and virtual world, making them indistinguishable. This could be particularly useful in some applications (e.g., AR/VR aided machine repair, or for virtual medical surgery) or for certain AR games with artistic stylization. Achieving temporal coherence is a key challenge for all NPR algorithms. Rendered results are temporally coherent when each frame smoothly and seamlessly transitions to the next one without visual flickering or artifacts that distract the eye from perceived smoothness. NPR algorithms with coherence are interesting in both general computer graphics and AR/VR areas. Rendering stylized AR without coherence processing causes the final results to be visually distracting. While various NPR algorithms with coherence support have been proposed in general graphics community for video processing, many of these algorithms require thorough analysis of all frames of the input video and cannot be directly applied to real-time AR applications. We have investigated existing NPR algorithms with coherence in both general graphics and AR/VR areas. These algorithms are divided into two categories: Model Space and Image Space. We present several NPR algorithms with coherence for AR: a watercolor inspired NPR algorithm, a painterly rendering algorithm, and NPR algorithms in the model space that can support several styling effects.
38

Hedberg, Vilhelm. "Evaluation of Hair Modeling, Simulation and Rendering Algorithms for a VFX Hair Modeling System." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-65592.

Full text
Abstract:
Creating realistic virtual hair consists of several major areas: creating the geometry, moving the hair strands realistically and rendering the hair. In this thesis, a background survey covering each one of these areas is given. A node-based, procedural hair system is presented, which utilizes the capabilities of modern GPUs. The hair system is implemented as a plugin for Autodesk Maya, and a user interface is developed to allow the user to control the various parameters. A number of nodes are developed to create effects such as clumping, noise and frizz. The proposed system can easily handle a variety of hairstyles, and pre-renders the result in real-time using a local shading model.
39

Moreno, Valles Fernando Antonio. "Diseño de un algoritmo para rendering eficiente de estructuras proteicas de gran escala." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2014. http://tesis.pucp.edu.pe/repositorio/handle/123456789/5744.

Full text
Abstract:
Today's 3D computer graphics software gives us the ability to model and visualize objects in situations or at sizes that would not have been possible before, and even to generate the visualization of these objects in real time, which makes it possible to create applications that use this capability to add interactivity with the modelled objects. The ability to give the user interactivity with the generated graphics is very important, but it is not achieved if the application's response time is too long; for example, a video game console demands at least 30 fps (frames per second), and a lower value makes movements jerky and destroys the sensation of motion. This makes a fluid user experience one of the main goals of interactive rendering. One of the main problems encountered in this area is visualizing a large number of polygons: owing to limitations of memory or processing power, the more polygons to be drawn on screen, the longer the processing time needed to generate the images. One particular application is the visualization of protein structures. Some proteins have very large structures; the number of polygons required to represent all the elements and connections of these molecules, together with the need to visualize large numbers of molecules simultaneously, reduces performance and interactivity during visualization. This project proposes an algorithmic structure for efficient rendering of large numbers of proteins in a 3D viewer that shows their three-dimensional structure and allows real-time interaction with the model. The proposed structure uses the hardware acceleration present in modern graphics cards through a real-time graphics API, OpenGL, with which optimizations that exploit the proposed structure are applied. To make the rendering process faster, the models are kept at a low polygon count. Because the elements are repetitive (spheres and cylinders), their geometry is reused through a structure such as the Scene Graph, so that memory usage is lower, and through another structure, the Octree, which allows the elements that must be processed during rendering to be discriminated. Combining all of the above, the proposed structure allows large protein structures, or large numbers of them, to be visualized while maintaining the degree of interactivity needed to facilitate their study, as well as an aesthetic appearance that allows the elements to be recognized, without reducing performance.
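The geometry reuse described above can be pictured as one shared sphere mesh and one shared cylinder mesh referenced by lightweight instance nodes of a scene graph, with an octree used to discard instances outside the view before drawing; the sketch below is an assumed simplification with illustrative names, not code from the thesis.
```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {                       // uploaded once to the GPU and shared
    std::size_t vertexCount = 0;    // (vertex/index buffer handles omitted here)
};

// A scene-graph leaf: a transform plus a reference to shared geometry, so
// thousands of atoms and bonds reuse the same sphere and cylinder meshes.
struct Instance {
    const Mesh* mesh;
    Vec3 position;
    float scale;
};

// A crude octree node: instances are grouped per node, so whole subtrees
// outside the view frustum are skipped before any drawing happens.
struct OctreeNode {
    Vec3 centre;
    float halfSize;
    std::vector<const Instance*> instances;
    std::vector<OctreeNode> children;      // empty for leaves
};

// Hypothetical frustum test, stubbed here so the sketch is self-contained.
bool nodeVisible(const OctreeNode&) { return true; }   // placeholder: always visible

void collectVisible(const OctreeNode& node, std::vector<const Instance*>& out)
{
    if (!nodeVisible(node)) return;        // discard the whole subtree
    out.insert(out.end(), node.instances.begin(), node.instances.end());
    for (const OctreeNode& child : node.children)
        collectVisible(child, out);
}
```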
40

Cook, Adrian Roger. "Neural network evaluation of the effectiveness of rendering algorithms for real-time 3D computer graphics." Thesis, Nottingham Trent University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302404.

Full text
41

Lima, Rodrigo Freitas. "Reconstrução em 3D de imagens DICOM cranio-facial com determinação de volumetria de muco nos seios paranasais." Universidade Presbiteriana Mackenzie, 2015. http://tede.mackenzie.br/jspui/handle/tede/1465.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Paranasal sinuses are important objects of study for the diagnosis of rhinosinusitis, and some studies have related the incidence of asthma in adulthood to allergic rhinitis in childhood. Many applications address various parts of the human body, taking a CT or MRI scan as input and returning information about the observed region of interest, such as its volume and area. Mucus accumulated in the paranasal sinuses is one region of interest for which methods to compute volume and area have not yet been implemented. At present, patient monitoring is done visually, depending largely on the evaluator's perception. We therefore seek to implement more accurate metrics that ease medical follow-up of the patient and can help prevent the worsening of rhinitis, by developing mechanisms for visual and numerical comparison through which the progress of treatment can be observed. This work contains a detailed study of how certain existing techniques, combined into one methodology, can segment and quantify the mucus accumulated in the maxillary sinuses. In addition to techniques such as thresholding, Gaussian filtering, mathematical morphology and metallic artifact reduction during processing and segmentation, MUNC and DTA for computing volume and area, and visualization techniques such as Marching Cubes, adjustments to the algorithm were also necessary to limit the region of interest where thresholding combined with the Gaussian filter was not effective at preserving edges. The application uses two open-source platforms: ITK for processing and VTK for visualization. The results demonstrate that segmentation and the volume calculation can be performed with these platforms, and that the methodology used is adequate for solving this problem.
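After segmentation, the volume figure reported by such a pipeline reduces to counting the voxels marked as belonging to the region of interest and multiplying by the physical voxel size; the sketch below shows that idea with plain arrays and an assumed intensity range, rather than the ITK filters actually used in the work.
```cpp
#include <cstddef>
#include <vector>

// Binarize a CT volume: voxels whose intensity falls inside [lo, hi] are
// marked as belonging to the region of interest (e.g. mucus-like densities).
std::vector<unsigned char> threshold(const std::vector<short>& volume,
                                     short lo, short hi)
{
    std::vector<unsigned char> mask(volume.size(), 0);
    for (std::size_t i = 0; i < volume.size(); ++i)
        mask[i] = (volume[i] >= lo && volume[i] <= hi) ? 1 : 0;
    return mask;
}

// Volume of the segmented region: number of marked voxels times the physical
// size of one voxel (spacing in mm gives the result in cubic millimetres).
double segmentedVolumeMm3(const std::vector<unsigned char>& mask,
                          double spacingX, double spacingY, double spacingZ)
{
    std::size_t count = 0;
    for (unsigned char v : mask) count += v;
    return static_cast<double>(count) * spacingX * spacingY * spacingZ;
}
```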
APA, Harvard, Vancouver, ISO, and other styles
42

Balciunas, Daniel Alberto. "Traçado de raios interativo assistido pela unidade de processamento gráfico." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-03072007-182637/.

Full text
Abstract:
Known for its high computational cost and for the high quality of the rendered images, ray tracing has recently been explored by the scientific community in the search for interactive and constant frame rates. Aiming at a new way of accelerating ray tracing, a new approach called GPU-assisted ray tracing is defined in this work. Its objective is to be a first step in the formulation of ray tracing algorithms that take better advantage of the graphics processing units commonly found in today's personal computers. Based on this approach, several contributions are proposed. Besides presenting the basic concepts of ray tracing and a literature review of the most relevant topics, this work also explains and exemplifies some classical algorithms that are used as a basis for the new algorithms presented here. As the main contribution, we propose and implement an algorithm that calculates the initial points for traversing spatial subdivision structures when tracing primary rays through height maps, starting from a distance buffer rendered by the video card. A second algorithm is also proposed, in which an object buffer is rendered by the video card to accelerate the traversal of rays through spatial subdivision structures in scenes with generic primitives. Individual contributions are also made to the rendering of height maps through the definition of the following algorithms: the analytic bilinear reconstruction algorithm, the double bi-quadratic interpolation algorithm, the prediction by inclined height planes algorithm, and the level-of-detail mapping for surface reconstruction in voxel-based models. A brief discussion of the future of GPU-assisted ray tracing algorithms and of their implementation on graphics clusters is presented at the end of this work, exploring new possibilities for its continuation and related lines of research.
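To illustrate the core idea of the main contribution (a per-pixel starting distance that lets primary rays skip empty space above a height field), here is a minimal sketch assuming a toy height map and a hypothetical distance value; a real implementation would rasterize bounding geometry on the GPU to obtain that value per pixel.

    import numpy as np

    # Minimal sketch: a per-pixel distance (a conservative lower bound produced
    # elsewhere, e.g. by rasterizing bounding geometry on the GPU) lets the
    # height-field ray march start close to the surface instead of at the camera.

    def march_heightfield(origin, direction, height, t_start, t_max=100.0, dt=0.05):
        """Step along the ray from t_start until it drops below the height field."""
        t = t_start
        while t < t_max:
            p = origin + t * direction
            ix, iy = int(p[0]) % height.shape[0], int(p[1]) % height.shape[1]
            if p[2] <= height[ix, iy]:
                return t          # hit
            t += dt
        return None               # miss

    height = np.random.rand(64, 64) * 2.0          # toy height map
    origin = np.array([0.0, 0.0, 5.0])
    direction = np.array([0.3, 0.4, -1.0])
    direction /= np.linalg.norm(direction)

    # Without assistance the march starts at t = 0; with a distance buffer it
    # starts at the rasterized lower bound, skipping the empty space above.
    t_from_distance_buffer = 2.5                   # hypothetical per-pixel value
    hit = march_heightfield(origin, direction, height, t_from_distance_buffer)
    print("hit at t =", hit)
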
APA, Harvard, Vancouver, ISO, and other styles
43

Habigt, Julian Albert [Verfasser], Klaus [Akademischer Betreuer] Diepold, Klaus [Gutachter] Diepold, and Walter [Gutachter] Stechele. "Hole-Filling Algorithms for Depth-Image-Based Rendering / Julian Albert Habigt ; Gutachter: Klaus Diepold, Walter Stechele ; Betreuer: Klaus Diepold." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1221279750/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Loyet, Raphaël. "Dynamic sound rendering of complex environments." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995328.

Full text
Abstract:
Many studies have been carried out over the last twenty years in the field of auralization, which consists in making the results of an acoustic simulation audible. These studies have mainly focused on propagation algorithms and on the restitution of the acoustic field in complex environments. Currently, much work addresses real-time sound rendering. This thesis tackles the problem of dynamic sound rendering of complex environments along four axes: sound wave propagation, signal processing, spatial perception of sound, and software optimization. In the field of propagation, a method for analyzing the variety of algorithms found in the literature is proposed. From this analysis, two algorithms dedicated to the real-time restitution of the specular and diffuse fields are extracted. In the field of signal processing, restitution is performed with an optimized binaural spatialization algorithm for the most significant specular paths and a convolution algorithm on the graphics card for the restitution of the diffuse field. The most significant paths are extracted with a perceptual model based on the temporal and spatial masking of the specular contributions. Finally, the implementation of these algorithms on recent parallel architectures, taking into account new multi-core processors and new graphics cards, is presented.
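A minimal sketch of the overall structure described here (per-path processing for the significant specular contributions, bulk convolution for the diffuse field), under the simplifying assumption that each specular path reduces to a delay and a gain per ear; real binaural spatialization uses HRTF filtering, and the diffuse convolution would run on the GPU.

    import numpy as np

    fs = 44100
    dry = np.random.randn(fs)                      # 1 s of toy source signal

    # (delay in seconds, left gain, right gain) for each hypothetical path.
    paths = [(0.003, 0.9, 0.6), (0.011, 0.4, 0.5), (0.024, 0.2, 0.3)]

    out = np.zeros((2, len(dry) + fs // 10))
    for delay, gl, gr in paths:                    # specular paths, one by one
        d = int(delay * fs)
        out[0, d:d + len(dry)] += gl * dry
        out[1, d:d + len(dry)] += gr * dry

    # Diffuse field: one long reverberation tail applied by convolution,
    # the part the thesis offloads to the graphics card.
    tail = np.random.randn(fs // 2) * np.exp(-np.linspace(0, 6, fs // 2))
    diffuse = np.convolve(dry, tail)[:out.shape[1]]
    out += 0.05 * diffuse

    print("binaural output shape:", out.shape)
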
APA, Harvard, Vancouver, ISO, and other styles
45

Hörr, Christian. "Algorithmen zur automatisierten Dokumentation und Klassifikation archäologischer Gefäße." Doctoral thesis, Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-71895.

Full text
Abstract:
The topic of this dissertation is the development of algorithms and methods aiming at supporting the daily scientific work of archaeologists. Part I covers ideas for accelerating the extremely time-consuming and often tedious documentation of finds. It is argued that digitizing the objects with 3D laser or structured-light scanners is economically reasonable and above all of high quality, even though such systems are still quite expensive. Using advanced non-photorealistic visualization techniques, meaningful but at the same time objective pictures can be generated from the virtual models. Moreover, specifically for vessels a fully automatic and comprehensive feature extraction is possible. In Part II, we deal with the problem of automated vessel classification. After a theoretical consideration of the type concept in archaeology, we present a methodology that employs approaches from the fields of both unsupervised and supervised machine learning. Particularly the latter have proven to be very valuable for assigning unknown entities to an existing typology, but also for challenging the structure of the typology itself. All the analyses are exemplified by the Bronze Age cemeteries of Kötitz, Altlommatzsch (both district of Meißen), Niederkaina (district of Bautzen), and Tornow (district of Oberspreewald-Lausitz); finally, we were even able to discover archaeologically relevant relationships between these sites.
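As an illustration of the supervised part of such a pipeline, the following sketch assigns unknown vessels to types of an existing typology with scikit-learn; the feature set (random placeholders standing in for geometric measurements) and the choice of a random forest are assumptions for the sketch, not the methods of the thesis.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_known = rng.normal(size=(120, 8))        # e.g. height, rim diameter, ratios...
    y_known = rng.integers(0, 4, size=120)     # type labels from the typology

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_known, y_known)

    X_unknown = rng.normal(size=(5, 8))        # features of newly documented vessels
    print("predicted types:", clf.predict(X_unknown))
    print("class probabilities:\n", clf.predict_proba(X_unknown).round(2))
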
APA, Harvard, Vancouver, ISO, and other styles
46

Borikar, Siddharth Rajkumar. "FAST ALGORITHMS FOR FRAGMENT BASED COMPLETION IN IMAGES OF NATURAL SCENES." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4424.

Full text
Abstract:
Textures are widely used in computer graphics to represent fine visual details and produce realistic-looking images. Often it is necessary to remove some foreground object from the scene; removing that portion creates one or more holes in the texture image, and these holes need to be filled to complete the image. Various methods, such as clone brush strokes and compositing processes, are used to carry out this completion, but they require user skill. Texture synthesis can also be used to complete regions where the texture is stationary or structured. Reconstruction methods can fill in large missing regions by interpolation, while inpainting is suitable for relatively small, smooth and non-textured regions. A number of other approaches focus on the edge and contour completion aspects of the problem. In this thesis we present a novel approach to this image completion problem. Our approach focuses on image-based completion, with no knowledge of the underlying scene. In natural images there is a strong horizontal orientation of the texture/color distribution, and our proposed algorithm exploits this fact to fill in missing regions of natural images. We follow the principle of figural familiarity and use the image itself as our training set to complete it.
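A toy sketch of fragment-based completion that exploits horizontal structure: each hole pixel is filled by matching its left-hand context against known segments of the same row and copying the pixel that follows the best match. The grayscale image, fixed context width and greedy left-to-right order are simplifying assumptions, not the thesis algorithm.

    import numpy as np

    def fill_row(row, mask_row, ctx=8):
        row = row.copy()
        for x in np.where(mask_row)[0]:                # hole pixels, left to right
            left = row[max(0, x - ctx):x]              # known (or already filled) context
            best_val, best_err = row[x - 1], np.inf
            for s in range(len(left), len(row) - 1):
                if mask_row[s - len(left) + 1:s + 2].any():
                    continue                           # window or next pixel unknown
                cand = row[s - len(left) + 1:s + 1]
                err = np.sum((cand - left) ** 2)
                if err < best_err:
                    best_val, best_err = row[s + 1], err
            row[x] = best_val
            mask_row[x] = False
        return row

    img = np.random.rand(4, 64)
    mask = np.zeros_like(img, dtype=bool)
    mask[:, 30:38] = True                              # the hole to complete
    filled = np.array([fill_row(r, m) for r, m in zip(img, mask)])
    print("remaining hole pixels:", mask.sum())
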
M.S.
School of Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
47

ur, Réhman Shafiq. "Expressing emotions through vibration for perception and control." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990.

Full text
Abstract:
This thesis addresses a challenging problem: how to let the visually impaired "see" others' emotions. We, human beings, depend heavily on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, and so on. People use the emotional information in facial expressions to switch between conversation topics and to determine the attitudes of individuals. Missing the emotional information carried by facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social settings. To enhance the social interaction ability of the visually impaired, this thesis works on the scientific topic of expressing human emotions through vibrotactile patterns. Delivering human emotions through touch is quite challenging, since our touch channel is very limited. We first investigated how to render emotions through a single vibrator. We developed a real-time "lipless" tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals, for example rendering live football games through vibration on the mobile phone to improve the user's communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed technology that enables the visually impaired to interpret human emotions directly. This was achieved through machine vision techniques and a vibrotactile display. The display consists of a matrix of vibration actuators mounted on the back of a chair; the actuators are activated sequentially to provide dynamic emotional information. The research focus has been on finding a global, analytical and semantic representation of facial expressions to replace the state-of-the-art facial action coding system (FACS) approach. We propose using the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extending from the center, and blends of emotions lie between those curves, so they can be defined analytically by the positions of the main curves. The manifold is the "Braille code" of emotions. The developed methodology and technology have been extended to build assistive wheelchair systems that aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. people lacking fine motor control skills), who cannot access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide the user of the wheelchair with action information from the gestures and with system status information, which is very important for enhancing the usability of such an assistive system. The current research not only provides a foundation stone for vibrotactile rendering systems based on object localization, but is also a concrete step towards a new dimension of human-machine interaction.
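A very small sketch of the display side of such a system: mapping an (emotion, intensity) pair to a duty-cycle pattern on a matrix of vibration actuators. The 4x4 grid, the emotion set and the rule that higher intensity activates more actuators further from the starting point are all assumptions used only to illustrate the idea of an emotion growing outward on the manifold.

    import numpy as np

    GRID = (4, 4)

    PATTERNS = {                      # hypothetical actuator orderings per emotion
        "happy": [(1, 1), (1, 2), (0, 2), (0, 3)],
        "sad":   [(2, 1), (2, 2), (3, 2), (3, 3)],
        "angry": [(1, 1), (2, 1), (2, 2), (3, 1)],
    }

    def actuator_frame(emotion, intensity):
        """Return a duty-cycle grid in [0, 1]; higher intensity activates more
        actuators along the pattern, with increasing strength."""
        frame = np.zeros(GRID)
        path = PATTERNS[emotion]
        n_active = max(1, int(round(intensity * len(path))))
        for i, (r, c) in enumerate(path[:n_active]):
            frame[r, c] = intensity * (i + 1) / n_active
        return frame

    print(actuator_frame("happy", 0.75))
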
Taktil Video
APA, Harvard, Vancouver, ISO, and other styles
48

Thanawala, Rajiv P. "Development of G-net (a software system for graph theory & algorithms) with special emphasis on graph rendering on raster output devices." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834618.

Full text
Abstract:
In this thesis we describe the development of software functions that render the graphical and textual information of G-Net (a software system for graph theory and algorithms) onto various raster output devices. Graphs are mathematical structures that are used to model very diverse systems such as networks, VLSI designs, chemical compounds and many other systems where relations between objects play an important role. The study of graph theory problems requires many manipulative techniques, and a software system (such as G-Net) that can automate these techniques is a very good aid to graph theorists and professionals. The project G-Net, headed by Prof. Kunwarjit S. Bagga of the computer science department, has the goal of developing a software system with three main functions: learning the basics of graph theory, drawing and manipulating graphs, and executing graph algorithms. The thesis begins with an introduction to graph theory, followed by a brief description of the evolution of the G-Net system and its current status. To print on various printers, the G-Net system translates all printable information into PostScript files, and a major part of this thesis concentrates on this translation. First, the necessity of a standard format for the printable information is discussed and the choice of PostScript as that standard is justified. Next, the design issues of the translator and the translation algorithm are discussed in detail, and the translation process for each category of printable information is explained. Issues of printing the resulting PostScript files on different printers are dealt with at the end.
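A minimal sketch of the kind of translation described above: a graph drawing (node positions and edges) emitted as a PostScript program. The coordinates, page setup and labelling are hypothetical and far simpler than the G-Net translator itself.

    # Nodes as labelled circles, edges as straight lines, written to a .ps file.
    nodes = {"a": (100, 600), "b": (300, 600), "c": (200, 450)}
    edges = [("a", "b"), ("b", "c"), ("c", "a")]

    def graph_to_postscript(nodes, edges, radius=10):
        lines = ["%!PS-Adobe-3.0",
                 "/Helvetica findfont 12 scalefont setfont",
                 "1 setlinewidth"]
        for u, v in edges:                                # edges first, under the nodes
            (x1, y1), (x2, y2) = nodes[u], nodes[v]
            lines.append(f"newpath {x1} {y1} moveto {x2} {y2} lineto stroke")
        for name, (x, y) in nodes.items():                # then the labelled circles
            lines.append(f"newpath {x} {y} {radius} 0 360 arc stroke")
            lines.append(f"{x + radius + 2} {y} moveto ({name}) show")
        lines.append("showpage")
        return "\n".join(lines)

    with open("graph.ps", "w") as f:
        f.write(graph_to_postscript(nodes, edges))
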
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
49

Hošek, Václav. "Distributed Ray Tracing." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235956.

Full text
Abstract:
Distributed ray tracing, also called distribution ray tracing or stochastic ray tracing, is a refinement of ray tracing that allows the rendering of "soft" phenomena such as soft shadows from area lights, depth of field and motion blur.
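To make the idea concrete, here is a minimal sketch of the soft-shadow case: instead of one shadow ray towards a point light, several rays are cast towards random points on an area light and averaged. The scene (a single sphere occluder and a square light) is a made-up example, not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    def sphere_blocks(origin, direction, centre, radius):
        """True if the ray hits the sphere in front of the origin."""
        oc = origin - centre
        b = np.dot(oc, direction)
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        return disc > 0 and -b - np.sqrt(disc) > 0

    def soft_shadow(point, light_corner, light_u, light_v, n_samples=64):
        visible = 0
        for _ in range(n_samples):
            sample = light_corner + rng.random() * light_u + rng.random() * light_v
            direction = sample - point
            direction /= np.linalg.norm(direction)
            if not sphere_blocks(point, direction, np.array([0.0, 1.5, 0.0]), 0.5):
                visible += 1
        return visible / n_samples                    # fraction of the light visible

    p = np.array([0.2, 0.0, 0.0])                     # shaded surface point
    print("visibility:", soft_shadow(p, np.array([-1.0, 3.0, -1.0]),
                                     np.array([2.0, 0.0, 0.0]),
                                     np.array([0.0, 0.0, 2.0])))
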
APA, Harvard, Vancouver, ISO, and other styles
50

Rypák, Andrej. "Raytracing virtuálních grafických scén." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236470.

Full text
Abstract:
This thesis is dedicated to rendering methods based on ray tracing, primarily classical ray tracing. Besides giving a brief historical overview of algorithms from this family, it presents in detail all the essential tools, techniques and physics needed to design a rendering application. A significant part of the document consists of the implementation of a photorealistic rendering application for interactive 3D virtual graphics scenes. The focus is on rendering without using any additional model information. The thesis includes descriptions and explanations of specific problems and their solutions.
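For readers unfamiliar with the classical algorithm the abstract refers to, here is a minimal sketch of its core loop: one primary ray per pixel, nearest sphere intersection, simple Lambertian shading from a single point light. The scene, camera and shading model are illustrative assumptions only.

    import numpy as np

    spheres = [((0.0, 0.0, -3.0), 1.0, (1.0, 0.2, 0.2)),
               ((1.5, 0.5, -4.0), 0.7, (0.2, 0.8, 0.3))]
    light = np.array([3.0, 3.0, 0.0])

    def trace(origin, direction):
        hit_t, hit = np.inf, None
        for centre, radius, colour in spheres:        # nearest intersection
            oc = origin - np.array(centre)
            b = np.dot(oc, direction)
            disc = b * b - (np.dot(oc, oc) - radius * radius)
            if disc > 0:
                t = -b - np.sqrt(disc)
                if 1e-4 < t < hit_t:
                    hit_t, hit = t, (np.array(centre), np.array(colour))
        if hit is None:
            return np.zeros(3)                        # background
        centre, colour = hit
        p = origin + hit_t * direction
        n = (p - centre) / np.linalg.norm(p - centre)
        l = (light - p) / np.linalg.norm(light - p)
        return colour * max(np.dot(n, l), 0.0)        # Lambertian term

    w, h = 64, 48
    image = np.zeros((h, w, 3))
    eye = np.zeros(3)
    for y in range(h):
        for x in range(w):
            d = np.array([(x - w / 2) / w, (h / 2 - y) / h, -1.0])
            image[y, x] = trace(eye, d / np.linalg.norm(d))
    print("rendered", image.shape, "min/max:", image.min(), round(image.max(), 3))
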
APA, Harvard, Vancouver, ISO, and other styles